Science.gov

Sample records for acoustic video images

  1. Inferences of Particle Size and Composition From Video-like Images Based on Acoustic Data: Grotto Plume, Main Endeavor Field

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Rona, P. A.; Santilli, K.; Dastur, J.; Silver, D.

    2004-12-01

    Optical and acoustic scattering from particles in a seafloor hydrothermal plume can be related if the particle properties and scattering mechanisms are known. We assume Rayleigh backscattering of sound and Mie forward scattering of light. We then use the particle concentrations implicit in the observed acoustic backscatter intensity to recreate the optical image a camera would see given a particular lighting level. The motivation for this study is to discover what information on particle size and composition in the buoyant plume can be inferred from a comparison of the calculated optical images (based on acoustic data) with actual video images from the acoustic acquisition cruise and the IMAX film "Volcanoes of the Deep Sea" (Stephen Low Productions, Inc.). Because the geologists, biologists and oceanographers involved in the study of seafloor hydrothermal plumes all "see" plumes in different ways, an additional motivation is to create more realistic plume images from the acoustic data. By using visualization techniques, with realistic lighting models, we can convert the plume image from mechanical waves (sound) to electromagnetic waves (light). The resulting image depends on assumptions about the particle size distribution and composition. Conversion of the volume scattering coefficients from Rayleigh to Mie scattering is accomplished by an extinction scale factor that depends on the wavelengths of light and sound and on the average particle size. We also make an adjustment to the scattered light based on the particles' reflectivity (albedo) and color. We present a series of images of acoustic data for Grotto Plume, Main Endeavour Field (within the Endeavour ISS Site) using both realistic lighting models and traditional visualization techniques to investigate the dependence of the images on assumptions about particle composition and size. Sensitivity analysis suggests that the visibility of the buoyant plume increases as the intensity of supplied light increases.
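The acoustic-to-optical conversion described in this abstract can be sketched numerically. The toy model below is not the authors' actual method: it assumes Rayleigh acoustic backscatter (cross-section scaling as a^6 / lambda^4) and geometric-regime optical scattering (cross-section scaling as a^2), and every function and parameter name is invented for illustration.

```python
# Illustrative sketch only (not the authors' code): infer a relative optical
# brightness from an acoustic volume backscattering coefficient, under the
# scaling assumptions stated in the lead-in.

def acoustic_to_optical_brightness(sv, a, lam_a, albedo):
    """Relative optical brightness inferred from acoustic backscatter.

    sv     : acoustic volume backscattering coefficient (arbitrary units)
    a      : assumed mean particle radius
    lam_a  : acoustic wavelength
    albedo : particle reflectivity in [0, 1]
    """
    sigma_acoustic = a**6 / lam_a**4   # Rayleigh-regime acoustic cross-section scaling
    n = sv / sigma_acoustic            # implied particle number density
    sigma_optical = a**2               # geometric-optics optical cross-section scaling
    return n * sigma_optical * albedo  # relative brightness a camera might record
```

Because the model is linear in the backscatter coefficient, doubling the acoustic signal doubles the predicted brightness; the particle-size assumption enters only through the a^6 / a^2 ratio.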

  2. Video Toroid Cavity Imager

    SciTech Connect

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  3. Video image position determination

    NASA Astrophysics Data System (ADS)

    Christensen, W.; Anderson, F. L.; Kortegaard, B. L.

    1990-04-01

    The present invention generally relates to the control of video and optical information and, more specifically, to control systems utilizing video images to provide control. Accurate control of video images and laser beams is becoming increasingly important as the use of lasers for machine, medical, and experimental processes escalates. In AURORA, an installation at Los Alamos National Laboratory dedicated to laser fusion research, it is necessary to precisely control the path and angle of up to 96 laser beams. This invention is comprised of an optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
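The centroid step described in this abstract reduces to an intensity-weighted average over the pixels of a frame. The sketch below, with invented names, shows that step plus simple averaging over multiple noisy frames as a stand-in for the patent's Bernoulli-trials refinement:

```python
# Minimal illustrative sketch of beam-centroid determination from video frames.

def centroid(frame):
    """Intensity-weighted centroid (x, y) of a 2-D list of pixel values."""
    total = cx = cy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            cx += x * v
            cy += y * v
    return cx / total, cy / total

def mean_centroid(frames):
    """Average the per-frame centroids; with noise dithering the pixel
    quantization, the mean can resolve position below one pixel."""
    pts = [centroid(f) for f in frames]
    n = len(pts)
    return sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n
```

The controller would feed the resulting coordinates to the stepper motors until the centroid matches the target alignment.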

  4. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.

  5. Ultrasound Imaging System Video

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimensional image enlargement of the heart and other organs, muscles, and blood vessels. It is capable of high resolution imaging in a wide range of applications, both research and diagnostic, such as Echocardiography (ultrasound of the heart), abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.

  6. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Ridge 18-20°S, where hydrothermal plume signatures were previously perceived. DIDSON was equipped on the top of Shinkai6500 in order to obtain acoustic video images of hydrothermal plumes. In this cruise, seven dives of Shinkai6500 were conducted, and acoustic video images of the hydrothermal plumes were captured in three of them. These are among the few acoustic video images of hydrothermal plumes obtained to date. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

  7. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
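The per-block translation step described above is commonly done by block matching; the sketch below uses an exhaustive sum-of-absolute-differences (SAD) search, which is an illustrative stand-in rather than the patent's exact procedure, and the names and search radius are invented.

```python
# Illustrative block-matching sketch: find the shift (dx, dy) that best maps
# a square block of the key video field onto the new video field.

def block_translation(key, new, bx, by, bs, radius):
    """Return (dx, dy) minimizing SAD between the bs-by-bs block of `key`
    anchored at (bx, by) and the correspondingly shifted block in `new`."""
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sad = 0
            for y in range(bs):
                for x in range(bs):
                    sad += abs(key[by + y][bx + x] - new[by + y + dy][bx + x + dx])
            if best is None or sad < best[0]:
                best = (sad, dx, dy)
    return best[1], best[2]
```

Repeating this for each nested pixel block yields the displacement field from which magnification, rotation, and translation of the whole image can then be estimated.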

  8. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  9. Video Image Stabilization and Registration

    NASA Astrophysics Data System (ADS)

    Hathaway, David H.; Meyer, Paul J.

    2002-10-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  10. Video and image quality

    NASA Astrophysics Data System (ADS)

    Aldridge, Jim

    1995-09-01

    This paper presents some of the results of a UK government research program into methods of improving the effectiveness of CCTV surveillance systems. The paper identifies the major components of video security systems and primary causes of unsatisfactory images. A method is outlined for relating the picture detail limitations imposed by each system component to overall system performance. The paper also points out some possible difficulties arising from the use of emerging new technology.

  11. Enhancement of video images.

    PubMed

    Baily, N A; Nachazel, R J

    1980-04-01

    The enhancement of radiographic and fluoroscopic images using simple video analog techniques is described. In each instance, both the degree of enhancement and the features of the image to be enhanced are under the direct control of the radiologist. Noise is suppressed with a sharp cut-off, low-pass filter. Three types of analog circuits are discussed. One provides edge sharpening and contrast enhancement; one allows either black or white suppression, with expansion of the remaining shades of gray; and one provides an exponential response to selectable portions of the input signal. PMID:7360962

  12. Acoustic imaging system

    DOEpatents

    Smith, Richard W.

    1979-01-01

    An acoustic imaging system for displaying an object viewed by a moving array of transducers as the array is pivoted about a fixed point within a given plane. A plurality of transducers are fixedly positioned and equally spaced within a laterally extending array and operatively directed to transmit and receive acoustic signals along substantially parallel transmission paths. The transducers are sequentially activated along the array to transmit and receive acoustic signals according to a preestablished sequence. Means are provided for generating output voltages for each reception of an acoustic signal, corresponding to the coordinate position of the object viewed as the array is pivoted. Receptions from each of the transducers are presented on the same display at coordinates corresponding to the actual position of the object viewed to form a plane view of the object scanned.

  13. Synergy of seismic, acoustic, and video signals in blast analysis

    SciTech Connect

    Anderson, D.P.; Stump, B.W.; Weigand, J.

    1997-09-01

    The range of mining applications from hard rock quarrying to coal exposure to mineral recovery leads to a great variety of blasting practices. A common characteristic of many of the sources is that they are detonated at or near the earth's surface and thus can be recorded by camera or video. Although the primary interest is in the seismic waveforms that these blasts generate, the visual observations of the blasts provide important constraints that can be applied to the physical interpretation of the seismic source function. In particular, high speed images can provide information on detonation times of individual charges, the timing and amount of mass movement during the blasting process and, in some instances, evidence of wave propagation away from the source. All of these characteristics can be valuable in interpreting the equivalent seismic source function for a set of mine explosions and quantifying the relative importance of the different processes. This paper documents work done at the Los Alamos National Laboratory and Southern Methodist University to take standard Hi-8 video of mine blasts, recover digital images from them, and combine them with ground motion records for interpretation. The steps in the data acquisition, processing, display, and interpretation are outlined. The authors conclude that the combination of video with seismic and acoustic signals can be a powerful diagnostic tool for the study of blasting techniques and seismology. A low cost system for generating similar diagnostics using a consumer-grade video camera and direct-to-disk video hardware is proposed. One application is verification of the Comprehensive Test Ban Treaty.

  14. Video image cliff notes

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles

    2012-06-01

    Can a compressive sampling expert system help to build a summary of a video in a composited picture? The digital Internet age has given everyone a new degree of informational freedom, but with it comes an accumulation of content far beyond what analysts can sort through, making automatic video summarization a natural digital-library capability. While we wish to preserve the democratic spirit of the smartphone and the Internet for all, we provide an automated and unbiased tool, the compressive sampling expert system (CSpES), to summarize video content at the user's own discretion.

  15. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
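The extra step in this "plus" method, recovering magnification change as well as translation, can be illustrated with a one-dimensional toy model: if two blocks at horizontal positions x1 and x2 are observed to translate by u1 and u2, a linear motion model u(x) = t + m*x separates the bulk translation t from the magnification change m. The function and variable names below are illustrative, not from the patent.

```python
# Illustrative sketch: separate bulk translation from magnification change
# given the displacements of two blocks at known horizontal positions.

def magnification_and_translation(x1, u1, x2, u2):
    """Fit u(x) = t + m*x through two block displacements.

    Returns (m, t): m is the fractional magnification change along x,
    t is the bulk horizontal translation.
    """
    m = (u2 - u1) / (x2 - x1)
    t = u1 - m * x1
    return m, t
```

The same fit applied in the vertical direction, plus cross-terms (vertical displacement varying with horizontal position), yields the shear components the abstract mentions.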

  16. Imaging of Acoustic Waves in Sand

    SciTech Connect

    Deason, Vance Albert; Telschow, Kenneth Louis; Watson, Scott Marshall

    2003-08-01

    There is considerable interest in detecting objects such as landmines shallowly buried in loose earth or sand. Various techniques involving microwave, acoustic, thermal and magnetic sensors have been used to detect such objects. Acoustic and microwave sensors have shown promise, especially if used together. In most cases, the sensor package is scanned over an area to eventually build up an image or map of anomalies. We are proposing an alternate, acoustic method that directly provides an image of acoustic waves in sand or soil, and their interaction with buried objects. The INEEL Laser Ultrasonic Camera utilizes dynamic holography within photorefractive recording materials. This permits one to image and demodulate acoustic waves on surfaces in real time, without scanning. A video image is produced where intensity is directly and linearly proportional to surface motion. Both specular and diffusely reflecting surfaces can be accommodated, and surface motion as small as 0.1 nm can be quantitatively detected. This system was used to directly image acoustic surface waves in sand as well as in solid objects. Waves at frequencies of 16 kHz were generated using modified acoustic speakers. These waves were directed through sand toward partially buried objects. The sand container was not on a vibration isolation table, but sat on the lab floor. Interaction of wavefronts with buried objects showed reflection, diffraction and interference effects that could provide clues to the location and characteristics of buried objects. Although results are preliminary, success in this effort suggests that this method could be applied to detection of buried landmines or other near-surface items such as pipes and tanks.

  17. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in Apollo, Skylab, Apollo- Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  18. Volumetric Imaging Using Acoustical Holography

    NASA Astrophysics Data System (ADS)

    Garlick, T. F.; Garlick, G. F.

    Transmission acoustical holography holds tremendous promise for medical imaging applications. As with optical holography, an image is obtained using the interference of two coherent acoustic waves: the transmitted object wave and a reference wave. Although the resultant images are true holograms, depth can be difficult to quantify, and an entire volume in one image can often result in "too much" information. Since physicians/radiologists are often interested in viewing a single plane at a time, techniques have been developed to generate acoustic holograms of "slices" within a volume. These primarily include focused transmission holography with spatial and frequency filtering techniques. These techniques, along with an overview and current status of acoustical holography in medical imaging applications, will be presented.

  19. Radiation effects on video imagers

    SciTech Connect

    Yates, G.J.; Bujnosek, J.J.; Jaramillo, S.A.; Walton, R.B.; Martinez, T.M.; Black, J.P.

    1985-01-01

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analyzing stored photocharge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented.

  20. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.

  1. Acoustic subwavelength imaging of subsurface objects with acoustic resonant metalens

    SciTech Connect

    Cheng, Ying; Liu, XiaoJun; Zhou, Chen; Wei, Qi; Wu, DaJian

    2013-11-25

    Early research into acoustic metamaterials has shown the possibility of achieving subwavelength near-field acoustic imaging. However, a major restriction of acoustic metamaterials is that the imaging objects must be placed in close vicinity of the devices. Here, we present an approach for acoustic imaging of subsurface objects far below the diffraction limit. An acoustic metalens made of holey-structured metamaterials is used to magnify evanescent waves, which can rebuild an image at the central plane. Without changing the physical structure of the metalens, our proposed approach can image objects located at certain distances from the input surface, which provides subsurface signatures of the objects with subwavelength spatial resolution.

  2. Video imaging systems: A survey

    SciTech Connect

    Kefauver, H.L.

    1989-07-01

    Recent technological advances in the field of electronics have made video imaging a viable substitute for the traditional Polaroid (trademark) picture used to create photo ID credentials. New families of hardware and software products, when integrated into a system, provide an exciting and powerful tool which can be used simply to make badges or to enhance an access control system. This report is designed to make the reader aware of which vendors are currently in this business and to compare their capabilities.

  3. Laser Acoustic Imaging of Film Bulk Acoustic Resonator (FBAR) Lateral Mode Dispersion

    SciTech Connect

    Ken L. Telschow

    2004-07-01

    A laser acoustic imaging microscope has been developed that measures acoustic motion with high spatial resolution without scanning. Images are recorded at normal video frame rates and heterodyne principles are used to allow operation at any frequency from Hz to GHz. Fourier transformation of the acoustic amplitude and phase displacement images provides a direct quantitative determination of excited mode wavenumbers at any frequency. Results are presented at frequencies near the first longitudinal thickness mode (~ 900 MHz) demonstrating simultaneous excitation of lateral modes with nonzero wavenumbers in an electrically driven AlN thin film acoustic resonator. Images combined at several frequencies form a direct visualization of lateral mode dispersion relations for the device under test allowing mode identification and a direct measure of specific lateral mode properties. Discussion and analysis of the results are presented in comparison with plate wave modeling of these devices taking account for material anisotropy and multilayer films.

  4. Acoustic Waves in Medical Imaging and Diagnostics

    PubMed Central

    Sarvazyan, Armen P.; Urban, Matthew W.; Greenleaf, James F.

    2013-01-01

    Up until about two decades ago acoustic imaging and ultrasound imaging were synonymous. The term “ultrasonography,” or its abbreviated version “sonography” meant an imaging modality based on the use of ultrasonic compressional bulk waves. Since the 1990s numerous acoustic imaging modalities started to emerge based on the use of a different mode of acoustic wave: shear waves. It was demonstrated that imaging with these waves can provide very useful and very different information about the biological tissue being examined. We will discuss physical basis for the differences between these two basic modes of acoustic waves used in medical imaging and analyze the advantages associated with shear acoustic imaging. A comprehensive analysis of the range of acoustic wavelengths, velocities, and frequencies that have been used in different imaging applications will be presented. We will discuss the potential for future shear wave imaging applications. PMID:23643056

  5. Acoustic imaging microscope

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2006-10-17

    An imaging system includes: an object wavefront source and an optical microscope objective all positioned to direct an object wavefront onto an area of a vibrating subject surface encompassed by a field of view of the microscope objective, and to direct a modulated object wavefront reflected from the encompassed surface area through a photorefractive material; and a reference wavefront source and at least one phase modulator all positioned to direct a reference wavefront through the phase modulator and to direct a modulated reference wavefront from the phase modulator through the photorefractive material to interfere with the modulated object wavefront. The photorefractive material has a composition and a position such that interference of the modulated object wavefront and modulated reference wavefront occurs within the photorefractive material, providing a full-field, real-time image signal of the encompassed surface area.

  6. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  7. Video imaging for Nuclear Safeguards

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Brown, J.E.; Rodriguez, C.A.; Stoltz, L.A.

    1994-04-01

    The field of Nuclear Safeguards has received increasing amounts of public attention since the events of the Iraq-UN conflict over Kuwait, the dismantlement of the former Soviet Union, and more recently, the North Korean resistance to nuclear facility inspections by the International Atomic Energy Agency (IAEA). The role of nuclear safeguards in these and other events relating to the world's nuclear material inventory is to assure safekeeping of these materials and to verify the inventory and usage of these materials as reported by states that have signed the Nuclear Nonproliferation Treaty. Nuclear Safeguards are measures prescribed by domestic and international regulatory bodies and implemented by the nuclear facility or the regulatory body. These measures include destructive and nondestructive analysis of product materials and process by-products for materials control and accountancy purposes, physical protection for domestic safeguards, and containment and surveillance for international safeguards. In this presentation we will introduce digital video image processing and analysis systems that have been developed at Los Alamos National Laboratory for application to the nuclear safeguards problem. Of specific interest to this audience is the detector-activated predictive wavelet transform image coding used to reduce drastically the data storage requirements for these unattended, remote safeguards systems.

  8. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    SciTech Connect

    Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

    2006-09-05

    Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed, and successful applications from previous surveys of fish under the ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.
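One of the sizing methods mentioned above, paired parallel lasers at a known separation, reduces to a simple scale conversion: the two laser dots act as an in-image scale bar. The function below is an illustrative sketch with invented names and numbers, not the authors' procedure.

```python
# Illustrative sketch: estimate fish length from pixel measurements using
# two parallel laser dots of known physical separation as a scale bar.

def fish_length_cm(fish_px, laser_px, laser_sep_cm):
    """Estimate fish length in cm.

    fish_px      : fish length measured in image pixels
    laser_px     : pixel distance between the two laser dots
    laser_sep_cm : known physical separation of the lasers, in cm
    """
    cm_per_px = laser_sep_cm / laser_px
    return fish_px * cm_per_px
```

The conversion is valid only when the fish is at roughly the same range as the surface the laser dots land on, since the pixel scale changes with distance from the camera.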

  9. Reflective echo tomographic imaging using acoustic beams

    DOEpatents

    Kisner, Roger; Santos-Villalobos, Hector J

    2014-11-25

    An inspection system includes a plurality of acoustic beamformers, where each of the plurality of acoustic beamformers including a plurality of acoustic transmitter elements. The system also includes at least one controller configured for causing each of the plurality of acoustic beamformers to generate an acoustic beam directed to a point in a volume of interest during a first time. Based on a reflected wave intensity detected at a plurality of acoustic receiver elements, an image of the volume of interest can be generated.
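The focusing geometry implied above, steering each beamformer's output to a chosen point in the volume, comes down to per-element transmit delays that make all emissions arrive at the focal point simultaneously. The element layout, sound speed, and names below are assumptions for illustration, not details from the patent.

```python
# Illustrative sketch: per-element transmit delays for focusing an acoustic
# beamformer at a point, assuming a homogeneous medium.
import math

def focal_delays(elements, focus, c=1500.0):
    """Transmit delay (seconds) for each element so emissions converge
    at `focus`. `elements` and `focus` are (x, y, z) in metres; c is an
    assumed speed of sound (default: water).
    """
    dists = [math.dist(e, focus) for e in elements]
    d_max = max(dists)
    # The farthest element fires first (zero delay); nearer ones wait.
    return [(d_max - d) / c for d in dists]
```

Sweeping the focal point through the volume of interest while recording reflected intensity at the receiver elements would build up the image described in the abstract.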

  10. Enhanced Video Surveillance (EVS) with speckle imaging

    SciTech Connect

    Carrano, C J

    2004-01-13

    Enhanced Video Surveillance (EVS) with Speckle Imaging is a high-resolution imaging system that substantially improves resolution and contrast in images acquired over long distances. This technology will increase image resolution up to an order of magnitude or greater for video surveillance systems. The system's hardware components are all commercially available and consist of a telescope or large-aperture lens assembly, a high-performance digital camera, and a personal computer. The system's software, developed at LLNL, extends standard speckle-image-processing methods (used in the astronomical community) to solve the atmospheric blurring problem associated with imaging over medium to long distances (hundreds of meters to tens of kilometers) through horizontal or slant-path turbulence. This novel imaging technology will not only enhance national security but also will benefit law enforcement, security contractors, and any private or public entity that uses video surveillance to protect their assets.

  11. Marking spatial parts within stereoscopic video images

    NASA Astrophysics Data System (ADS)

    Belz, Constance; Boehm, Klaus; Duong, Thanh; Kuehn, Volker; Weber, Martin

    1996-04-01

    The technology of stereoscopic imaging enables reliable online telediagnosis. Applications of telediagnosis include medicine and, more generally, telerobotics. To allow the participants in a telediagnosis to mark spatial parts within the stereoscopic video image, graphic tools and automated support have to be provided. The process of marking spatial parts and objects inside a stereoscopic video image is a non-trivial interaction technique. The markings themselves have to be 3D elements instead of 2D markings, which would lead to an alienating effect 'in' the stereoscopic video image. Furthermore, one problem to be tackled here is that the content of the stereoscopic video image is unknown. This is in contrast to 3D Virtual Reality scenes, which enable easy 3D interaction because all the objects and their positions within the 3D scene are known. The goals of our research comprised the development of new interaction paradigms and marking techniques in stereoscopic video images, as well as an investigation of input devices appropriate for this interaction task. We have implemented these interaction techniques in a test environment and, to that end, integrated computer graphics into stereoscopic video images. In order to evaluate the new interaction techniques, a user test was carried out. The results of our research will be presented here.

  12. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
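The frame-stacking idea (register the frames, then average them to suppress snow) can be sketched with phase correlation for the translation component; the actual VISAR software also corrects rotation and zoom, which this sketch omits, and its internals are not public here, so everything below is an illustrative stand-in.

```python
import numpy as np

def register_translation(ref, img):
    """Integer (dy, dx) shift of img relative to ref via phase
    correlation: the normalized cross-power spectrum of two shifted
    copies inverse-transforms to a peak at the shift."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    h, w = ref.shape
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def stabilize_and_stack(frames):
    """Register every frame onto the first one, then average the
    aligned frames to suppress video noise ("snow")."""
    ref = frames[0].astype(float)
    acc = np.zeros_like(ref)
    for f in frames:
        dy, dx = register_translation(ref, f)
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)

# Demo: circularly shifted copies of one frame stack back perfectly.
rng = np.random.default_rng(1)
ref = rng.random((32, 32))
shifts = [(0, 0), (3, -2), (-4, 5)]
frames = [np.roll(ref, s, axis=(0, 1)) for s in shifts]
stacked = stabilize_and_stack(frames)
```

On real video the shifts are fractional and the content changes between frames, so subpixel peak fitting and robust reference selection are needed on top of this.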

  13. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this QuickTime movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  15. First images of thunder: Acoustic imaging of triggered lightning

    NASA Astrophysics Data System (ADS)

    Dayeh, M. A.; Evans, N. D.; Fuselier, S. A.; Trevino, J.; Ramaekers, J.; Dwyer, J. R.; Lucia, R.; Rassoul, H. K.; Kotovsky, D. A.; Jordan, D. M.; Uman, M. A.

    2015-07-01

    An acoustic camera comprising a linear microphone array is used to image the thunder signature of triggered lightning. Measurements were taken at the International Center for Lightning Research and Testing in Camp Blanding, FL, during the summer of 2014. The array was positioned in an end-fire orientation thus enabling the peak acoustic reception pattern to be steered vertically with a frequency-dependent spatial resolution. On 14 July 2014, a lightning event with nine return strokes was successfully triggered. We present the first acoustic images of individual return strokes at high frequencies (>1 kHz) and compare the acoustically inferred profile with optical images. We find (i) a strong correlation between the return stroke peak current and the radiated acoustic pressure and (ii) an acoustic signature from an M component current pulse with an unusual fast rise time. These results show that acoustic imaging enables clear identification and quantification of thunder sources as a function of lightning channel altitude.
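Steering the peak reception pattern of a microphone array, as the authors do acoustically, is at root delay-and-sum beamforming. A minimal sketch with invented geometry (8 microphones at 5 cm spacing) and a simulated 1.5 kHz plane wave; the paper's hardware, frequencies, and end-fire geometry differ from this broadside toy.

```python
import numpy as np

def delay_and_sum(signals, mic_x, angle, fs, c=343.0):
    """Steer a linear array toward `angle` (radians from broadside) by
    advancing each channel by its geometric delay, then summing.
    Fractional delays are applied via the FFT shift theorem."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, x in zip(signals, mic_x):
        tau = x * np.sin(angle) / c      # arrival-time offset at this mic
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau), n)
    return out / len(mic_x)

# Simulate a 1.5 kHz plane wave arriving from 20 degrees, then scan.
fs, c = 48000, 343.0
mic_x = np.arange(8) * 0.05
t = np.arange(1024) / fs
theta0 = np.deg2rad(20.0)
signals = np.array([np.sin(2 * np.pi * 1500.0 * (t - x * np.sin(theta0) / c))
                    for x in mic_x])
scan = np.deg2rad(np.linspace(-60, 60, 121))
power = [np.sum(delay_and_sum(signals, mic_x, a, fs) ** 2) for a in scan]
best_deg = float(np.rad2deg(scan[int(np.argmax(power))]))
```

The steered output power peaks at the true arrival angle; sweeping the steering angle over height, as in the paper, maps acoustic sources versus altitude.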

  16. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional elastodynamic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity, and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  17. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion, and processing of aerial imagery in order to leverage full-motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, UAV operators first plan the flight using flight-planning software. During the flight the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images tagged along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  18. Imaging marine geophysical environments with vector acoustics.

    PubMed

    Lindwall, Dennis

    2006-09-01

    Using vector acoustic sensors for marine geoacoustic surveys instead of the usual scalar hydrophones enables one to acquire three-dimensional (3D) survey data with instrumentation and logistics similar to current 2D surveys. Vector acoustic sensors measure the sound wave direction directly without the cumbersome arrays that hydrophones require. This concept was tested by a scaled experiment in an acoustic water tank that had a well-controlled environment with a few targets. Using vector acoustic data from a single line of sources, the three-dimensional tank environment was imaged by directly locating the source and all reflectors. PMID:17004497
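Unlike a scalar hydrophone, a vector sensor also records particle velocity, so a single sensor yields wave direction directly: for a plane wave, the time-averaged acoustic intensity (the product of pressure and velocity) points along the propagation direction. A sketch with an invented single-plane-wave signal; real survey data would be noisier and complicated by multipath.

```python
import numpy as np

def bearing_from_vector_sensor(p, vx, vy):
    """Azimuth (radians) of the propagation direction from the
    time-averaged acoustic intensity components <p*vx>, <p*vy>."""
    ix = np.mean(p * vx)
    iy = np.mean(p * vy)
    return np.arctan2(iy, ix)

# Synthetic plane wave: particle velocity is parallel to the wave
# vector, so vx and vy are scaled copies of the pressure trace.
fs = 10000
t = np.arange(4096) / fs
az = np.deg2rad(35.0)                    # true propagation azimuth
p = np.sin(2 * np.pi * 500.0 * t)
vx, vy = p * np.cos(az), p * np.sin(az)
est_deg = float(np.rad2deg(bearing_from_vector_sensor(p, vx, vy)))
```

This per-sensor directionality is what lets a single line of sources image a 3D volume without the long hydrophone arrays mentioned in the abstract.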

  19. Video surveillance with speckle imaging

    DOEpatents

    Carrano, Carmen J.; Brase, James M.

    2007-07-17

    A surveillance system looks through the atmosphere along a horizontal or slant path. Turbulence along the path causes blurring. The blurring is corrected by speckle processing short exposure images recorded with a camera. The exposures are short enough to effectively freeze the atmospheric turbulence. Speckle processing is used to recover a better quality image of the scene.

  20. Latino Film and Video Images.

    ERIC Educational Resources Information Center

    Vazquez, Blanca, Ed.

    1990-01-01

    This theme issue of the "Centro Bulletin" examines media stereotypes of Latinos and presents examples of alternatives. "From Assimilation to Annihilation: Puerto Rican Images in U.S. Films" (R. Perez) traces the representation of Puerto Ricans from the early days of television to the films of the 1970s. "The Latino 'Boom' in Hollywood" (C. Fusco)…

  1. Video guidance, landing, and imaging systems

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Rice, R. B.; Moog, R. D.

    1975-01-01

    The adaptive potential of video guidance technology for earth-orbital and interplanetary missions was explored. The application of video acquisition, pointing, tracking, and navigation technology was considered for three primary missions: planetary landing, earth resources satellite, and spacecraft rendezvous and docking. It was found that an imaging system can be mechanized to provide a spacecraft or satellite with a considerable amount of adaptability with respect to its environment. It also provides a level of autonomy essential to many future missions and enhances their data-gathering ability. The feasibility of an autonomous video guidance system capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was successfully demonstrated in the laboratory. The techniques developed for acquisition, pointing, and tracking show promise for recognizing and tracking coastlines, rivers, and other constituents of interest. Routines were written and checked for rendezvous, docking, and station-keeping functions.

  2. Pulsed-Source Interferometry in Acoustic Imaging

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill; Gutierrez, Roman; Tang, Tony K.

    2003-01-01

    A combination of pulsed-source interferometry and acoustic diffraction has been proposed for use in imaging subsurface microscopic defects and other features in such diverse objects as integrated-circuit chips, specimens of materials, and mechanical parts. A specimen to be inspected by this technique would be mounted with its bottom side in contact with an acoustic transducer driven by a continuous-wave acoustic signal at a suitable frequency, which could be as low as a megahertz or as high as a few hundred gigahertz. The top side of the specimen would be coupled to an object that would have a flat (when not vibrating) top surface and that would serve as the acoustical analog of an optical medium (in effect, an acoustical "optic").

  3. TORNADO: omnistereo video imaging with rotating optics.

    PubMed

    Tanaka, Kenji; Tachi, Susumu

    2005-01-01

    One of the key techniques for vision-based communication is omnidirectional stereo (omnistereo) imaging, in which stereoscopic images for an arbitrary horizontal direction are captured and presented according to the viewing direction of the observer. Although omnistereo models have been surveyed in several studies, few omnistereo sensors have actually been implemented. In this paper, a practical method for capturing omnistereo video sequences using rotating optics is proposed and evaluated. The rotating optics system consists of prism sheets, circular or linear polarizing films, and a hyperboloidal mirror. This system has two different modes of operation with regard to the separation of images for the left and right eyes. In the high-speed shutter mode, images are separated using postimage processing, while, in the low-speed shutter mode, the image separation is completed by optics. By capturing actual images, we confirmed the effectiveness of the methods. PMID:16270855

  4. An automated, video tape-based image archiving system.

    PubMed

    Vesely, I; Eickmeier, B; Campbell, G

    1991-01-01

    We have developed an image storage and retrieval system that makes use of a Super-VHS video tape recorder and a personal computer fitted with an interface board and a video frame grabber. Under PC control, video images are acquired into the frame grabber, a numeric bar code is graphically superimposed for identification purposes, and the composite images are recorded on video tape. During retrieval, the bar code is decoded in real time and the desired images are automatically retrieved. This video tape-based system enables the images to be previewed and retrieved much faster than if they were stored in digital format. PMID:1769220

  5. Measuring rainfall from video images

    NASA Astrophysics Data System (ADS)

    Allamano, Paola; Croci, Alberto; Laio, Francesco

    2015-04-01

    We propose a novel technique based on the quantitative detection of rain intensity from images, i.e., from pictures taken in rainy conditions. The method is fully analytical and based on the fundamentals of camera optics. A rigorous statistical framing of the technique allows one to obtain the rain rate estimates in terms of expected values and associated uncertainty. Preliminary applications of the method provide promising results, with errors of the order of ±20%. A precise quantification of the method's accuracy will require a more systematic and long-term comparison with benchmark measurements, and there is likely ample room for improvement. The significant step forward with respect to standard rain gauges resides in the possibility of retrieving measurements at very high temporal resolution (10 measurements per minute) at a very low cost. Prospective applications include the possibility of dramatically increasing the spatial density of rain observations by exporting the technique to crowdsourced pictures of rain acquired with cameras and smartphones.

  6. Retrosigmoid craniotomy for resection of acoustic neuroma with hearing preservation: a video publication.

    PubMed

    Forbes, Jonathan A; Carlson, Matthew L; Godil, Saniya S; Bennett, Marc L; Wanna, George B; Weaver, Kyle D

    2014-01-01

    In this publication, video format is utilized to review the operative technique of retrosigmoid craniotomy for resection of acoustic neuroma with attempted hearing preservation. Steps of the operative procedure are reviewed and salient principles and technical nuances useful in minimizing complications and maximizing efficacy are discussed. The video can be found here: http://youtu.be/PBE5rQ7B0Ls . PMID:24380523

  7. Acoustic imaging systems (for robotic object acquisition)

    NASA Astrophysics Data System (ADS)

    Richardson, J. M.; Martin, J. F.; Marsh, K. A.; Schoenwald, J. S.

    1985-03-01

    The long-term objective of the effort is to establish successful approaches for 3D acoustic imaging of dense solid objects in air to provide the information required for acquisition and manipulation of these objects by a robotic system. The objective of this first year's work was to achieve and demonstrate the determination of the external geometry (shape) of such objects with a fixed sparse array of sensors, without the aid of geometrical models or extensive training procedures. Conventional approaches for acoustic imaging fall into two basic categories. The first category is used exclusively for dense solid objects. It involves echo-ranging from a large number of sensor positions, achieved either through the use of a larger array of transducers or through extensive physical scanning of a small array. This approach determines the distance to specular reflection points from each sensor position; with suitable processing an image can be inferred. The second category uses the full acoustic waveforms to provide an image, but is strictly applicable only to weak inhomogeneities. The most familiar example is medical imaging of the soft tissue portions of the body where the range of acoustic impedance is relatively small.
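The echo-ranging category described above reduces each sensor position to round-trip travel times of specular returns, with distance following from d = c·t/2. A minimal sketch that locates one echo by cross-correlating the received waveform with the transmitted pulse; the pulse shape, sample rate, and target distance are all invented for illustration.

```python
import numpy as np

def echo_range(tx, rx, fs, c=343.0):
    """Distance to a specular reflector from round-trip time of flight,
    with the echo located by cross-correlating the received waveform
    against the transmitted pulse (speed of sound in air assumed)."""
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(corr) - (len(tx) - 1)   # delay in samples
    return c * lag / fs / 2.0

# Hanning-windowed 40 kHz ping; echo arrives 1750 samples later
# (about 3 m of range at 343 m/s).
fs = 100_000
t = np.arange(200) / fs
tx = np.sin(2 * np.pi * 40_000 * t) * np.hanning(len(t))
rx = np.zeros(4000)
rx[1750:1750 + len(tx)] = 0.3 * tx
d = echo_range(tx, rx, fs)
```

Repeating this from many sensor positions, then intersecting the resulting range spheres, is what lets the first category infer object shape.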

  8. Feature-preserving image/video compression

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2005-10-01

    Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high-quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. The fields of medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. It is estimated that medium-size hospitals in the US generate terabytes of MRI and X-ray images, which are stored in very large databases that are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new efficient biometric-based person verification/authentication systems. In future, such systems can provide an additional layer of security for online transactions or for real-time surveillance.

  9. Magnetic resonance acoustic radiation force imaging

    PubMed Central

    McDannold, Nathan; Maier, Stephan E.

    2008-01-01

    Acoustic radiation force impulse imaging is an elastography method developed for ultrasound imaging that maps displacements produced by focused ultrasound pulses systematically applied to different locations. The resulting images are “stiffness weighted” and yield information about local mechanical tissue properties. Here, the feasibility of magnetic resonance acoustic radiation force imaging (MR-ARFI) was tested. Quasistatic MR elastography was used to measure focal displacements using a one-dimensional MRI pulse sequence. A 1.63 or 1.5 MHz transducer supplied ultrasound pulses which were triggered by the magnetic resonance imaging hardware to occur before a displacement-encoding gradient. Displacements in and around the focus were mapped in a tissue-mimicking phantom and in an ex vivo bovine kidney. They were readily observed and increased linearly with acoustic power in the phantom (R2=0.99). At higher acoustic power levels, the displacement substantially increased and was associated with irreversible changes in the phantom. At these levels, transverse displacement components could also be detected. Displacements in the kidney were also observed and increased after thermal ablation. While the measurements need validation, the authors have demonstrated the feasibility of detecting small displacements induced by low-power ultrasound pulses using an efficient magnetic resonance imaging pulse sequence that is compatible with tracking of a dynamically steered ultrasound focal spot, and that the displacement increases with acoustic power. MR-ARFI has potential for elastography or to guide ultrasound therapies that use low-power pulsed ultrasound exposures, such as drug delivery. PMID:18777934

  10. Underwater imaging with a moving acoustic lens.

    PubMed

    Kamgar-Parsi, B; Rosenblum, L J; Belcher, E O

    1998-01-01

    The acoustic lens is a high-resolution, forward-looking sonar for three dimensional (3-D) underwater imaging. We discuss processing the lens data for recreating and visualizing the scene. Acoustical imaging, compared to optical imaging, is sparse and low resolution. To achieve higher resolution, we obtain a denser sample by mounting the lens on a moving platform and passing over the scene. This introduces the problem of data fusion from multiple overlapping views for scene formation, which we discuss. We also discuss the improvements in object reconstruction by combining data from several passes over an object. We present algorithms for pass registration and show that this process can be done with enough accuracy to improve the image and provide greater detail about the object. The results of in-water experiments show the degree to which size and shape can be obtained under (nearly) ideal conditions. PMID:18267382

  11. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1977-01-01

    The use of video-recorded fluoroscopic images as input data for digital reconstruction of objects from their projections is examined. The fluoroscopic and the scanning apparatus used for the experiments are of a commercial type already in existence in most hospitals. It is shown that for beams with divergence up to about 15 deg, one can use a convolution algorithm designed for the parallel radiation case with negligible degradation both quantitatively and from a visual quality standpoint. This convolution algorithm is computationally more efficient than either the algebraic techniques or the convolution algorithms for radially diverging data. Results from studies on Lucite phantoms and a freshly sacrificed rat are included.
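The parallel-beam convolution algorithm the authors favor filters each projection with a ramp kernel and then backprojects the filtered projections across the image grid. Below is a compact frequency-domain version of that textbook scheme, reconstructing a synthetic disk phantom; it is not the authors' implementation, and the phantom, grid sizes, and nearest-neighbour interpolation are arbitrary choices.

```python
import numpy as np

def fbp(sinogram, angles):
    """Minimal parallel-beam filtered backprojection: ramp-filter each
    projection in the frequency domain, backproject with
    nearest-neighbour interpolation, and scale by pi / (number of views)."""
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                 # |f| filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    c = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - c
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, angles):
        s = np.round(x * np.cos(th) + y * np.sin(th) + c).astype(int)
        inside = (s >= 0) & (s < n_det)
        recon += np.where(inside, proj[np.clip(s, 0, n_det - 1)], 0.0)
    return recon * np.pi / n_ang

# Analytic sinogram of a centred uniform disk: every projection is
# 2*sqrt(r^2 - s^2), independent of angle.
n_det, n_ang, r = 64, 180, 8.0
angles = np.linspace(0.0, np.pi, n_ang, endpoint=False)
s = np.arange(n_det) - (n_det - 1) / 2.0
proj = 2.0 * np.sqrt(np.clip(r * r - s * s, 0.0, None))
recon = fbp(np.tile(proj, (n_ang, 1)), angles)
```

The paper's point is that this same parallel-beam filter tolerates up to about 15 degrees of fan divergence with negligible degradation, which is why it could be applied to the fluoroscopic data directly.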

  12. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
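The alternating-line image and the slew-rate failure it probes are straightforward to simulate: synthesize one scan line of the pattern and pass it through a first-order low-pass filter, a crude stand-in for a digitizer that cannot follow one-pixel transitions. The pattern width and filter constants below are invented, not taken from the paper.

```python
import numpy as np

def line_pair_pattern(width=64, strip=10):
    """One scan line of the described test image: a black strip and a
    white strip (ten pixels each, for equilibration), followed by
    alternating one-pixel black and white lines."""
    row = np.zeros(width)
    row[strip:2 * strip] = 1.0
    row[2 * strip::2] = 1.0
    return row

def limited_slew(row, alpha):
    """First-order low-pass along the scan line: a crude model of a
    digitizer whose slew rate cannot follow 1-px transitions."""
    out = np.empty_like(row)
    acc = row[0]
    for i, v in enumerate(row):
        acc += alpha * (v - acc)
        out[i] = acc
    return out

row = line_pair_pattern()
fast = limited_slew(row, 0.9)    # tracks the alternating lines
slow = limited_slew(row, 0.2)    # blurs them toward mid-gray
contrast = lambda r: r[40:60].max() - r[40:60].min()
```

A system whose response resembles `slow` collapses the line pairs toward an average gray, exactly the failure mode the test image is designed to expose.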

  13. Full-Field Imaging of GHz Film Bulk Acoustic Resonator Motion

    SciTech Connect

    Telschow, Kenneth Louis; Deason, Vance Albert; Cottle, David Lynn; Larson III, J. D.

    2003-10-01

    A full-field-view laser ultrasonic imaging method has been developed that measures acoustic motion at a surface without scanning. Images are recorded at normal video frame rates by using dynamic holography with photorefractive interferometric detection. By extending the approach to ultrahigh frequencies, an acoustic microscope has been developed that is capable of operation at gigahertz frequencies and micron length scales. Both acoustic amplitude and phase are recorded, allowing full calibration and determination of phases to within a single arbitrary constant. Results are presented of measurements at frequencies of 800-900 MHz, illustrating a multitude of normal-mode behavior in electrically driven thin-film acoustic resonators. Coupled with microwave electrical impedance measurements, this imaging mode provides an exceptionally fast method for evaluating the electric-to-acoustic coupling of these devices and their performance. Images of 256 × 240 pixels are recorded at 18 fps, synchronized to obtain both in-phase and quadrature detection of the acoustic motion. Simple averaging provides sensitivity to the subnanometer level at each pixel, calibrated over the image using interferometry. Identification of specific acoustic modes and their relationship to electrical impedance characteristics shows the advantages and overall high speed of the technique.
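The in-phase/quadrature detection step amounts to per-pixel demodulation: two frame stacks sampled a quarter cycle apart in the drive signal give acoustic amplitude and phase directly, and averaging the stacks supplies the noise suppression the abstract mentions. A sketch on a synthetic resonator mode; the Gaussian mode shape, noise level, and frame count are invented, and the real instrument forms I and Q holographically rather than numerically.

```python
import numpy as np

def iq_demodulate(frames_i, frames_q):
    """Per-pixel acoustic amplitude and phase from frame stacks
    acquired in phase (I) and in quadrature (Q) with the drive.
    Averaging the stacks suppresses noise; the recovered phase is
    known only up to a single global constant."""
    I = np.mean(frames_i, axis=0)
    Q = np.mean(frames_q, axis=0)
    return np.hypot(I, Q), np.arctan2(Q, I)

# Synthetic resonator: a Gaussian mode with a phase gradient, plus noise.
rng = np.random.default_rng(2)
y, x = np.mgrid[0:32, 0:32]
amp_true = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 40.0)
phi_true = 0.3 * (x - 16) / 16.0
frames_i = amp_true * np.cos(phi_true) + rng.normal(0, 0.05, (64, 32, 32))
frames_q = amp_true * np.sin(phi_true) + rng.normal(0, 0.05, (64, 32, 32))
amp, phi = iq_demodulate(frames_i, frames_q)
```

Mapping `amp` and `phi` over the device surface is what reveals the normal-mode patterns discussed in the abstract.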

  14. Methane distribution in porewaters of the Eastern Siberian Shelf Sea - chemical, acoustic, and video observations

    NASA Astrophysics Data System (ADS)

    Bruchert, V.; Sawicka, J. E.; Samarkin, V.; Noormets, R.; Stockmann, G. J.; Bröder, L.; Rattray, J.; Steinbach, J.

    2015-12-01

    We present porewater methane and sulfate concentrations, and the isotope composition of carbon dioxide, from 18 sites in areas of reported high methane water-column concentrations on the Siberian shelf. Echosounder imaging and video imagery of the benthic environment were used to detect potential bubble emission from the sea bottom and to locate high methane emission areas. In areas where bubble flares were identified by acoustic echosounder imaging, recovered sediment cores provided evidence for slightly elevated porewater methane concentrations 10 cm below the sediment surface relative to sites without flares. Throughout the recovered sediment depth intervals, porewater concentrations of methane were more than a factor of 300 below the gas saturation limit at sea surface pressure. In addition, surface sediment video recordings provided no evidence for bubble emissions in the investigated methane hotspot areas, although at nearby sites bubbles were detected higher in the water column. The conflicting observations of acoustic indications of rising bubbles and the absence of bubbles and methane oversaturation in any of the sediment cores during the whole SWERUS cruise suggest that advective methane seepage is a spatially limited phenomenon that is difficult to capture with routine ship-based core sampling methods in this field area. Recovery of a sediment core from one high-activity site indicated steep gradients in dissolved sulfate and methane in the first 8 cm of sediment, pointing to the presence of anaerobic methane oxidation at a site with a high upward flux of methane. Based on the decrease of methane towards the sediment surface and the rates of sulfate reduction-coupled methane oxidation, most of the upward-transported methane was oxidized within the sediment. This conclusion is further supported by the stable isotope composition of dissolved carbon dioxide in porewaters and the precipitation of calcium carbonate minerals found only in sediment at this site

  15. Real-time adaptive video image enhancement

    NASA Astrophysics Data System (ADS)

    Garside, John R.; Harrison, Chris G.

    1999-07-01

    As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal for the degradation caused by atmospheric haze in real-time. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant-range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing data from these two streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with a significantly accelerated throughput.
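The scattering model in question relates observed intensity I to scene radiance J, slant range d, extinction coefficient beta, and overall airlight A as I = J·e^(-beta·d) + A·(1 - e^(-beta·d)); with d known from the aircraft's pose, "applying the model in reverse" means solving for J. A round-trip sketch on synthetic data: the beta, A, and range values are invented, whereas the real system estimates them by fusing the video and inertial-navigation streams.

```python
import numpy as np

def dehaze(image, ranges, beta, airlight):
    """Invert the single-scattering haze model
        I = J * exp(-beta * d) + A * (1 - exp(-beta * d))
    given per-pixel slant range d, extinction beta, and airlight A."""
    t = np.exp(-beta * ranges)              # per-pixel transmission
    return (image - airlight * (1.0 - t)) / np.maximum(t, 1e-3)

# Round trip on synthetic data: haze a scene, then restore it.
rng = np.random.default_rng(3)
scene = rng.random((16, 16))
ranges = np.linspace(500.0, 3000.0, 256).reshape(16, 16)
beta, A = 4e-4, 0.8
hazy = scene * np.exp(-beta * ranges) + A * (1 - np.exp(-beta * ranges))
restored = dehaze(hazy, ranges, beta, A)
```

The clamp on the transmission term matters in practice: at long range t becomes tiny, so the inversion amplifies sensor noise, which is one reason an adaptive, temporally filtered parameter fit is needed in the real-time system.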

  16. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
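The trigger-and-archive behaviour (monitor the stream, fire on a change, keep pre- and post-trigger frames, discard the rest) can be sketched in software with a ring buffer and a frame-difference threshold. The described hardware uses a highly parallel VLSI state machine with fuzzy logic devices, so this is only a functional analogue with invented parameters.

```python
import numpy as np
from collections import deque

def capture_event(stream, threshold, pre=5, post=5):
    """Watch a frame stream; when the mean absolute frame-to-frame
    change exceeds `threshold`, archive `pre` frames before the event
    and `post` frames after it, discarding everything else (a software
    sketch of pre-/post-trigger storage)."""
    ring = deque(maxlen=pre)                # rolling pre-trigger buffer
    prev = None
    it = iter(stream)
    for frame in it:
        if prev is not None and np.mean(np.abs(frame - prev)) > threshold:
            kept = list(ring) + [frame]
            for _ in range(post):
                try:
                    kept.append(next(it))   # post-trigger frames
                except StopIteration:
                    break
            return kept
        ring.append(frame)
        prev = frame
    return []

# 30 quiet frames, then an object appears at frame 30.
frames = [np.zeros((8, 8)) for _ in range(30)] + [np.ones((8, 8))] * 10
kept = capture_event(frames, threshold=0.5)
```

Only the eleven frames bracketing the event survive, which is the point: archiving just the significant images keeps the terabyte-scale storage problem tractable.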

  17. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  18. Image and video fingerprinting: forensic applications

    NASA Astrophysics Data System (ADS)

    Lefebvre, Frédéric; Chupeau, Bertrand; Massoudi, Ayoub; Diehl, Eric

    2009-02-01

    Fighting movie piracy often requires automatic content identification. The most common technique to achieve this uses watermarking, but not all copyrighted content is watermarked. Video fingerprinting is an efficient alternative solution to identify content, to manage multimedia files in UGC sites or P2P networks and to register pirated copies with master content. When registering by matching copy fingerprints with master ones, a model of distortion can be estimated. In case of in-theater piracy, the model of geometric distortion allows the estimation of the capture location. A step even further is to determine, from passive image analysis only, whether different pirated versions were captured with the same camcorder. In this paper we present three such fingerprinting-based forensic applications: UGC filtering, estimation of capture location and source identification.

  19. Quantitative Ultrasound Imaging Using Acoustic Backscatter Coefficients.

    NASA Astrophysics Data System (ADS)

    Boote, Evan Jeffery

    Current clinical ultrasound scanners render images whose brightness levels are related to the degree of backscattered energy from the tissue being imaged. These images offer the interpreter a qualitative impression of the scattering characteristics of the tissue being examined, but due to the complex factors which affect the amplitude and character of the echoed acoustic energy, it is difficult to make quantitative assessments of the scattering nature of the tissue, and thus difficult to make a precise diagnosis when subtle disease effects are present. In this dissertation, a method of data reduction for determining acoustic backscatter coefficients is adapted for use in forming quantitative ultrasound images of this parameter. In these images, the brightness level of an individual pixel corresponds to the backscatter coefficient determined for the spatial position represented by that pixel. The data reduction method utilized rigorously accounts for extraneous factors which affect the scattered echo waveform and has been demonstrated to accurately determine backscatter coefficients under a wide range of conditions. The algorithms and procedures used to form backscatter coefficient images are described. These were tested using tissue-mimicking phantoms which have regions of varying scattering levels. Another phantom has a fat-mimicking layer for testing these techniques under more clinically relevant conditions. Backscatter coefficient images were also formed of in vitro human liver tissue. A clinical ultrasound scanner has been adapted for use as a backscatter coefficient imaging platform. The digital interface between the scanner and the computer used for data reduction is described. Initial tests using phantoms are presented. A study of backscatter coefficient imaging of in vivo liver was performed using several normal, healthy human subjects.
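The dissertation's own data-reduction method is not given in the abstract, but the general idea behind quantitative backscatter imaging can be sketched with the generic reference-phantom approach: the sample's echo power spectrum is normalized by that of a phantom with known backscatter coefficient, with a two-way attenuation compensation. All parameter values below are illustrative assumptions.

```python
import numpy as np

def backscatter_coefficient(s_sample, s_ref, bsc_ref,
                            a_sample, a_ref, depth_cm, f_mhz):
    """Reference-phantom style estimate of the backscatter coefficient:
    ratio of echo power spectra times a two-way attenuation compensation.
    Attenuation slopes a_* are in dB/cm/MHz; 8.686 dB per neper."""
    atten = np.exp(4.0 * depth_cm * (a_sample - a_ref) * f_mhz / 8.686)
    return bsc_ref * (s_sample / s_ref) * atten

# A sample echoing twice the power of a reference phantom with known
# BSC 1e-3 1/(sr cm) and equal attenuation: the estimate doubles.
bsc = backscatter_coefficient(2.0, 1.0, 1e-3, 0.5, 0.5, 3.0, 5.0)
```

Mapping such an estimate to a gray level at each pixel position is what turns a qualitative B-mode image into the quantitative image described above.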

  20. Acoustic Imaging of Snowpack Physical Properties

    NASA Astrophysics Data System (ADS)

    Kinar, N. J.; Pomeroy, J. W.

    2011-12-01

    Measurements of snowpack depth, density, structure and temperature have often been conducted by the use of snowpits and invasive measurement devices. Previous research has shown that acoustic waves passing through snow can be used to measure these properties. An experimental observation device (SAS2, System for the Acoustic Sounding of Snow) was used to autonomously send audible sound waves into the top of the snowpack and to receive and process the waves reflected from the interior and bottom of the snowpack. A loudspeaker and a microphone array, separated by an offset distance, were suspended in the air above the surface of the snowpack. Sound waves produced by the loudspeaker as frequency-swept sequences and maximum-length sequences were used as source signals. Up to 24 microphones measured the audible signal returned from the snowpack. The signal-to-noise ratio was compared between sequence types in the presence of environmental noise contributed by wind and reflections from vegetation. Beamforming algorithms were used to reject spurious reflections and to compensate for movement of the sensor assembly during data collection. A custom-designed circuit with digital signal processing hardware implemented an inversion algorithm to relate the reflected sound wave data to snowpack physical properties and to create a two-dimensional image of snowpack stratigraphy. The low-power circuit was battery-powered, and WiFi and Bluetooth interfaces enabled the display of processed data on a mobile device. Acoustic observations were logged to an SD card after each measurement. The SAS2 system was deployed at remote field locations in the Rocky Mountains of Alberta, Canada. Acoustic snow property data were compared with data collected from gravimetric sampling, thermocouple arrays, radiometers and snowpit observations of density, stratigraphy and crystal structure. Aspects for further research and limitations of the acoustic sensing system are also discussed.
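The frequency-swept source and reflection processing can be illustrated with a matched-filter (pulse-compression) sketch: cross-correlating the microphone record with the known sweep collapses each reflection into a sharp peak whose lag gives the travel time. The sample rate, sweep band, delays and sound speed below are assumed values, not SAS2 parameters.

```python
import numpy as np

fs = 48_000                               # audio sample rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)            # 50 ms sweep
# linear frequency sweep 1-10 kHz used as the source signal
sweep = np.sin(2 * np.pi * (1000 * t + (9000 / (2 * 0.05)) * t**2))

# synthetic microphone record: snow-surface and ground reflections
d_surface, d_ground = 300, 1500           # delays in samples
rec = np.zeros(8192)
rec[d_surface:d_surface + sweep.size] += 1.0 * sweep
rec[d_ground:d_ground + sweep.size] += 0.4 * sweep

# pulse compression: cross-correlate the record with the known sweep
xc = np.correlate(rec, sweep, mode="valid")
peak = int(np.argmax(np.abs(xc)))         # strongest reflection delay
c_snow = 250.0                            # effective sound speed, m/s (assumed)
depth = 0.5 * (d_ground - d_surface) / fs * c_snow  # two-way travel time
```

The factor 0.5 converts the two-way surface-to-ground travel time into a one-way depth; an inversion algorithm like the one described above would additionally relate layer-by-layer reflections to density and stratigraphy.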

  1. Method and apparatus for acoustic imaging of objects in water

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2005-01-25

    A method, system and underwater camera for acoustic imaging of objects in water or other liquids includes an acoustic source for generating an acoustic wavefront for reflecting from a target object as a reflected wavefront. The reflected acoustic wavefront deforms a screen on an acoustic side and correspondingly deforms the opposing optical side of the screen. An optical processing system is optically coupled to the optical side of the screen and converts the deformations on the optical side of the screen into an optical intensity image of the target object.

  2. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general-purpose image processing software system that has been under continuous development since the late 1960s. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided roughly into two parts: a suite of applications programs and an executive that serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image-related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  3. Video Imaging System Particularly Suited for Dynamic Gear Inspection

    NASA Technical Reports Server (NTRS)

    Broughton, Howard (Inventor)

    1999-01-01

    A digital video imaging system that captures the image of a single tooth of interest on a rotating gear is disclosed. The system detects each complete rotation of the gear and divides that rotation into discrete time intervals, so that the moment each tooth of interest reaches a desired location is precisely determined; that location is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or can be stored to build a history that may be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.
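Dividing one detected rotation into per-tooth trigger instants reduces to simple timing arithmetic; the helper below is an illustrative sketch (the function and parameter names are hypothetical, not the patent's).

```python
def tooth_trigger_times(rev_period_s, n_teeth, tooth_offset_deg=0.0):
    """Divide one measured gear revolution into equal intervals and
    return the time at which each tooth passes the illuminated,
    camera-facing position.

    rev_period_s     -- period of one complete rotation, seconds
    n_teeth          -- number of teeth on the gear
    tooth_offset_deg -- angle from the index mark to the imaging position
    """
    dt = rev_period_s / n_teeth                     # time per tooth
    t0 = (tooth_offset_deg / 360.0) * rev_period_s  # offset to first tooth
    return [t0 + k * dt for k in range(n_teeth)]

# a 28-tooth gear at 1200 rpm -> 0.05 s per revolution
times = tooth_trigger_times(0.05, 28)
```

Firing the strobe and camera shutter at each returned instant yields exactly one frame per tooth per revolution, which is what makes long unattended acquisition runs tractable.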

  5. Research on defogging technology of video image based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Because of scattering by atmospheric particles, video images captured by outdoor surveillance systems have low contrast and brightness, which directly limits the application value of such systems. Traditional defogging techniques are mostly implemented in software as single-frame algorithms, and they involve heavy computation and high time complexity. Defogging of video images based on Digital Signal Processing (DSP) hardware, in turn, requires complex peripheral circuitry, cannot achieve real-time processing, and is hard to debug and upgrade. In this paper, building on an improved dark channel prior algorithm, we propose a video image defogging technique based on a Field Programmable Gate Array (FPGA). Compared with traditional defogging methods, high-resolution video can be processed in real time. The function modules of the system were designed in a hardware description language. The results show that the FPGA-based defogging system can process video at a minimum resolution of 640×480 in real time; after defogging, the brightness and contrast of the video are effectively improved. The proposed technique therefore has a wide range of applications, including aviation, forest fire prevention, national security and other important surveillance tasks.
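The paper's improved algorithm is not given in the abstract, but the standard dark channel prior defogging steps (local minimum filter, atmospheric light estimate, transmission map, radiance recovery) can be sketched in NumPy. The patch size and the omega/t_min constants below are conventional choices for this family of methods, not the paper's values.

```python
import numpy as np

def defog_dark_channel(img, patch=7, omega=0.95, t_min=0.1):
    """Single-image defogging via the dark channel prior (He et al.).
    img is float RGB in [0, 1]."""
    h, w, _ = img.shape
    pad = patch // 2
    mins = img.min(axis=2)                  # per-pixel channel minimum
    padded = np.pad(mins, pad, mode="edge")
    dark = np.empty_like(mins)
    for y in range(h):                      # local minimum filter
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    # atmospheric light A: color of the brightest dark-channel pixels
    idx = dark.ravel().argsort()[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].max(axis=0)
    # transmission map and scene radiance recovery
    t = 1.0 - omega * dark / max(float(A.max()), 1e-6)
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = 0.6 * np.random.default_rng(0).random((16, 16, 3)) + 0.4
clear = defog_dark_channel(hazy)
```

On an FPGA the per-pixel minimum filter and the division by the transmission map become streaming pipeline stages, which is what makes real-time operation feasible where the software version is too slow.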

  6. Imaging of acoustic fields using optical feedback interferometry.

    PubMed

    Bertling, Karl; Perchoux, Julien; Taimre, Thomas; Malkin, Robert; Robert, Daniel; Rakić, Aleksandar D; Bosch, Thierry

    2014-12-01

    This study introduces optical feedback interferometry as a simple and effective technique for the two-dimensional visualisation of acoustic fields. We present imaging results for several pressure distributions, including progressive waves, standing waves, and the diffraction and interference patterns of acoustic waves. The proposed solution has the distinct advantage of extreme optical simplicity and robustness, thus opening the way to a low-cost acoustic field imaging system based on mass-produced laser diodes. PMID:25606963

  7. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
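Codebook-based vector quantization of image blocks, the operation the neuroprocessor accelerates, can be sketched with a plain k-means codebook standing in for the self-organization network; the block and codebook sizes below are arbitrary illustrative choices.

```python
import numpy as np

def train_codebook(blocks, k=8, iters=10, seed=0):
    """Plain k-means vector quantization codebook (a stand-in for the
    self-organizing network described in the abstract)."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), size=k, replace=False)].copy()
    for _ in range(iters):
        dist = ((blocks[:, None, :] - codebook[None]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = blocks[labels == j]
            if len(members):               # keep empty cells unchanged
                codebook[j] = members.mean(axis=0)
    return codebook

# split a 32x32 image into 4x4 blocks; each block becomes one index
img = np.random.default_rng(1).random((32, 32))
blocks = img.reshape(8, 4, 8, 4).transpose(0, 2, 1, 3).reshape(64, 16)
cb = train_codebook(blocks)
indices = ((blocks[:, None, :] - cb[None]) ** 2).sum(axis=2).argmin(axis=1)
# 64 blocks of 16 samples compress to 64 indices plus an 8x16 codebook
```

The nearest-neighbor search per block is the massively parallel step that a VLSI neuroprocessor performs in hardware; only the indices (and the shared codebook) need to be transmitted or stored.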

  8. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this end, I have implemented a simple, inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  9. Interpreting Underwater Acoustic Images of the Upper Ocean Boundary Layer

    ERIC Educational Resources Information Center

    Ulloa, Marco J.

    2007-01-01

    A challenging task in physical studies of the upper ocean using underwater sound is the interpretation of high-resolution acoustic images. This paper covers a number of basic concepts necessary for undergraduate and postgraduate students to identify the most distinctive features of the images, providing a link with the acoustic signatures of…

  10. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I. and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid-state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element) and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 μm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode

  11. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
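Converting a calibrated needle position into a reading is a small geometric mapping. The function below is a hypothetical sketch of that final step (the patent's actual analysis of the calibrated indicator region is not specified in the abstract); all names and values are illustrative.

```python
import math

def read_meter(needle_tip, center, angle_min, angle_max, val_min, val_max):
    """Map a detected needle-tip pixel to a reading, given a calibrated
    dial center and the angle/value range of the scale."""
    dx = needle_tip[0] - center[0]
    dy = center[1] - needle_tip[1]        # image y axis points downward
    angle = math.degrees(math.atan2(dy, dx))
    frac = (angle - angle_min) / (angle_max - angle_min)
    return val_min + frac * (val_max - val_min)

# dial calibrated so 225 deg reads 0 and -45 deg reads 100;
# a needle pointing straight up (90 deg) should read mid-scale
reading = read_meter((100, 50), (100, 100), 225.0, -45.0, 0.0, 100.0)
```

Because the calibration (center, angle range, value range) is done once per meter region, the per-frame work reduces to locating the needle tip, which is what makes unattended video-based meter reading practical.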

  12. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help to assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in the criminal investigation utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as of reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of the image/video analysis work. Comparison of new proposed methods with the state of the art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation of scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. PMID:27182830

  13. Half-Tone Video Images Of Drifting Sinusoidal Gratings

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1991-01-01

    Digital technique for generation of slowly moving video image of sinusoidal grating avoids difficulty of transferring full image data from disk storage to image memory at conventional frame rates. Depends partly on trigonometric identity by which moving sinusoidal grating decomposed into two stationary patterns spatially and temporally modulated in quadrature. Makes motion appear smooth, even at speeds much less than one-tenth picture element per frame period. Applicable to digital video system in which image memory consists of at least 2 bits per picture element, and final brightness of picture element determined by contents of "lookup-table" memory programmed anew each frame period and indexed by coordinates of each picture element.
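The trigonometric identity underlying the technique, sin(kx − ωt) = sin(kx)cos(ωt) − cos(kx)sin(ωt), can be checked numerically: the drifting grating equals two stationary patterns in spatial quadrature, each modulated in temporal quadrature, so only the two lookup-table weights change per frame.

```python
import numpy as np

# sin(kx - wt) = sin(kx)cos(wt) - cos(kx)sin(wt)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
k, w, t = 3.0, 2.0, 0.37                  # arbitrary illustrative values

moving = np.sin(k * x - w * t)
stationary_a = np.sin(k * x)              # stored pattern 1
stationary_b = np.cos(k * x)              # stored pattern 2
recombined = stationary_a * np.cos(w * t) - stationary_b * np.sin(w * t)

err = float(np.max(np.abs(moving - recombined)))  # identity is exact
```

Since the two stationary patterns never change, only the per-frame coefficients cos(ωt) and sin(ωt) need to be reprogrammed (via the lookup table), which is why arbitrarily slow, smooth drift is achievable without moving image data each frame.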

  14. Transthoracic Cardiac Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Bradway, David Pierson

    This dissertation investigates the feasibility of a real-time transthoracic Acoustic Radiation Force Impulse (ARFI) imaging system to measure myocardial function non-invasively in a clinical setting. Heart failure is an important cardiovascular disease and contributes to the leading cause of death in developed countries. Patients exhibiting heart failure with a low left ventricular ejection fraction (LVEF) can often be identified by clinicians, but patients with preserved LVEF might go undetected if they do not exhibit other signs and symptoms of heart failure. These cases motivate the development of transthoracic ARFI imaging to aid the early diagnosis of the structural and functional heart abnormalities leading to heart failure. M-mode ARFI imaging utilizes ultrasonic radiation force to displace tissue several micrometers in the direction of wave propagation. Conventional ultrasound tracks the response of the tissue to the force. This measurement is repeated rapidly at one location through the cardiac cycle, measuring the timing of and relative changes in myocardial stiffness. ARFI imaging was previously shown capable of measuring myocardial properties and function via invasive open-chest and intracardiac approaches. The prototype imaging system described in this dissertation is capable of rapid acquisition, processing, and display of ARFI images and shear wave elasticity imaging (SWEI) movies. Also presented is a rigorous safety analysis, including finite element method (FEM) simulations of tissue heating, hydrophone intensity and mechanical index (MI) measurements, and thermocouple measurements of transducer face heating. For the pulse sequences used in the later animal and clinical studies, the results of the safety analysis indicate that transthoracic ARFI imaging can be safely applied at the rates and levels realizable on the prototype system. Preliminary data are presented from in vivo trials studying changes in myocardial stiffness occurring under normal and abnormal

  15. Passive imaging in nondiffuse acoustic wavefields.

    PubMed

    Mulargia, Francesco; Castellaro, Silvia

    2008-05-30

    A main property of diffuse acoustic wavefields is that, taken any two points, each of them can be seen as the source of waves and the other as the recording station. This property is shown to follow simply from array azimuthal selectivity and Huygens principle in a locally isotropic wavefield. Without time reversal, this property holds approximately also in anisotropic azimuthally uniform wavefields, implying much looser constraints for undistorted passive imaging than those required by a diffuse field. A notable example is the seismic noise field, which is generally nondiffuse, but is found to be compatible with a finite aperture anisotropic uniform wavefield. The theoretical predictions were confirmed by an experiment on seismic noise in the mainland of Venice, Italy. PMID:18518643
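The property that either of two points can act as source and the other as receiver is the basis of noise cross-correlation imaging: correlating two passive records recovers the inter-station travel time. The minimal synthetic sketch below fabricates the delay and noise model purely for illustration.

```python
import numpy as np

n = 4096
noise = np.random.default_rng(0).standard_normal(n)  # ambient wavefield

lag_true = 40                      # propagation delay A -> B, in samples
rec_a = noise
rec_b = np.roll(noise, lag_true)   # station B records the wave later

# cross-correlating the two passive records turns A into a virtual
# source recorded at B: the correlation peak sits at the travel time
xc = np.correlate(rec_b, rec_a, mode="full")
lags = np.arange(-n + 1, n)
lag_est = int(lags[np.argmax(xc)])
```

In a truly diffuse field this works for any station pair; the paper's point is that a merely anisotropic but azimuthally uniform noise field, like real seismic noise, is already sufficient for undistorted retrieval.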

  17. Aerospace video imaging systems for rangeland management

    NASA Technical Reports Server (NTRS)

    Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.

    1990-01-01

    This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.

  18. Laser-induced acoustic imaging of underground objects

    NASA Astrophysics Data System (ADS)

    Li, Wen; DiMarzio, Charles A.; McKnight, Stephen W.; Sauermann, Gerhard O.; Miller, Eric L.

    1999-02-01

    This paper introduces a new demining technique based on the photo-acoustic interaction, together with results from photo-acoustic experiments. We have buried different types of targets (metal, rubber and plastic) in different media (sand, soil and water) and imaged them by measuring reflection of acoustic waves generated by irradiation with a CO2 laser. Research has been focused on signal acquisition and signal processing. A deconvolution method using Wiener filters is utilized in data processing. Using a uniform spatial distribution of laser pulses at the ground's surface, we obtained 3D images of buried objects. The images give us a clear representation of the shapes of the underground objects. The quality of the images depends on the mismatch of acoustic impedance of the buried objects, the bandwidth and center frequency of the acoustic sensors and the selection of filter functions.
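The Wiener-filter deconvolution step can be sketched in the frequency domain; the wavelet, reflectivity series and SNR constant below are illustrative, not the paper's data.

```python
import numpy as np

def wiener_deconvolve(observed, kernel, snr=1e6):
    """Frequency-domain Wiener deconvolution: apply H* / (|H|^2 + 1/SNR),
    which approaches 1/H where the kernel has energy and rolls off
    gracefully where it does not."""
    n = len(observed)
    H = np.fft.fft(kernel, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(observed) * G))

# a short source wavelet blurs two reflectors at samples 20 and 35
wavelet = np.array([1.0, 0.6, 0.2])
reflectivity = np.zeros(64)
reflectivity[20] = 1.0
reflectivity[35] = -0.6
observed = np.convolve(reflectivity, wavelet, mode="full")[:64]

recovered = wiener_deconvolve(observed, wavelet)
# the two reflectors reappear at their true positions and amplitudes
```

The 1/SNR regularization term is what the abstract's "selection of filter functions" controls: too small and sensor noise is amplified at frequencies where the wavelet is weak, too large and the image is over-smoothed.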

  19. Optimization of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis to determine which images provide more information to the system was carried out, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643

  1. Reconfigurable machine for applications in image and video compression

    NASA Astrophysics Data System (ADS)

    Hartenstein, Reiner W.; Becker, Juergen; Kress, Rainier; Reinig, Helmut; Schmidt, Karin

    1995-02-01

    This paper presents a reconfigurable machine for applications in image or video compression. The machine can be used stand-alone or as a universal accelerator co-processor for desktop computers for image processing. It is well suited to image compression algorithms such as JPEG for still pictures or for encoding MPEG movies. It provides a much cheaper and more flexible hardware platform than special image compression ASICs and can substantially accelerate desktop computing.

  2. Image/video encryption using single shot digital holography

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Tang, Chen; Zhu, Xinjun; Li, Biyuan; Wang, Linlin; Yan, Xiusheng

    2015-05-01

    We propose a method for image/video encryption that combines double random-phase encoding in the Fresnel domain with single-shot digital holography. In this method, a complex object field can be reconstructed from only a single hologram frame using a constrained optimization method. The system, which requires neither multiple shots nor a Fourier lens, is simple and allows information to be encrypted dynamically. We test the proposed method on a computer-simulated image, a grayscale image and a video in AVI format, and we investigate the quality of the decryption process and the performance against noise attacks. The experimental results demonstrate the performance of the method.
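Classical double random-phase encoding in the Fresnel domain (without the paper's single-shot holographic recording and constrained-optimization reconstruction) can be sketched as two phase masks and two free-space propagations; the wavelength, pixel pitch and distances below are assumed values.

```python
import numpy as np

def fresnel(u, z, wavelength=633e-9, pitch=10e-6):
    """Angular-spectrum Fresnel propagation of field u over distance z."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

rng = np.random.default_rng(42)
img = rng.random((64, 64))                      # plaintext image
p1 = np.exp(2j * np.pi * rng.random((64, 64)))  # random phase mask 1
p2 = np.exp(2j * np.pi * rng.random((64, 64)))  # random phase mask 2

# encrypt: mask, propagate, mask again, propagate
cipher = fresnel(fresnel(img * p1, 0.05) * p2, 0.05)

# decrypt: undo each step in reverse order with conjugate masks
decrypted = np.abs(
    fresnel(fresnel(cipher, -0.05) * np.conj(p2), -0.05) * np.conj(p1)
)
err = float(np.max(np.abs(decrypted - img)))
```

Because the propagation transfer function has unit magnitude, back-propagation with negative z is an exact inverse, so the masks and distances together form the key; the paper's contribution is recovering the complex cipher field from a single recorded hologram.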

  3. Image Effects in the Appreciation of Video Rock.

    ERIC Educational Resources Information Center

    Zillman, Dolf; Mundorf, Norbert

    1987-01-01

    Assesses the addition of sexual and/or violent images to rock music videos whose originals were both nonsexual and nonviolent. Notes that sexual stimuli intensified music appreciation in males and females, that violent stimuli sometimes did the same, but that the combination of sexual and violent images failed to enhance appreciation of the music.…

  4. Combined Photoacoustic-Acoustic Technique for Crack Imaging

    NASA Astrophysics Data System (ADS)

    Zakrzewski, J.; Chigarev, N.; Tournat, V.; Gusev, V.

    2010-01-01

    Nonlinear imaging of a crack has been performed by combining a common photoacoustic imaging technique with additional acoustic loading. Acoustic signals at two different fundamental frequencies were launched into the sample, one photoacoustically through heating of the sample surface by an intensity-modulated scanning laser beam and the other by a piezoelectric transducer. The acoustic signal at mixed frequencies, generated due to system nonlinearity, was detected by an accelerometer. Different physical mechanisms of the nonlinearity contributing to the contrast in linear and nonlinear photoacoustic imaging of the crack are discussed.

  5. Axial resolution of laser opto-acoustic imaging: influence of acoustic attenuation and diffraction

    NASA Astrophysics Data System (ADS)

    Esenaliev, Rinat O.; Alma, Herve; Tittel, Frank K.; Oraevsky, Alexander A.

    1998-05-01

    Laser optoacoustic imaging can be applied for characterization of layered and heterogeneous tissue structures in vivo. Accurate tissue characterization may provide: (1) means for medical diagnoses, and (2) pretreatment tissue properties important for therapeutic laser procedures. The axial resolution of optoacoustic imaging is higher than that of optical imaging. However, the resolution may degrade due to attenuation of high-frequency ultrasonic waves in tissue and/or diffraction of low-frequency acoustic waves. The goal of this study was to determine the axial resolution as a function of acoustic attenuation and diffraction upon propagation of laser-induced pressure waves in water with an absorbing layer, in breast phantoms, and in biological tissues. Acoustic pressure measurements were performed in absolute values using piezoelectric transducers. A layer or a small sphere of absorbing medium was placed within a medium with lower optical absorption. The distance between the acoustic transducer and the absorbing object was varied, so that the effects of acoustic attenuation and diffraction could be observed. The location of layers or spheres was measured from recorded optoacoustic pressure profiles and compared with real values measured with a micrometer. The experimental results were analyzed using theoretical models for spherical and planar acoustic waves. Our studies demonstrated that despite strong acoustic attenuation of high-frequency ultrasonic waves, the axial resolution of laser optoacoustic imaging may be as high as 20 micrometers for tissue layers located at a 5-mm depth. An axial resolution of 10 micrometers to 20 micrometers was demonstrated for an absorbing layer at a distance of 5 cm in water, when the resolution is affected only by diffraction. Acoustic transducers employed in optoacoustic imaging can have either high sensitivity or fast temporal response. Therefore, a high resolution may not be achieved with sensitive transducers utilized in

  6. A Surface Approximation Method for Image and Video Correspondences.

    PubMed

    Huang, Jingwei; Wang, Bin; Wang, Wenping; Sen, Pradeep

    2015-12-01

    Although finding correspondences between similar images is an important problem in image processing, existing algorithms cannot find accurate and dense correspondences in images with significant changes in lighting or transformation, or with non-rigid objects. This paper proposes a novel method for finding accurate and dense correspondences between images even in these difficult situations. Starting with the non-rigid dense correspondence algorithm [1] to generate an initial correspondence map, we propose a new geometric filter that uses cubic B-spline surfaces to approximate the correspondence mapping functions for shared objects in both images, thereby eliminating outliers and noise. We then propose an iterative algorithm which enlarges the region containing valid correspondences. Compared with existing methods, our method is more robust to significant changes in lighting, color, or viewpoint. Furthermore, we demonstrate how to extend our surface approximation method to video editing by first generating a reliable correspondence map between a given source frame and each frame of a video. The user can then edit the source frame, and the changes are automatically propagated through the entire video using the correspondence map. To evaluate our approach, we examine applications of unsupervised image recognition and video texture editing, and show that our algorithm produces better results than those from state-of-the-art approaches. PMID:26241974

  7. Magnetic resonance imaging of acoustic streaming: absorption coefficient and acoustic field shape estimation.

    PubMed

    Madelin, Guillaume; Grucker, Daniel; Franconi, Jean-Michel; Thiaudiere, Eric

    2006-07-01

    In this study, magnetic resonance imaging (MRI) is used to visualize acoustic streaming in liquids. A single-shot spin echo sequence (HASTE) with a saturation band perpendicular to the acoustic beam permits the acquisition of an instantaneous image of the flow due to the application of ultrasound. An average acoustic streaming velocity can be estimated from the MR images, from which the ultrasonic absorption coefficient and the bulk viscosity of different glycerol-water mixtures can be deduced. In the same way, this MRI method could be used to assess the acoustic field and time-average power of ultrasonic transducers in water (or other liquids with known physical properties), after calibration of a geometrical parameter that is dependent on the experimental setup. PMID:16650447

  8. Acoustic radiation force-based elasticity imaging methods

    PubMed Central

    Palmeri, Mark L.; Nightingale, Kathryn R.

    2011-01-01

    Conventional diagnostic ultrasound images portray differences in the acoustic properties of soft tissues, whereas ultrasound-based elasticity images portray differences in the elastic properties of soft tissues (i.e., stiffness, viscosity). The benefit of elasticity imaging lies in the fact that many soft tissues share similar ultrasonic echogenicities but may have different mechanical properties, which can be used to clearly visualize normal anatomy and delineate pathological lesions. Acoustic radiation force-based elasticity imaging methods use acoustic radiation force to transiently deform soft tissues, and the dynamic displacement response of those tissues is measured ultrasonically and used to estimate the tissue's mechanical properties. Both qualitative images and quantitative elasticity metrics can be reconstructed from these measured data, providing complementary information to both diagnose and longitudinally monitor disease progression. Recently, acoustic radiation force-based elasticity imaging techniques have moved from the laboratory to the clinical setting, where clinicians are beginning to characterize tissue stiffness as a diagnostic metric, and commercial implementations of radiation force-based ultrasonic elasticity imaging are beginning to appear on the market. This article provides an overview of acoustic radiation force-based elasticity imaging, including a review of the relevant soft tissue material properties, a review of radiation force-based methods that have been proposed for elasticity imaging, and a discussion of current research and commercial realizations of radiation force-based elasticity imaging technologies. PMID:22419986

  9. Video rate multispectral imaging for camouflaged target detection

    NASA Astrophysics Data System (ADS)

    Henry, Sam

    2015-05-01

    The ability to detect and identify camouflaged targets is critical in combat environments. Hyperspectral and multispectral cameras allow a soldier to identify threats more effectively than traditional RGB cameras, owing both to increased color resolution and to the ability to see beyond visible light. Static imagers have proven successful; however, the development of video-rate imagers allows continuous real-time target identification and tracking. This paper presents an analysis of existing anomaly detection algorithms and how they can be adapted to video rates, and presents a general-purpose semisupervised real-time anomaly detection algorithm using multiple frame sampling.
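    As a sketch of the kind of anomaly detector such a survey covers, the following implements the classical global RX (Reed-Xiaoli) detector on a synthetic multispectral cube; the paper's own semisupervised real-time algorithm is not reproduced here, and the data and dimensions are invented for illustration.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector for a (H, W, B) multispectral cube.

    Returns the Mahalanobis distance of each pixel spectrum from the
    scene mean; large values flag spectrally anomalous pixels."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(b))  # regularize for stability
    d = x - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)

rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, (32, 32, 8))  # 8-band synthetic background
cube[5, 5] += 8.0                         # implant one spectral anomaly
scores = rx_detector(cube)                # peak lands on the implanted pixel
```

    Running RX per frame (or on a sliding window of frames) is one common way such detectors are pushed toward video rates, at the cost of re-estimating the background statistics.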

  10. Acoustic Radiation Force Elasticity Imaging in Diagnostic Ultrasound

    PubMed Central

    Doherty, Joshua R.; Trahey, Gregg E.; Nightingale, Kathryn R.; Palmeri, Mark L.

    2013-01-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological conditions, and with the ability to image these features in vivo, elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed non-invasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force, and an overview of the various acoustic radiation force elasticity imaging methods. PMID:23549529

  11. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    Since many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  12. Calibration method for video and radiation imagers

    DOEpatents

    Cunningham, Mark F.; Fabris, Lorenzo; Gee, Timothy F.; Goddard, Jr., James S.; Karnowski, Thomas P.; Ziock, Klaus-peter

    2011-07-05

    The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least-squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
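    A minimal 1-D sketch of the least-squares step described above, assuming an affine pixel-to-world map; the marker positions and scale below are invented for illustration and are not from the patent.

```python
import numpy as np

def fit_pixel_to_world(pixel_x, world_x):
    """Least-squares fit of world = a * pixel + b from calibration
    markers observed in both the imager and real-world coordinates."""
    A = np.column_stack([pixel_x, np.ones_like(pixel_x)])
    (a, b), *_ = np.linalg.lstsq(A, world_x, rcond=None)
    return a, b

pixels = np.array([10.0, 20.0, 30.0, 40.0])   # marker pixel x-coordinates
world = 0.05 * pixels + 1.0                   # synthetic: 5 cm per pixel, 1 m offset
a, b = fit_pixel_to_world(pixels, world)      # recovers (0.05, 1.0)
```

    Once `a` and `b` are known, any real-world x-coordinate (e.g. of a tracked vehicle) maps back to an imager pixel position; a y-dependent scale parameter would extend this map, as the patent describes.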

  13. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.

  14. Edge adaptive intra field de-interlacing of video images

    NASA Astrophysics Data System (ADS)

    Lachine, Vladimir; Smith, Gregory; Lee, Louie

    2013-02-01

    Expanding an image by an arbitrary scale factor to create an enlarged image is a crucial image processing operation. De-interlacing is an example of such an operation, where a video field is enlarged in the vertical direction with a scale factor of 1 to 2. The most advanced de-interlacing algorithms use several consecutive input fields to generate one output frame. In order to save hardware resources in video processors, missing lines in each field may be generated without reference to the other fields. Line doubling, known as "bobbing", is the simplest intra-field de-interlacing method. However, it may generate visual artifacts. For example, interpolation of an inserted line from a few neighboring lines by a vertical filter may produce visual artifacts such as "jaggies." In this work we present an edge-adaptive image up-scaling and/or enhancement algorithm which can produce "jaggies"-free video output frames. As a first step, an edge and its parameters at each interpolated pixel are detected from the gradient squared tensor based on local signal variances. Then, according to the edge parameters, including orientation, anisotropy, and variance strength, the algorithm determines the footprint and frequency response of the two-dimensional interpolation filter for the output pixel. The filter's coefficients are defined by the edge parameters, so that the quality of the output frame is controlled by local content. The proposed method may be used for image enlargement or enhancement (for example, anti-aliasing without resampling). It has been implemented in hardware in a video display processor for intra-field de-interlacing of video images.
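    The simplest intra-field method mentioned above, "bobbing" with vertical interpolation, can be sketched as follows; the paper's edge-adaptive filtering (gradient squared tensor, orientation-dependent filter footprints) is deliberately not reproduced, so this is the baseline that exhibits the "jaggies" the paper targets.

```python
import numpy as np

def bob_deinterlace(field, top=True):
    """Intra-field de-interlacing by vertical interpolation ("bobbing").

    `field` holds only the even (top=True) or odd lines of a frame;
    each missing line is the average of its vertical neighbours."""
    h, w = field.shape
    start = 0 if top else 1
    frame = np.zeros((2 * h, w), dtype=float)
    frame[start::2] = field                      # copy the known lines
    for y in range(1 - start, 2 * h, 2):         # fill the missing lines
        above = frame[y - 1] if y - 1 >= 0 else frame[y + 1]
        below = frame[y + 1] if y + 1 < 2 * h else frame[y - 1]
        frame[y] = 0.5 * (above + below)
    return frame

# Top field of a vertical ramp: rows carry the values 0, 2, 4, 6.
field = np.tile(np.arange(0.0, 8.0, 2.0)[:, None], (1, 3))
frame = bob_deinterlace(field, top=True)  # interpolated rows become 1, 3, 5
```

    On a vertical ramp the purely vertical filter is exact; on diagonal edges it averages across the edge, which is precisely where an edge-adaptive footprint pays off.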

  15. Low-noise video amplifiers for imaging CCD's

    NASA Technical Reports Server (NTRS)

    Scinicariello, F.

    1976-01-01

    Various techniques were developed which enable the CCD (charge coupled device) imaging array user to obtain optimum performance from the device. A CCD video channel was described, and detector-preamplifier interface requirements were examined. A noise model for the system was discussed at length and laboratory data presented and compared to predicted results.

  16. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity and visual information fidelity (VIF) with statistical sensitivity, were proposed in view of the differences between reference and distorted frames at a pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies incorporating HVA have been proposed, but their methods for extracting visual attention are too complex for real-time realization. We take advantage of the phase spectrum of quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. We then propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding weights related to saliency features to the original IQA or VQA criteria. Experimental results show that our saliency-based strategy approaches human subjective assessment more closely than the original IQA or VQA methods, and does not take more time because of the fast PQFT algorithm.
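    For concreteness, here is the standard PSNR computation the abstract refers to, together with a toy saliency-weighted MSE showing the general idea of weighting pixel errors by a saliency map; the PQFT saliency extraction itself is not implemented, and the flat saliency map below is only a placeholder.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB (full-reference baseline)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(dist, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def saliency_weighted_mse(ref, dist, saliency):
    """MSE with per-pixel weights taken from a precomputed saliency map."""
    w = saliency / saliency.sum()
    return np.sum(w * (np.asarray(ref, float) - np.asarray(dist, float)) ** 2)

ref = np.full((16, 16), 100.0)   # flat reference image
dist = ref + 5.0                 # uniform error of 5 grey levels -> MSE = 25
sal = np.ones_like(ref)          # flat saliency reduces to plain MSE
```

    Replacing `sal` with a PQFT-derived saliency map concentrates the error weight on attended regions, which is the modification the paper applies to the original criteria.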

  17. Acoustic force mapping in a hybrid acoustic-optical micromanipulation device supporting high resolution optical imaging.

    PubMed

    Thalhammer, Gregor; McDougall, Craig; MacDonald, Michael Peter; Ritsch-Marte, Monika

    2016-04-12

    Many applications in the life-sciences demand non-contact manipulation tools for forceful but nevertheless delicate handling of various types of sample. Moreover, the system should support high-resolution optical imaging. Here we present a hybrid acoustic/optical manipulation system which utilizes a transparent transducer, making it compatible with high-NA imaging in a microfluidic environment. The powerful acoustic trapping within a layered resonator, which is suitable for highly parallel particle handling, is complemented by the flexibility and selectivity of holographic optical tweezers, with the specimens being under high quality optical monitoring at all times. The dual acoustic/optical nature of the system lends itself to optical measurement of the exact acoustic force map, by means of direct force measurements on an optically trapped particle. For applications with (ultra-)high demands on the precision of the force measurements, the position of the objective used for the high-NA imaging may have a significant influence on the acoustic force map in the probe chamber. We have characterized this influence experimentally, and the findings were confirmed by model simulations. We show that it is possible to design the chamber and to choose the operating point in such a way as to avoid perturbations due to the objective lens. Moreover, we found that measuring the electrical impedance of the transducer provides an easy indicator for the acoustic resonances. PMID:27025398

  18. Image-guided transorbital procedures with endoscopic video augmentation

    PubMed Central

    DeLisi, Michael P.; Mawn, Louise A.; Galloway, Robert L.

    2014-01-01

    Purpose: Surgical interventions to the orbital space behind the eyeball are limited to highly invasive procedures due to the confined nature of the region along with the presence of several intricate soft tissue structures. A minimally invasive approach to orbital surgery would enable several therapeutic options, particularly new treatment protocols for optic neuropathies such as glaucoma. The authors have developed an image-guided system for the purpose of navigating a thin flexible endoscope to a specified target region behind the eyeball. Navigation within the orbit is particularly challenging despite its small volume, as the presence of fat tissue occludes the endoscopic visual field while the surgeon must constantly be aware of optic nerve position. This research investigates the impact of endoscopic video augmentation to targeted image-guided navigation in a series of anthropomorphic phantom experiments. Methods: A group of 16 surgeons performed a target identification task within the orbits of four skull phantoms. The task consisted of identifying the correct target, indicated by the augmented video and the preoperative imaging frames, out of four possibilities. For each skull, one orbital intervention was performed with video augmentation, while the other was done with the standard image guidance technique, in random order. Results: The authors measured a target identification accuracy of 95.3% and 85.9% for the augmented and standard cases, respectively, with statistically significant improvement in procedure time (Z = −2.044, p = 0.041) and intraoperator mean procedure time (Z = 2.456, p = 0.014) when augmentation was used. Conclusions: Improvements in both target identification accuracy and interventional procedure time suggest that endoscopic video augmentation provides valuable additional orientation and trajectory information in an image-guided procedure. Utilization of video augmentation in transorbital interventions could further minimize

  19. Data-Driven Affective Filtering for Images and Videos.

    PubMed

    Li, Teng; Ni, Bingbing; Xu, Mengdi; Wang, Meng; Gao, Qingwei; Yan, Shuicheng

    2015-10-01

    In this paper, a novel system is developed for synthesizing user-specified emotions onto arbitrary input images or videos. Rather than defining the visual affective model based on empirical knowledge, a data-driven learning framework is proposed to extract emotion-related knowledge from a set of emotion-annotated images. In a divide-and-conquer manner, the images are clustered into several emotion-specific scene subgroups for model learning. The visual affection is modeled with Gaussian mixture models based on color features of local image patches. For the purpose of affective filtering, the feature distribution of the target is aligned to the statistical model constructed from the emotion-specific scene subgroup through a piecewise linear transformation. The transformation is derived through a learning algorithm which incorporates a regularization term enforcing spatial smoothness, edge preservation, and temporal smoothness in the derived image or video transformation. Optimization of the objective function is carried out via a standard nonlinear method. Extensive experimental results and user studies demonstrate that the proposed affective filtering framework can yield effective and natural effects for images and videos. PMID:25675469

  20. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers remained unchanged for almost two decades. When the famous Common Modules were employed, the thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, and digital video streaming is nowadays state-of-the-art. The reasons the output technique in the thermal world stayed unchanged for so long are the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in the pixel count of thermal imagers (TIs) compared to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  1. Techniques for estimating blood pressure variation using video images.

    PubMed

    Sugita, Norihiro; Obara, Kazuma; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Homma, Noriyasu

    2015-08-01

    It is important to know about a sudden blood pressure change that occurs in everyday life and may pose a danger to human health. However, monitoring the blood pressure variation in daily life is difficult because a bulky and expensive sensor is needed to measure the blood pressure continuously. In this study, a new non-contact method is proposed to estimate the blood pressure variation using video images. In this method, the pulse propagation time difference or instantaneous phase difference is calculated between two pulse waves obtained from different parts of a subject's body captured by a video camera. The forehead, left cheek, and right hand are selected as regions to obtain pulse waves. Both the pulse propagation time difference and instantaneous phase difference were calculated from the video images of 20 healthy subjects performing the Valsalva maneuver. These indices are considered to have a negative correlation with the blood pressure variation because they approximate the pulse transit time obtained from a photoplethysmograph. However, the experimental results showed that the correlation coefficients between the blood pressure and the proposed indices were approximately 0.6 for the pulse wave obtained from the right hand. This result is considered to be due to the difference in the transmission depth into the skin between the green and infrared light used as light sources for the video image and conventional photoplethysmogram, respectively. In addition, the difference in the innervation of the face and hand may be related to the results. PMID:26737225
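    The pulse propagation time difference between two such pulse waves can be estimated, for example, by locating the peak of their cross-correlation; this sketch uses synthetic sinusoidal "pulse waves" rather than video-derived signals, and the paper's instantaneous-phase variant (typically via a Hilbert transform) is not shown.

```python
import numpy as np

def propagation_delay(sig_a, sig_b, fs):
    """Delay in seconds of sig_b relative to sig_a, from the peak of
    their cross-correlation; a stand-in for the pulse propagation time
    difference between two body sites."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(b, a, mode='full')
    lag = int(np.argmax(corr)) - (len(a) - 1)
    return lag / fs

fs = 100.0                                       # video-like sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
pulse_a = np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm "pulse wave"
pulse_b = np.sin(2 * np.pi * 1.2 * (t - 0.05))   # same wave, 50 ms later
delay = propagation_delay(pulse_a, pulse_b, fs)  # recovers ~0.05 s
```

    The resolution of this estimate is one sample period, which is why frame rate matters for video-based pulse transit time surrogates.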

  2. Investigation of an acoustical holography system for real-time imaging

    NASA Astrophysics Data System (ADS)

    Fecht, Barbara A.; Andre, Michael P.; Garlick, George F.; Shelby, Ronald L.; Shelby, Jerod O.; Lehman, Constance D.

    1998-07-01

    A new prototype imaging system based on ultrasound transmission through the object of interest -- acoustical holography -- was developed which incorporates significant improvements in acoustical and optical design. This system is being evaluated for potential clinical application in the musculoskeletal system, interventional radiology, pediatrics, monitoring of tumor ablation, vascular imaging, and breast imaging. The system's limiting resolution was estimated using a line-pair target with decreasing line thickness and equal separation. For a swept frequency beam from 2.6 - 3.0 MHz, the minimum resolution was 0.5 lp/mm. Apatite crystals were suspended in castor oil to approximate breast microcalcifications. Crystals from 0.425 - 1.18 mm in diameter were well resolved in the acoustic zoom mode. Needle visibility was examined with both a 14-gauge biopsy needle and a 0.6 mm needle. The needle tip was clearly visible throughout the dynamic imaging sequence as it was slowly inserted into an RMI tissue-equivalent breast biopsy phantom. A selection of human images was acquired from several volunteers: a 25-year-old female volunteer with normal breast tissue, a lateral view of the elbow joint showing muscle fascia and tendon insertions, and the superficial vessels in the forearm. Real-time video images of these studies will be presented. In all of these studies, conventional sonography was used for comparison. These preliminary investigations with the new prototype acoustical holography system showed favorable results in comparison to state-of-the-art pulse-echo ultrasound and demonstrate it to be suitable for further clinical study. The new patient interfaces will facilitate orthopedic soft tissue evaluation, study of superficial vascular structures, and potentially breast imaging.

  3. Combination of acoustical radiosity and the image source method.

    PubMed

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho; Jacobsen, Finn

    2013-06-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part. The model is based on conservation of acoustical energy. Losses are taken into account by the energy absorption coefficient, and the diffuse reflections are controlled via the scattering coefficient, which defines the portion of energy that has been diffusely reflected. The way the model is formulated allows for a dynamic control of the image source production, so that no fixed maximum reflection order is required. The model is optimized for energy impulse response predictions in arbitrary polyhedral rooms. The predictions are validated by comparison with published measured data for a real music studio hall. The proposed model turns out to be promising for acoustic predictions providing a high level of detail and accuracy. PMID:23742350
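    A minimal sketch of the image source ingredient of such a combined model: first-order image sources in an axis-aligned box room, yielding specular arrival times and energies. The radiosity (diffuse) term, higher reflection orders, the scattering coefficient split, and the paper's dynamic order control are all omitted; the geometry and absorption value are invented for illustration.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def first_order_images(src, room):
    """First-order image sources of `src` in a box room (Lx, Ly, Lz):
    one mirror image per wall (walls at 0 and L on each axis)."""
    images = []
    for axis, L in enumerate(room):
        lo = list(src); lo[axis] = -src[axis]           # mirror in wall at 0
        hi = list(src); hi[axis] = 2 * L - src[axis]    # mirror in wall at L
        images += [tuple(lo), tuple(hi)]
    return images

def arrival_times(src, rcv, room, alpha=0.3):
    """Direct sound plus first-order specular reflections as
    (arrival time in s, received energy) pairs, sorted by time."""
    paths = [tuple(src)] + first_order_images(src, room)
    out = []
    for k, p in enumerate(paths):
        r = np.linalg.norm(np.subtract(p, rcv))
        energy = (1.0 / r ** 2) * ((1.0 - alpha) if k > 0 else 1.0)
        out.append((r / C, energy))
    return sorted(out)

room = (5.0, 4.0, 3.0)                                   # shoebox room, m
echoes = arrival_times((1.0, 2.0, 1.5), (4.0, 2.0, 1.5), room)
```

    Each wall's image contributes one specular echo; the combined model would add the diffusely reflected energy (via radiosity) scaled by the scattering coefficient, and spawn higher-order images dynamically.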

  4. Face retrieval in video sequences using Web images database

    NASA Astrophysics Data System (ADS)

    Leo, M.; Battisti, F.; Carli, M.; Neri, A.

    2015-03-01

    Face processing techniques for automatic recognition in broadcast video attract research interest because of their value in applications such as video indexing, retrieval, and summarization. In multimedia press review, the automatic annotation of broadcast news programs is a challenging task because people can appear with large variations in appearance, such as hair styles, illumination conditions, and poses, that make the comparison between similar faces more difficult. In this paper, a technique for automatic face identification in TV broadcasting programs based on a gallery of faces downloaded from the Web is proposed. The approach is based on the joint use of the Scale Invariant Feature Transform descriptor and Eigenfaces-based algorithms, and it has been tested on video sequences using a database of images acquired via a web search. Experimental results show that the joint use of these two approaches improves the recognition rate for both Standard Definition (SD) and High Definition (HD) content.

  5. Acoustic imaging in a water filled metallic pipe

    SciTech Connect

    Kolbe, W.F.; Turko, B.T.; Leskovar, B.

    1984-04-01

    A method is described for the imaging of the interior of a water-filled metallic pipe using acoustical techniques. The apparatus consists of an array of 20 acoustic transducers mounted circumferentially around the pipe. Each transducer is pulsed in sequence, and the echoes resulting from bubbles in the interior are digitized and processed by a computer to generate an image. The electronic control and digitizing system and the software processing of the echo signals are described. The performance of the apparatus is illustrated by the imaging of simulated bubbles consisting of thin-walled glass spheres suspended in the pipe.

  6. Time-Reversal Acoustics and Maximum-Entropy Imaging

    SciTech Connect

    Berryman, J G

    2001-08-22

    Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.
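    The decoupling of scatterers into eigenfunctions can be illustrated with the singular value decomposition of a toy multistatic response matrix (the idea behind DORT-style processing): each well-resolved point scatterer contributes one significant singular value. The array geometry, wavelength, and Born-approximation Green's functions below are all invented for illustration and are not the paper's formulation.

```python
import numpy as np

def multistatic_matrix(array_x, targets, wavelength=0.01):
    """Toy multistatic response matrix K: K[i, j] is the field at
    element i when element j transmits (elements on the line y = 0,
    point scatterers under the Born approximation)."""
    k = 2 * np.pi / wavelength
    K = np.zeros((len(array_x), len(array_x)), dtype=complex)
    for (tx, ty), strength in targets:
        r = np.hypot(array_x - tx, ty)       # element-to-target distances
        g = np.exp(1j * k * r) / r           # free-space Green's vector
        K += strength * np.outer(g, g)       # one rank-1 term per scatterer
    return K

array_x = np.linspace(-0.1, 0.1, 16)                  # 16-element array, m
targets = [((0.02, 0.5), 1.0), ((-0.03, 0.5), 0.3)]   # two point scatterers
K = multistatic_matrix(array_x, targets)
sv = np.linalg.svd(K, compute_uv=False)  # two dominant values, rest ~ 0
```

    Back-propagating the singular vector paired with one dominant singular value focuses the field on the corresponding scatterer, which is the localization aid the abstract describes.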

  7. Acoustic Radiation Force Impulse (ARFI) Imaging: a Review

    PubMed Central

    Nightingale, Kathy

    2012-01-01

    Acoustic radiation force based elasticity imaging methods are under investigation by many groups. These methods differ from traditional ultrasonic elasticity imaging methods in that they do not require compression of the transducer, and are thus expected to be less operator dependent. Methods have been developed that utilize impulsive (i.e. < 1 ms), harmonic (pulsed), and steady state radiation force excitations. The work discussed herein utilizes impulsive methods, for which two imaging approaches have been pursued: 1) monitoring the tissue response within the radiation force region of excitation (ROE) and generating images of relative differences in tissue stiffness (Acoustic Radiation Force Impulse (ARFI) imaging); and 2) monitoring the speed of shear wave propagation away from the ROE to quantify tissue stiffness (Shear Wave Elasticity Imaging (SWEI)). For these methods, a single ultrasound transducer on a commercial ultrasound system can be used both to generate acoustic radiation force in tissue and to monitor the tissue displacement response. The response of tissue to this transient excitation is complicated and depends upon tissue geometry, radiation force field geometry, and tissue mechanical and acoustic properties. Higher shear wave speeds and smaller displacements are associated with stiffer tissues, and slower shear wave speeds and larger displacements occur with more compliant tissues. ARFI images have spatial resolution comparable to that of B-mode, often with greater contrast, providing matched, adjunctive information. SWEI images provide quantitative information about tissue stiffness, typically with lower spatial resolution. A review of these methods and examples of clinical applications are presented herein. PMID:22545033
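    The quantitative step in SWEI rests on the standard relation mu = rho * c_s**2 between shear wave speed and shear modulus (with E ≈ 3*mu for nearly incompressible soft tissue); the speeds below are illustrative, not clinical values.

```python
def shear_modulus(c_shear, density=1000.0):
    """Shear modulus (Pa) from a measured shear wave speed (m/s):
    mu = rho * c_s**2. Returns (mu, E), using E = 3 * mu for a nearly
    incompressible solid such as soft tissue."""
    mu = density * c_shear ** 2
    return mu, 3.0 * mu

mu_soft, e_soft = shear_modulus(1.0)    # slow shear wave -> compliant tissue
mu_stiff, e_stiff = shear_modulus(3.0)  # faster shear wave -> stiffer tissue
```

    Tripling the shear wave speed raises the inferred stiffness ninefold (1 kPa vs. 9 kPa shear modulus here), which is why shear wave speed is such a sensitive stiffness surrogate.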

  8. Biased lineup instructions and face identification from video images.

    PubMed

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video. PMID:18318406

  9. Acoustic-optical imaging without immersion

    NASA Technical Reports Server (NTRS)

    Liu, H.

    1979-01-01

    System using membraneous end wall of Bragg cell to separate test specimen from acoustic transmission medium, operates in real time and uses readily available optical components. System can be easily set up and maintained by people with little or no training in holography.

  10. Quantitative Determination of Lateral Mode Dispersion in Film Bulk Acoustic Resonators through Laser Acoustic Imaging

    SciTech Connect

    Ken Telschow; John D. Larson III

    2006-10-01

Film Bulk Acoustic Resonators are useful for many signal processing applications. Detailed knowledge of their operating properties is needed to optimize their design for specific applications. The finite size of these resonators precludes their use in single acoustic modes; rather, multiple wave modes, such as lateral wave modes, are always excited concurrently. In order to determine the contributions of these modes, we have been using a newly developed full-field laser acoustic imaging approach to directly measure their amplitude and phase throughout the resonator. This paper describes new results comparing modeling of both elastic and piezoelectric effects in the active material with imaging measurement of all excited modes. Fourier transformation of the acoustic amplitude and phase displacement images provides a quantitative determination of excited mode amplitude and wavenumber at any frequency. Images combined at several frequencies form a direct visualization of lateral mode excitation and dispersion for the device under test, allowing mode identification and comparison with predicted operational properties. Discussion and analysis are presented for modes near the first longitudinal thickness resonance (~900 MHz) in an AlN thin film resonator. Plate wave modeling, taking account of material crystalline orientation, elastic and piezoelectric properties and overlayer metallic films, will be discussed in relation to direct image measurements.
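The Fourier step described above, converting complex (amplitude and phase) displacement images into mode amplitude versus wavenumber, can be illustrated in one dimension with a synthetic single-mode snapshot; the scan length, sampling, and mode wavelength below are assumptions, not the device's values:

```python
import numpy as np

L, n = 200e-6, 256                       # 200 um scan line, 256 samples
x = np.linspace(0.0, L, n, endpoint=False)
k_true = 2 * np.pi / 20e-6               # lateral mode with 20 um wavelength
u = np.exp(1j * k_true * x)              # measured complex displacement

spec = np.fft.fft(u)                     # spatial spectrum of the snapshot
k_axis = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
k_est = k_axis[np.argmax(np.abs(spec))]  # wavenumber of the excited mode
```

Repeating this at several drive frequencies yields the (frequency, wavenumber) pairs that trace out the lateral mode dispersion.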

  11. Synthetic aperture acoustic imaging of non-metallic cords

    NASA Astrophysics Data System (ADS)

    Glean, Aldo A. J.; Good, Chelsea E.; Vignola, Joseph F.; Judge, John A.; Ryan, Teresa J.; Bishop, Steven S.; Gugino, Peter M.; Soumekh, Mehrdad

    2012-06-01

    This work presents a set of measurements collected with a research prototype synthetic aperture acoustic (SAA) imaging system. SAA imaging is an emerging technique that can serve as an inexpensive alternative or logical complement to synthetic aperture radar (SAR). The SAA imaging system uses an acoustic transceiver (speaker and microphone) to project acoustic radiation and record backscatter from a scene. The backscattered acoustic energy is used to generate information about the location, morphology, and mechanical properties of various objects. SAA detection has a potential advantage when compared to SAR in that non-metallic objects are not readily detectable with SAR. To demonstrate basic capability of the approach with non-metallic objects, targets are placed in a simple, featureless scene. Nylon cords of five diameters, ranging from 2 to 15 mm, and a joined pair of 3 mm fiber optic cables are placed in various configurations on flat asphalt that is free of clutter. The measurements were made using a chirp with a bandwidth of 2-15 kHz. The recorded signal is reconstructed to form a two-dimensional image of the distribution of acoustic scatterers within the scene. The goal of this study was to identify basic detectability characteristics for a range of sizes and configurations of non-metallic cord. It is shown that for sufficiently small angles relative to the transceiver path, the SAA approach creates adequate backscatter for detectability.

  12. Performance evaluation of a biometric system based on acoustic images.

    PubMed

    Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I; Villacorta, Juan J

    2011-01-01

    An acoustic electronic scanning array for acquiring images from a person using a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708
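FMR and FNMR as used here are threshold-dependent error rates: sweeping a decision threshold over genuine (same-person) and impostor match scores traces the ROC, and the point where the two rates are closest approximates the equal error rate. A small sketch with made-up scores:

```python
import numpy as np

# Hypothetical match scores (higher = more similar).
genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])   # same-person comparisons
impostor = np.array([0.3, 0.4, 0.2, 0.5, 0.35])   # different-person comparisons

def rates(threshold):
    fnmr = float(np.mean(genuine < threshold))    # genuine pairs rejected
    fmr = float(np.mean(impostor >= threshold))   # impostor pairs accepted
    return fmr, fnmr

# Sweep thresholds to trace the ROC; the operating point where the two
# error rates are closest approximates the equal error rate (EER).
ths = np.linspace(0.0, 1.0, 101)
curve = [rates(t) for t in ths]
eer_t = ths[int(np.argmin([abs(f - n) for f, n in curve]))]
```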

  13. Performance Evaluation of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.

    2011-01-01

    An acoustic electronic scanning array for acquiring images from a person using a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708

  14. Two-dimensional acoustic metamaterial structure for potential image processing

    NASA Astrophysics Data System (ADS)

    Sun, Hongwei; Han, Yu; Li, Ying; Pai, Frank

    2015-12-01

This paper presents modeling, analysis techniques, and experiments for a two-dimensional acoustic metamaterial structure for filtering acoustic waves. For a unit cell of an infinite two-dimensional acoustic metamaterial structure, governing equations are derived using the extended Hamilton principle. The concepts of negative effective mass and stiffness, and how the spring-mass-damper subsystems create a stopband, are explained in detail. Numerical simulations reveal that the actual working mechanism of the proposed acoustic metamaterial structure is based on the concept of conventional mechanical vibration absorbers: the incoming wave resonates the integrated membrane-mass-damper absorbers in their optical mode at frequencies close to but above their local resonance frequencies, creating shear forces and bending moments that straighten the panel and stop the wave propagation. Moreover, a two-dimensional acoustic metamaterial structure consisting of lumped masses and an elastic membrane is fabricated in the lab. Experiments on this model validate the concept and show that the structure exhibits two vibration modes. For wave absorption, the mass of each cell should be considered in the design. With appropriate design calculations, the proposed two-dimensional acoustic metamaterial structure can be used to absorb low-frequency waves; such a structure can therefore filter waves and potentially improve ultrasonic imaging quality.
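The negative-effective-mass mechanism behind such a stopband can be sketched for a single mass-in-mass unit cell: an outer mass m1 carries an internal absorber m2 on a spring k2, giving the frequency-dependent effective mass m_eff(w) = m1 + m2 * w2^2 / (w2^2 - w^2) with w2 = sqrt(k2 / m2), which turns negative just above the local resonance, where propagation is blocked. Parameter values below are illustrative, not those of the fabricated structure:

```python
import numpy as np

m1, m2, k2 = 1.0, 0.5, 200.0      # outer mass, absorber mass, absorber spring
w2 = np.sqrt(k2 / m2)             # local resonance frequency (rad/s)

def m_eff(w):
    """Effective mass of the mass-in-mass unit at drive frequency w."""
    return m1 + m2 * w2**2 / (w2**2 - w**2)

below = m_eff(0.5 * w2)           # positive: wave propagates
above = m_eff(1.05 * w2)          # negative: inside the stopband
```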

  15. Evaluation of Skybox Video and Still Image products

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Kuschk, G.; Reinartz, P.

    2014-11-01

The SkySat-1 satellite launched by Skybox Imaging on November 21, 2013 opens a new chapter in civilian earth observation as it is the first civilian satellite to image a target in high definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: Panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. Image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching or multi image matching are used and compared. As no ground truth height reference model is available to the authors, comparisons are performed on flat surfaces and between differently matched DSMs. Additionally, visual inspection of DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

  16. Video Guidance, Landing, and Imaging system (VGLIS) for space missions

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Flemming, J. C.

    1975-01-01

    The feasibility of an autonomous video guidance system that is capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was demonstrated. The system was breadboarded and "flown" on a physical simulator consisting of a control panel and monitor, a dynamic simulator, and a PDP-9 computer. The breadboard VGLIS consisted of an image dissector camera and the appropriate processing logic. Results are reported.

  17. Holographic Vibration Testing With Video/Computer Imaging

    NASA Technical Reports Server (NTRS)

    Miller, Christopher J.

    1996-01-01

    In improved system for holographic vibration testing, holographic interferograms indicating shapes of vibrational modes recorded by video camera under computer control. Results available almost immediately, and images distributed simultaneously to multiple computer terminals connected to vibration-testing computer via local-area network. Designed to replace prior photography-based system for identifying natural vibrational modes of propfan blades; adaptable to similar vibration testing of other objects.

  18. Laser Imaging of Airborne Acoustic Emission by Nonlinear Defects

    NASA Astrophysics Data System (ADS)

    Solodov, Igor; Döring, Daniel; Busse, Gerd

    2008-06-01

    Strongly nonlinear vibrations of near-surface fractured defects driven by an elastic wave radiate acoustic energy into adjacent air in a wide frequency range. The variations of pressure in the emitted airborne waves change the refractive index of air thus providing an acoustooptic interaction with a collimated laser beam. Such an air-coupled vibrometry (ACV) is proposed for detecting and imaging of acoustic radiation of nonlinear spectral components by cracked defects. The photoelastic relation in air is used to derive induced phase modulation of laser light in the heterodyne interferometer setup. The sensitivity of the scanning ACV to different spatial components of the acoustic radiation is analyzed. The animated airborne emission patterns are visualized for the higher harmonic and frequency mixing fields radiated by planar defects. The results confirm a high localization of the nonlinear acoustic emission around the defects and complicated directivity patterns appreciably different from those observed for fundamental frequencies.

  19. Spatial image polynomial decomposition with application to video classification

    NASA Astrophysics Data System (ADS)

    El Moubtahij, Redouane; Augereau, Bertrand; Tairi, Hamid; Fernandez-Maloigne, Christine

    2015-11-01

This paper addresses the use of an orthogonal polynomial basis transform in video classification because of its multiple advantages, especially for multiscale and multiresolution analysis similar to the wavelet transform. Our approach exploits these advantages in three steps. First, we reduce the resolution of the video using a multiscale/multiresolution decomposition. Second, we define a new algorithm that decomposes a color image into geometry and texture components by projecting the image on a bivariate polynomial basis, taking the geometry component as the partial reconstruction and the texture component as the remaining part. Finally, we model the features (such as motion and texture) extracted from the reduced image sequences by projecting them onto a bivariate polynomial basis in order to construct a hybrid polynomial motion-texture video descriptor. To evaluate our approach, we consider two visual recognition tasks, namely the classification of dynamic textures and the recognition of human actions. The experimental section shows that the proposed approach achieves a perfect recognition rate on the Weizmann database and the highest accuracy on the Dyntex++ database compared to existing methods.
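The geometry/texture split described in this abstract can be sketched in one dimension with a Legendre (orthogonal polynomial) basis: a low-degree least-squares projection gives the smooth geometry component and the residual is the texture. The signal and degree below are illustrative (the paper works with a bivariate basis on color images):

```python
import numpy as np
from numpy.polynomial import legendre

x = np.linspace(-1.0, 1.0, 128)
row = 0.5 * x**2 + 0.1 * np.sin(40 * x)   # smooth geometry + fine texture

deg = 4
coeffs = legendre.legfit(x, row, deg)     # least-squares Legendre projection
geometry = legendre.legval(x, coeffs)     # low-order partial reconstruction
texture = row - geometry                  # remaining high-frequency part
```

By construction the two components sum back to the original signal, and most of the smooth parabolic trend lands in the geometry component.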

  20. Object segmentation based on guided layering from video image

    NASA Astrophysics Data System (ADS)

    Lin, Guangfeng; Zhu, Hong; Fan, Caixia; Zhang, Erhu

    2011-09-01

When the object is similar to the background, it is difficult to segment the complete human body object from video images. To solve this problem, this paper proposes an object segmentation algorithm based on guided layering from video images. The algorithm adopts a structure that advances by degrees and comprises three parts. Each part constructs a different energy function from the spatiotemporal information to maximize the posterior probability of the segmentation label. In part one, energy functions are established with the frame-difference information in the first and second layers; by optimization, the initial segmentation is solved in the first layer, and the amended segmentation is then obtained in the second layer. In part two, the energy function is built between frames with the shape feature as the prior guide, to eliminate the interframe difference of the segmentation result. In part three, the segmentation results of the previous two parts are fused to suppress over-repaired segmentation and object shape variations between the two adjacent frames. Comparative experiments indicate that this algorithm can obtain the complete human body object even when the video image shows similarity between object and background.

  1. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders like EZW, SPIHT, JPEG2000 etc. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode where every frame is coded at the same bit-rate and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000 while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as High Quality Digital Video Recording System, Internet Video, Medical Imaging etc.

  2. Acoustic angiography: a new imaging modality for assessing microvasculature architecture.

    PubMed

    Gessner, Ryan C; Frederick, C Brandon; Foster, F Stuart; Dayton, Paul A

    2013-01-01

    The purpose of this paper is to provide the biomedical imaging community with details of a new high resolution contrast imaging approach referred to as "acoustic angiography." Through the use of dual-frequency ultrasound transducer technology, images acquired with this approach possess both high resolution and a high contrast-to-tissue ratio, which enables the visualization of microvascular architecture without significant contribution from background tissues. Additionally, volumetric vessel-tissue integration can be visualized by using b-mode overlays acquired with the same probe. We present a brief technical overview of how the images are acquired, followed by several examples of images of both healthy and diseased tissue volumes. 3D images from alternate modalities often used in preclinical imaging, contrast-enhanced micro-CT and photoacoustics, are also included to provide a perspective on how acoustic angiography has qualitatively similar capabilities to these other techniques. These preliminary images provide visually compelling evidence to suggest that acoustic angiography may serve as a powerful new tool in preclinical and future clinical imaging. PMID:23997762

  3. Optimal flushing agents for integrated optical and acoustic imaging systems

    PubMed Central

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, Kirk; Zhou, Qifa; Patel, Pranav; Chen, Zhongping

    2015-01-01

    Abstract. An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clearer images. To our knowledge, no research has been performed to find the ideal flushing agent for combined optical and acoustic imaging techniques. We selected three solutions as potential flushing agents for their image-enhancing effects: mannitol, dextran, and iohexol. Testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits. We found that a high concentration of dextran was the most useful for simultaneous intravascular ultrasound and optical coherence tomography imaging. PMID:25985096

  4. Optimal flushing agents for integrated optical and acoustic imaging systems.

    PubMed

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, Kirk; Zhou, Qifa; Patel, Pranav; Chen, Zhongping

    2015-05-01

    An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clearer images. To our knowledge, no research has been performed to find the ideal flushing agent for combined optical and acoustic imaging techniques. We selected three solutions as potential flushing agents for their image-enhancing effects: mannitol, dextran, and iohexol. Testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits. We found that a high concentration of dextran was the most useful for simultaneous intravascular ultrasound and optical coherence tomography imaging. PMID:25985096

  5. Optimal flushing agents for integrated optical and acoustic imaging systems

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, Kirk; Zhou, Qifa; Patel, Pranav; Chen, Zhongping

    2015-05-01

    An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clearer images. To our knowledge, no research has been performed to find the ideal flushing agent for combined optical and acoustic imaging techniques. We selected three solutions as potential flushing agents for their image-enhancing effects: mannitol, dextran, and iohexol. Testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits. We found that a high concentration of dextran was the most useful for simultaneous intravascular ultrasound and optical coherence tomography imaging.

  6. Epipolar geometry of opti-acoustic stereo imaging.

    PubMed

    Negahdaripour, Shahriar

    2007-10-01

Optical and acoustic cameras are suitable imaging systems to inspect underwater structures, both in regular maintenance and security operations. Despite high resolution, optical systems have limited visibility range when deployed in turbid waters. In contrast, the new generation of high-frequency (MHz) acoustic cameras can provide images with enhanced target details in highly turbid waters, though their range is reduced by one to two orders of magnitude compared to traditional low-/midfrequency (10s-100s kHz) sonar systems. It is conceivable that an effective inspection strategy is the deployment of both optical and acoustic cameras on a submersible platform, to enable target imaging in a range of turbidity conditions. Under this scenario and where visibility allows, registration of the images from both cameras arranged in binocular stereo configuration provides valuable scene information that cannot be readily recovered from each sensor alone. We explore and derive the constraint equations for the epipolar geometry and stereo triangulation in utilizing these two sensing modalities with different projection models. Theoretical results supported by computer simulations show that an opti-acoustic stereo imaging system outperforms a traditional binocular vision with optical cameras, particularly for increasing target distance and (or) turbidity. PMID:17699922

  7. Application of acoustic reflection tomography to sonar imaging.

    PubMed

    Ferguson, Brian G; Wyber, Ron J

    2005-05-01

    Computer-aided tomography is a technique for providing a two-dimensional cross-sectional view of a three-dimensional object through the digital processing of many one-dimensional views (or projections) taken at different look directions. In acoustic reflection tomography, insonifying the object and then recording the backscattered signal provides the projection information for a given look direction (or aspect angle). Processing the projection information for all possible aspect angles enables an image to be reconstructed that represents the two-dimensional spatial distribution of the object's acoustic reflectivity function when projected on the imaging plane. The shape of an idealized object, which is an elliptical cylinder, is reconstructed by applying standard backprojection, Radon transform inversion (using both convolution and filtered backprojections), and direct Fourier inversion to simulated projection data. The relative merits of the various reconstruction algorithms are assessed and the resulting shape estimates compared. For bandpass sonar data, however, the wave number components of the acoustic reflectivity function that are outside the passband are absent. This leads to the consideration of image reconstruction for bandpass data. Tomographic image reconstruction is applied to real data collected with an ultra-wideband sonar transducer to form high-resolution acoustic images of various underwater objects when the sonar and object are widely separated. PMID:15957762
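The backprojection idea at the core of these reconstructions can be sketched with a synthetic target whose parallel-beam projection is known in closed form: a centered disk of radius r projects to 2*sqrt(r^2 - s^2) at every aspect angle. The unfiltered sketch below only smears projections back over the image plane (a filtered backprojection would first apply a ramp filter to each projection); grid size and angles are illustrative:

```python
import numpy as np

n, r = 64, 0.4
s = np.linspace(-1.0, 1.0, n)                 # detector coordinate
proj = np.where(np.abs(s) < r,
                2 * np.sqrt(np.maximum(r**2 - s**2, 0.0)), 0.0)

X, Y = np.meshgrid(s, s)
image = np.zeros((n, n))
angles = np.linspace(0.0, np.pi, 90, endpoint=False)
for th in angles:                             # smear each look direction back
    t = X * np.cos(th) + Y * np.sin(th)       # detector coordinate per pixel
    image += np.interp(t, s, proj)
image /= len(angles)
```

The reconstruction peaks over the disk and falls off outside it; the characteristic 1/r blur of unfiltered backprojection is exactly what the ramp filter in filtered backprojection removes.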

  8. Computer simulation of orthognathic surgery with video imaging

    NASA Astrophysics Data System (ADS)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

Patients with extreme jaw imbalance must often undergo operative corrections. The goal of therapy is to harmonize the stomatognathic system and an aesthetic correction of the face profile. A new procedure will be presented which supports the maxillo-facial surgeon in planning the operation and which also presents the patient the result of the treatment by video images. Once an x-ray has been digitized it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters, and a new soft tissue profile can be calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987 and 1991, 76 out of 121 patients could be evaluated. The deviation in profile change varied between 0.0 and 1.6 mm. A side effect of the practical applications was an increase in patient compliance.

  9. OHIO INTERNATIONAL TELEVISION AND VIDEO FESTIVAL AWARD WINNERS FROM THE IMAGING TECHNOLOGY CENTER IT

    NASA Technical Reports Server (NTRS)

    2000-01-01

Ohio International Television and Video Festival award winners from the Imaging Technology Center (ITC): Kevin Burke, Bill Fletcher, Gary Nolan, and Emery Adanich, for the video entitled "Icing for Regional and Corporate Pilots".

  10. Advanced hyperspectral video imaging system using Amici prism.

    PubMed

    Feng, Jiao; Fang, Xiaojing; Cao, Xun; Ma, Chenguang; Dai, Qionghai; Zhu, Hongbo; Wang, Yongjin

    2014-08-11

    In this paper, we propose an advanced hyperspectral video imaging system (AHVIS), which consists of an objective lens, an occlusion mask, a relay lens, an Amici prism and two cameras. An RGB camera is used for spatial reading and a gray scale camera is used for measuring the scene with spectral information. The objective lens collects more light energy from the observed scene and images the scene on an occlusion mask, which subsamples the image of the observed scene. Then, the subsampled image is sent to the gray scale camera through the relay lens and the Amici prism. The Amici prism that is used to realize spectral dispersion along the optical path reduces optical distortions and offers direct view of the scene. The main advantages of the proposed system are improved light throughput and less optical distortion. Furthermore, the presented configuration is more compact, robust and practicable. PMID:25321019

  11. Ideal flushing agents for integrated optical acoustic imaging systems

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, K. Kirk; Zhou, Qifa; Patel, Pranav M.; Chen, Zhongping

    2015-02-01

An increasing number of integrated optical acoustic intravascular imaging systems have been investigated and hold great promise for accurately diagnosing vulnerable plaques and for guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, which is a high-scattering source for optical and high frequency ultrasound signals. Blood must be flushed away to make images clear. To our knowledge, no research has been performed to find the ideal flushing agent that works for both optical and acoustic imaging techniques. We selected three solutions, mannitol, dextran and iohexol, as flushing agents because of their image-enhancing effects and low toxicities. Quantitative testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits.

  12. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  13. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  14. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 3 2014-04-01 2014-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  15. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 2 2013-04-01 2013-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  16. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 2 2012-04-01 2012-04-01 false Graphic, image, audio and... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio..., image, audio or video material is presented in the delivered version, or they may be listed in...

  17. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file during watermark recovery imposes an overhead on system resources, doubling both memory capacity and communications bandwidth. In this paper, a robust video multiple-watermarking technique is proposed to solve this problem. The technique is based on image interlacing: a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of the technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth. PMID:25587570
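    The Arnold transform used here for watermark encryption is the standard 2-D cat map; the abstract does not give the paper's parameters, so the following is a minimal pure-Python sketch with illustrative function names. Scrambling maps pixel (x, y) to ((x + y) mod N, (x + 2y) mod N) on a square image; unscrambling applies the inverse map.

    ```python
    def arnold_scramble(img, iterations=1):
        # img: square 2-D list; Arnold cat map (x, y) -> ((x + y) % n, (x + 2y) % n)
        n = len(img)
        out = [row[:] for row in img]
        for _ in range(iterations):
            nxt = [[0] * n for _ in range(n)]
            for x in range(n):
                for y in range(n):
                    nxt[(x + y) % n][(x + 2 * y) % n] = out[x][y]
            out = nxt
        return out

    def arnold_unscramble(img, iterations=1):
        # inverse map: (x, y) -> ((2x - y) % n, (y - x) % n)
        n = len(img)
        out = [row[:] for row in img]
        for _ in range(iterations):
            prev = [[0] * n for _ in range(n)]
            for x in range(n):
                for y in range(n):
                    prev[(2 * x - y) % n][(y - x) % n] = out[x][y]
            out = prev
        return out
    ```

    Because the cat map is periodic, repeated scrambling eventually restores the image; in practice the iteration count acts as the encryption key.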

  19. Content-weighted video quality assessment using a three-component image model

    NASA Astrophysics Data System (ADS)

    Li, Chaofeng; Bovik, Alan Conrad

    2010-01-01

    Objective image and video quality measures play important roles in numerous image and video processing applications. In this work, we propose a new content-weighted method for full-reference (FR) video quality assessment using a three-component image model. Using the idea that different image regions have different perceptual significance relative to quality, we deploy a model that classifies local image regions according to their image gradient properties, then apply variable weights to structural similarity image index (SSIM) [and peak signal-to-noise ratio (PSNR)] scores according to region. A frame-based video quality assessment algorithm is thereby derived. Experimental results on the Video Quality Experts Group (VQEG) FR-TV Phase 1 test dataset show that the proposed algorithm outperforms existing video quality assessment methods.
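    The abstract describes classifying local regions by their gradient properties and weighting per-region quality scores. A hedged sketch of that pooling idea follows; the gradient proxy, thresholds, and region weights below are illustrative assumptions, not values from the paper.

    ```python
    def local_gradient_energy(block):
        # crude gradient-magnitude proxy: mean absolute horizontal + vertical differences
        n = len(block)
        g = 0.0
        for i in range(n):
            for j in range(n):
                if i + 1 < n:
                    g += abs(block[i + 1][j] - block[i][j])
                if j + 1 < n:
                    g += abs(block[i][j + 1] - block[i][j])
        return g / (n * n)

    def classify(block, t_edge=20.0, t_smooth=5.0):
        # three-component model: edge / texture / smooth, by gradient energy
        g = local_gradient_energy(block)
        if g >= t_edge:
            return "edge"
        if g <= t_smooth:
            return "smooth"
        return "texture"

    def weighted_quality(ref_blocks, scores, weights=None):
        # pool per-block SSIM/PSNR scores, weighted by the reference block's class
        if weights is None:
            weights = {"edge": 0.5, "texture": 0.3, "smooth": 0.2}
        num = den = 0.0
        for ref, s in zip(ref_blocks, scores):
            w = weights[classify(ref)]
            num += w * s
            den += w
        return num / den
    ```

    The design point is that edge regions, being perceptually dominant, pull the pooled score harder than smooth regions with the same per-block quality.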

  20. Voice assessment: Updates on perceptual, acoustic, aerodynamic, and endoscopic imaging methods

    PubMed Central

    Mehta, Daryush D.; Hillman, Robert E.

    2013-01-01

    Purpose of review This paper describes recent advances in perceptual, acoustic, aerodynamic, and endoscopic imaging methods for assessing voice production. Recent findings Perceptual assessment Speech-language pathologists are being encouraged to use the new CAPE-V inventory for auditory perceptual assessment of voice quality, and recent studies have provided new insights into listener reliability issues that have plagued subjective perceptual judgments of voice quality. Acoustic assessment Progress is being made on the development of algorithms that are more robust for analyzing disordered voices, including the capability to extract voice quality-related measures from running speech segments. Aerodynamic assessment New devices for measuring phonation threshold air pressures and air flows have the potential to serve as sensitive indices of glottal phonatory conditions, and recent developments in aeroacoustic theory may provide new insights into laryngeal sound production mechanisms. Endoscopic imaging The increased light sensitivity of new ultra high-speed color digital video processors is enabling high-quality endoscopic imaging of vocal fold tissue motion at unprecedented image capture rates, which promises to provide new insights into mechanisms of normal and disordered voice production. Summary Some of the recent research advances in voice quality assessment could be more readily adopted into clinical practice, while others will require further development. PMID:18475073

  1. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently based on differences in the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
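    As a concrete instance of the metrics discussed, precision-recall points can be swept from ranked detection scores. This is the standard construction, not anything specific to the paper's algorithms:

    ```python
    def precision_recall_points(scores, labels):
        # sweep the decision threshold down the score ranking; labels: 1 = true anomaly
        pairs = sorted(zip(scores, labels), reverse=True)
        total_pos = sum(labels)
        tp = fp = 0
        points = []
        for _score, y in pairs:
            if y:
                tp += 1
            else:
                fp += 1
            precision = tp / (tp + fp)
            recall = tp / total_pos
            points.append((precision, recall))
        return points
    ```

    An ROC curve is built the same way, except the second coordinate is the false-positive rate fp / total_neg instead of recall.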

  2. [Purkinje images in slit lamp videography : Video article].

    PubMed

    Gellrich, M-M; Kandzia, C

    2016-09-01

    Reflexes that accompany every examination with the slit lamp are usually regarded as annoying and therefore do not receive much attention. In the video available online, clinical information "hidden" in the Purkinje images is analyzed according to our concept of slit lamp videography. In the first part of the video, the four Purkinje images, which are reflections on the eye's optical surfaces, are introduced for the phakic eye. In the pseudophakic eye, however, the refracting surfaces of the intraocular lens (IOL) have excellent optical properties and therefore form Purkinje images 3 and 4 of high quality. In particular, the third Purkinje image from the anterior IOL surface, which is usually hardly visible in the phakic eye, can be detected deep in the vitreous, enlarged through the eye's own optics like a magnifying glass. Its area of reflection can be used to visualize changes of the anterior segment at high contrast. The third Purkinje image carries valuable information about the anterior curvature and, thus, about the power of the IOL. If the same IOL type is implanted in a patient, a right-left difference of 0.5 diopter in its power can often be detected by the difference in size of the respective third Purkinje images. In a historical excursion to the "prenatal phase" of the slit lamp in Uppsala, we show that our most important instrument in clinical work was originally designed for catoptric investigations (of specular reflections). Accordingly, A. Gullstrand called it an ophthalmometric Nernst lamp. PMID:27558688

  3. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for these improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  4. The image-mode laser for video projection display

    NASA Astrophysics Data System (ADS)

    Firehammer, Joel Allen

    1999-11-01

    The experimental and theoretical work presented in this thesis consists of the development of the image mode laser concept and its potential application to laser video projection display. Such a laser is made from a high-gain laser cavity with an intracavity spatial light modulator (SLM) that provides area-selective cavity-Q modulation. In the image mode laser, rather than the SLM being used to form an image by masking a uniform light source as in conventional projection displays, it is used to modulate the transverse mode of the output of a laser light source into a desired image, resulting in a very high contrast projection system with the capability for high output intensities. A model for the laser output response of a pixel with respect to the voltage applied to that pixel is developed. Considerations of energy coupling between neighboring pixels are explored, and a numerical simulation is developed to model these interactions. The requirements for the optical gain medium and the difficulties in realizing all necessary aspects in one material are discussed, and the properties and physical principles of organic dyes, the medium chosen for use in our experiments and prototype, are presented. The optimum concentration of the dye in its host medium is determined from a model based on the cavity parameters and dye properties. Additionally, the considerations of amplified spontaneous emission in the resonator are explored, and its effect on the useful output of the laser is modeled. Experimental results are presented which are compared to the models developed and are shown to be in good agreement. A prototype monochromatic image mode laser is described, and the output of the laser is characterized and compared to current video projection displays.
The results indicate that the image mode laser does result in a video projection system with very high contrast and high brightness at the faceplate, with a spectral linewidth that maximizes the color gamut while

  5. VIDEO IMAGE ANALYSIS SYSTEM FOR CONCENTRATION MEASUREMENTS AND FLOW VISUALIZATION IN BUILDING WAKES

    EPA Science Inventory

    A video image analysis technique for concentration measurements and flow visualization was developed for the study of diffusion in building wakes and other wind tunnel flows. Smoke injected into the flow was photographed from above with a video camera, and the video signal was dig...

  6. Using underwater video imaging as an assessment tool for coastal condition

    EPA Science Inventory

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  7. Opto-acoustic breast imaging with co-registered ultrasound

    NASA Astrophysics Data System (ADS)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Oraevsky, Alexander; Kist, Kenneth; Dornbluth, N. Carol; Otto, Pamela

    2014-03-01

    We present results from a recent study involving the Imagio™ breast imaging system, which produces fused real-time two-dimensional color-coded opto-acoustic (OA) images that are co-registered and temporally interleaved with real-time gray scale ultrasound using a specialized duplex handheld probe. The use of dual optical wavelengths provides functional blood map images of breast tissue and tumors displayed with high contrast based on total hemoglobin and oxygen saturation of the blood. This provides functional diagnostic information pertaining to tumor metabolism. OA also shows morphologic information about tumor neo-vascularity that is complementary to the morphological information obtained with conventional gray scale ultrasound. This fusion technology conveniently enables real-time analysis of the functional opto-acoustic features of lesions detected by readers familiar with anatomical gray scale ultrasound. We demonstrate co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical study that provide new insight into the function of tumors in-vivo. Results from the Feasibility Study show preliminary evidence that the technology may have the capability to improve characterization of benign and malignant breast masses over conventional diagnostic breast ultrasound alone and to improve overall accuracy of breast mass diagnosis. In particular, OA improved specificity over that of conventional diagnostic ultrasound, which could potentially reduce the number of negative biopsies performed without missing cancers.
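    The dual-wavelength unmixing behind such total-hemoglobin and oxygen-saturation maps reduces, in its simplest form, to solving a 2x2 linear system relating measured absorption to the two hemoglobin species. The sketch below illustrates only that algebra; the extinction coefficients in the test are made-up placeholders, not physiological values, and this is not the Imagio system's actual processing chain.

    ```python
    def oxygen_saturation(mu_a1, mu_a2, eps):
        # mu_a1, mu_a2: absorption coefficients measured at wavelengths 1 and 2
        # eps: {"hbo2": (e1, e2), "hb": (e1, e2)} extinction coefficients per wavelength
        a, b = eps["hbo2"][0], eps["hb"][0]   # wavelength 1 row
        c, d = eps["hbo2"][1], eps["hb"][1]   # wavelength 2 row
        det = a * d - b * c
        # Cramer's rule for relative concentrations [HbO2] and [Hb]
        hbo2 = (d * mu_a1 - b * mu_a2) / det
        hb = (-c * mu_a1 + a * mu_a2) / det
        # oxygen saturation sO2 = [HbO2] / ([HbO2] + [Hb])
        return hbo2 / (hbo2 + hb)
    ```

    Any common path-length or scaling factor cancels in the ratio, which is why sO2 can be mapped even when absolute absorption is poorly calibrated.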

  8. Acoustic and photoacoustic microscopy imaging of single leukocytes

    NASA Astrophysics Data System (ADS)

    Strohm, Eric M.; Moore, Michael J.; Kolios, Michael C.

    2016-03-01

    An acoustic/photoacoustic microscope was used to create micrometer resolution images of stained cells from a blood smear. Pulse echo ultrasound images were made using a 1000 MHz transducer with 1 μm resolution. Photoacoustic images were made using a fiber coupled 532 nm laser, where energy losses through stimulated Raman scattering enabled output wavelengths from 532 nm to 620 nm. The laser was focused onto the sample using a 20x objective, and the laser spot co-aligned with the 1000 MHz transducer opposite the laser. The blood smear was stained with Wright-Giemsa, a common metachromatic dye that differentially stains the cellular components for visual identification. A neutrophil, lymphocyte and a monocyte were imaged using acoustic and photoacoustic microscopy at two different wavelengths, 532 nm and 600 nm. Unique features in each imaging modality enabled identification of the different cell types. This imaging method provides a new way of imaging stained leukocytes, with applications towards identifying and differentiating cell types, and detecting disease at the single cell level.

  9. Ultra high frequency imaging acoustic microscope

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2006-05-23

    An imaging system includes: an object wavefront source and an optical microscope objective all positioned to direct an object wavefront onto an area of a vibrating subject surface encompassed by a field of view of the microscope objective, and to direct a modulated object wavefront reflected from the encompassed surface area through a photorefractive material; and a reference wavefront source and at least one phase modulator all positioned to direct a reference wavefront through the phase modulator and to direct a modulated reference wavefront from the phase modulator through the photorefractive material to interfere with the modulated object wavefront. The photorefractive material has a composition and a position such that interference of the modulated object wavefront and modulated reference wavefront occurs within the photorefractive material, providing a full-field, real-time image signal of the encompassed surface area.

  10. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and in people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but contain no motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, which randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of video captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  11. Development of passive submillimeter-wave video imaging systems

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Brömel, Anika; Anders, Solveig; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Meyer, Hans-Georg

    2013-05-01

    Passive submillimeter wave imaging is a concept that has been in the focus of interest as a promising technology for security applications for a number of years. It utilizes the unique optical properties of submillimeter waves and promises an alternative to millimeter-wave and X-ray backscattering portals, in particular for personal security screening. Possible application scenarios demand sensitive, fast, and flexible high-quality imaging techniques. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept and a continuous development since 2007. The cameras utilize arrays of superconducting transition-edge sensors (TES), i.e. cryogenic microbolometers, as radiation detectors. The TES operate at temperatures below 1 K, cooled by a closed-cycle cooling system, and are coupled to superconducting readout electronics. By this means, background-limited photometry (BLIP) mode is achieved, providing the maximum possible signal-to-noise ratio. At video rates, this leads to a pixel NETD well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 3-10 m, a field of view up to 2 m in height and a diffraction-limited spatial resolution on the order of 1-2 cm is provided. Opto-mechanical scanning systems are part of the optical setup and are capable of frame rates up to 25 frames per second. Both spiral and linear scanning schemes have been developed.

  12. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river-bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero-mean normalised cross-correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
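    The ZNCC metric used for patch matching has a standard definition (subtract each patch's mean, then normalise by the patch standard deviations); a direct sketch, with patches as 2-D lists of intensities:

    ```python
    import math

    def zncc(a, b):
        # zero-mean normalised cross-correlation of two equal-size patches
        fa = [v for row in a for v in row]
        fb = [v for row in b for v in row]
        ma = sum(fa) / len(fa)
        mb = sum(fb) / len(fb)
        num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
        da = math.sqrt(sum((x - ma) ** 2 for x in fa))
        db = math.sqrt(sum((y - mb) ** 2 for y in fb))
        return num / (da * db)
    ```

    The score is invariant to uniform brightness and contrast changes between frames, which is why ZNCC copes with the varying illumination the paper mentions; tracking then amounts to finding the displacement that maximises this score between time-lagged scenes.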

  14. An introduction to video image compression and authentication technology for safeguards applications

    SciTech Connect

    Johnson, C.S.

    1995-07-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.
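    The abstract does not name the paper's specific compression or authentication algorithms; as an illustration of the compress-then-authenticate pipeline it describes, a modern stand-in using zlib and HMAC-SHA256 might look like this (a sketch under those assumptions, not the 1995 design):

    ```python
    import hashlib
    import hmac
    import zlib

    TAG_LEN = 32  # bytes in a SHA-256 HMAC tag

    def pack_authenticated_frame(raw_bytes, key):
        # compress first, then append a keyed tag over the compressed payload
        compressed = zlib.compress(raw_bytes, 6)
        tag = hmac.new(key, compressed, hashlib.sha256).digest()
        return compressed + tag

    def verify_and_unpack(packet, key):
        compressed, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
        expected = hmac.new(key, compressed, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("authentication failed: frame was modified")
        return zlib.decompress(compressed)
    ```

    Authenticating the compressed payload, as the paper advocates, keeps the tag computation and the transmitted packet small, which matters over a narrow-bandwidth telephone channel.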

  15. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging technique for monitoring the interaction between fast-moving viruses and hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
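    The scheme combines several filters; its Kalman filtering step, reduced to a scalar random-walk model for a single tracked intensity, can be sketched as follows (the process and measurement noise parameters q and r are illustrative, not the paper's tuning):

    ```python
    def kalman_smooth(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
        # scalar Kalman filter: state = true intensity, random-walk dynamics
        x, p = x0, p0
        estimates = []
        for z in measurements:
            # predict: state unchanged, uncertainty grows by process noise q
            p = p + q
            # update: blend prediction with measurement z (noise variance r)
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)
            p = (1 - k) * p
            estimates.append(x)
        return estimates
    ```

    With q small relative to r, the filter trusts its running estimate and suppresses frame-to-frame background noise while still following slow intensity drift.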

  16. Vector Acoustics, Vector Sensors, and 3D Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Lindwall, D.

    2007-12-01

    Vector acoustic data has two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. The vector acoustic sensor measures the particle motion due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom, and sides. Without resorting to the usual methods of seismic imaging, which in this case would be only two-dimensional and rely entirely on the use of a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
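    The "simple trigonometric calculation" for the reflection point, given known source and receiver positions, the direction of arrival at the vector sensor, and the total travel time, amounts to intersecting the arrival ray with the ellipsoid of constant path length. A sketch under those assumptions (not the paper's code):

    ```python
    def reflection_point(src, rcv, direction, travel_time, c=1500.0):
        # src, rcv: known source/receiver positions (3-D tuples)
        # direction: unit vector from the receiver toward the scatterer
        # total path source -> scatterer -> receiver has length L = c * travel_time
        L = c * travel_time
        dx = [s - r for s, r in zip(src, rcv)]      # vector receiver -> source
        d2 = sum(v * v for v in dx)                  # |src - rcv|^2
        proj = sum(d * v for d, v in zip(direction, dx))
        # solve |src - (rcv + s*direction)| = L - s for the range s along the ray
        s = (L * L - d2) / (2.0 * (L - proj))
        return [r + s * d for r, d in zip(rcv, direction)]
    ```

    The quadratic terms in s cancel, so the range along the measured arrival direction comes out in closed form; the directional ambiguity mentioned in the abstract corresponds to the sign of `direction`, resolved by the collocated hydrophone.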

  17. a Three-Dimensional Acoustical Imaging System for Zooplankton Observations

    NASA Astrophysics Data System (ADS)

    McGehee, Duncan Ewell

    This dissertation describes the design, testing, and use of a three-dimensional acoustical imaging system, called Fish TV, or FTV, for tracking zooplankton swimming in situ. There is an increasing recognition that three-dimensional tracks of individual plankters are needed for some studies in behavioral ecology including, for example, the role of individual behavior in patch formation and maintenance. Fish TV was developed in part to provide a means of examining zooplankton swimming behavior in a non-invasive way. The system works by forming a set of 64 acoustic beams in an 8 by 8 pattern, each beam 2° by 2°, for a total coverage of 16° by 16°. The 8 by 8 beams form two dimensions of the image; range provides the third dimension. The system described in the thesis produces three-dimensional images at the rate of approximately one per second. A set of laboratory and field experiments is described that demonstrates the capabilities of the system. The final field experiment was the in situ observation of zooplankton swimming behavior at a site in the San Diego Trough, 15 nautical miles southwest of San Diego. 314 plankters were tracked for one minute. It was observed that there was no connection between the acoustic size of the animals and their repertoire of swimming behaviors. Other contributions of the dissertation include the development of two novel methods for generating acoustic beams with low side lobes. The first is the method of dense random arrays. The second is the optimum mean square quantized aperture method. Both methods were developed originally as ways to "build a better beam pattern" for Fish TV, but also have general significance with respect to aperture theory.
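    Mapping a beam index in the 8 by 8 fan (2° per beam) plus a measured range to a 3-D position is straightforward geometry; an illustrative sketch (the axis conventions and beam-centre layout are assumptions, not FTV's actual coordinate frame):

    ```python
    import math

    def beam_direction(i, j, beam_width_deg=2.0, n=8):
        # unit vector toward the centre of beam (i, j) in an n-by-n fan,
        # covering n * beam_width_deg degrees on each axis about boresight
        half = n * beam_width_deg / 2.0
        az = math.radians(-half + (i + 0.5) * beam_width_deg)  # across-track
        el = math.radians(-half + (j + 0.5) * beam_width_deg)  # vertical
        return (math.cos(el) * math.sin(az),
                math.sin(el),
                math.cos(el) * math.cos(az))

    def target_position(i, j, range_m):
        # scale the unit beam direction by the echo range for a 3-D point
        return tuple(range_m * c for c in beam_direction(i, j))
    ```

    The two beam indices give the two angular image dimensions and the echo range supplies the third, matching the imaging geometry the abstract describes.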

  18. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.

  19. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  20. Video-rate terahertz electric-field vector imaging

    SciTech Connect

    Takai, Mayuko; Takeda, Masatoshi; Sasaki, Manabu; Tachizaki, Takehiro; Yasumatsu, Naoya; Watanabe, Shinichi

    2014-10-13

    We present an experimental setup to dramatically reduce the measurement time for obtaining spatial distributions of terahertz electric-field (E-field) vectors. The method utilizes electro-optic sampling, and we use a charge-coupled device to detect the spatial distribution of the probe-beam polarization rotation by the E-field-induced Pockels effect in a 〈110〉-oriented ZnTe crystal. A quick rotation of the ZnTe crystal allows analyzing the terahertz E-field direction at each image position, and the terahertz E-field vector mapping at a fixed position of an optical delay line is achieved within 21 ms. Video-rate mapping of terahertz E-field vectors is likely to be useful for achieving real-time sensing of terahertz vector beams, vector vortices, and surface topography. The method is also useful for a fast polarization analysis of terahertz beams.

  1. Airframe noise measurements by acoustic imaging

    NASA Technical Reports Server (NTRS)

    Kendall, J. M.

    1977-01-01

    Studies of the noise produced by flow past wind tunnel models are presented. The central objective of these is to find the specific locations within a flow which are noisy, and to identify the fluid dynamic processes responsible, with the expectation that noise reduction principles will be discovered. The models tested are mostly simple shapes which result in types of flow that are similar to those occurring on, for example, aircraft landing gear and wheel cavities. A model landing gear and a flap were also tested. Turbulence has been intentionally induced as appropriate in order to simulate full-scale effects more closely. The principal technique involves use of a highly directional microphone system which is scanned about the flow field to be analyzed. The data so acquired are presented as a pictorial image of the noise source distribution. An important finding is that the noise production is highly variable within a flow field and that sources can be attributed to various fluid dynamic features of the flow. Flow separation was not noisy, but separation closure usually was.

  2. Application of time reversal acoustics focusing for nonlinear imaging

    NASA Astrophysics Data System (ADS)

    Sarvazyan, Armen; Sutin, Alexander

    2001-05-01

    Time reversal acoustic (TRA) focusing of ultrasound appears to be an effective tool for nonlinear imaging in industrial and medical applications because of its ability to efficiently concentrate ultrasonic energy (close to the diffraction limit) in heterogeneous media. In this study, we used two TRA systems to focus ultrasonic beams with different frequencies in coinciding focal points, thus causing the generation of ultrasonic waves with combination frequencies. Measurements of the intensity of these combination frequency waves provide information on the nonlinear parameter of the medium in the focal region. Synchronized steering of two TRA-focused beams enables obtaining 3-D acoustic nonlinearity images of the object. Each of the TRA systems employed an aluminum resonator with piezotransducers glued to its facet. One of the free facets of each resonator was submerged into a water tank and served as a virtual phased array capable of ultrasound focusing and beam steering. To mimic a medium with spatially varying acoustic nonlinearity, a simple model, a microbubble column in water, was used. Microbubbles were generated by electrolysis of water using a needle electrode. An order-of-magnitude increase of the sum-frequency component was observed when the ultrasound beams were focused in the area with bubbles.

  3. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  4. Passive 350 GHz Video Imaging Systems for Security Applications

    NASA Astrophysics Data System (ADS)

    Heinz, E.; May, T.; Born, D.; Zieger, G.; Anders, S.; Zakosarenko, V.; Meyer, H.-G.; Schäffel, C.

    2015-10-01

    Passive submillimeter-wave imaging is a concept that has been in the focus of interest as a promising technology for personal security screening for a number of years. In contrast to established portal-based millimeter-wave scanning techniques, it allows for scanning people from a distance in real time with high throughput and without a distinct inspection procedure. This opens up new possibilities for scanning, which directly address an urgent security need of modern societies: protecting crowds and critical infrastructure from the growing threat of individual terror attacks. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept and a continuous development since 2007. Arrays of superconducting transition-edge sensors (TES), operated at temperatures below 1 K, are used as radiation detectors. By this means, background limited performance (BLIP) mode is achieved, providing the maximum possible signal-to-noise ratio. At video rates, this leads to a temperature resolution well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 5-25 m, a field of view up to 2 m height and a diffraction-limited spatial resolution on the order of 1-2 cm is provided. Opto-mechanical scanning systems are part of the optical setup and capable of frame rates of up to 25 frames per second.

  5. Quantization table design revisited for image/video coding.

    PubMed

    Yang, En-Hui; Sun, Chang; Meng, Jin

    2014-11-01

    Quantization table design is revisited for image/video coding where soft decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate distortion function of each source. We then show that a quantization table can be optimized in a way that the resulting distortion complies with certain behavior. Guided by this new design principle, we propose an efficient statistical-model-based algorithm using the Laplacian model to design quantization tables for DCT-based image coding. When applied to standard JPEG encoding, it provides more than 1.5-dB performance gain in PSNR, with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5-dB gain in PSNR with computational complexity reduced by a factor of more than 2000 when SDQ is OFF, and a 0.2-dB performance gain or more with 85% of the complexity reduced when SDQ is ON. Significant compression performance improvement is also seen when the algorithm is applied to other image coding systems proposed in the literature. PMID:25248184
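The Shannon lower bound the authors apply can be illustrated for a Laplacian source: the differential entropy of a Laplace distribution with scale b is log2(2eb) bits, so the bound on rate at mean-squared distortion D is h(X) - 0.5*log2(2*pi*e*D). A minimal sketch, assuming illustrative per-frequency scales (the function name and numbers are not from the paper):

```python
import math

def slb_rate_laplacian(b, D):
    """Shannon lower bound (bits/sample) for a Laplacian source with scale b
    under mean-squared distortion D: R(D) >= h(X) - 0.5*log2(2*pi*e*D),
    clipped at zero."""
    h = math.log2(2 * math.e * b)  # differential entropy of Laplace(b)
    return max(0.0, h - 0.5 * math.log2(2 * math.pi * math.e * D))

# Higher-frequency DCT coefficients typically have smaller scale b, so under
# a common distortion target the bound allocates them fewer bits -- the kind
# of behavior a quantization table encodes.
scales = [8.0, 4.0, 2.0, 1.0]  # illustrative per-frequency Laplacian scales
rates = [slb_rate_laplacian(b, D=0.5) for b in scales]
```

The monotone drop in `rates` mirrors why standard tables use coarser steps at high frequencies.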

  6. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  7. Identifying Vulnerable Plaques with Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Doherty, Joshua Ryan

    The rupture of arterial plaques is the most common cause of ischemic complications including stroke, the fourth leading cause of death and number one cause of long term disability in the United States. Unfortunately, because conventional diagnostic tools fail to identify plaques that confer the highest risk, often a disabling stroke and/or sudden death is the first sign of disease. A diagnostic method capable of characterizing plaque vulnerability would likely enhance the predictive ability and ultimately the treatment of stroke before the onset of clinical events. This dissertation evaluates the hypothesis that Acoustic Radiation Force Impulse (ARFI) imaging can noninvasively identify lipid regions, which have been shown to increase a plaque's propensity to rupture, within carotid artery plaques in vivo. The work detailed herein describes development efforts and results from simulations and experiments that were performed to evaluate this hypothesis. To first demonstrate feasibility and evaluate potential safety concerns, finite-element method simulations are used to model the response of carotid artery plaques to an acoustic radiation force excitation. Lipid pool visualization is shown to vary as a function of lipid pool geometry and stiffness. A comparison of the resulting Von Mises stresses indicates that stresses induced by an ARFI excitation are three orders of magnitude lower than those induced by blood pressure. This thesis also presents the development of a novel pulse inversion harmonic tracking method to reduce clutter-imposed errors in ultrasound-based tissue displacement estimates. This method is validated in phantoms and was found to reduce bias and jitter displacement errors for a marked improvement in image quality in vivo.
Lastly, this dissertation presents results from a preliminary in vivo study that compares ARFI imaging-derived plaque stiffness with spatially registered composition determined by a Magnetic Resonance Imaging (MRI) gold standard.

  8. An application of backprojection for video SAR image formation exploiting a subaperture circular shift register

    NASA Astrophysics Data System (ADS)

    Miller, J.; Bishop, E.; Doerry, A.

    2013-05-01

    This paper details a Video SAR (Synthetic Aperture Radar) mode that provides a persistent view of a scene centered at the Motion Compensation Point (MCP). The radar platform follows a circular flight path. An objective is to form a sequence of SAR images while observing dynamic scene changes at a selectable video frame rate. A formulation of backprojection meets this objective. Modified backprojection equations take into account changes in the grazing angle or squint angle that result from non-ideal flight paths. The algorithm forms a new video frame relying upon much of the signal processing performed in prior frames. The method described applies an appropriate azimuth window to each video frame for sidelobe rejection. A Cardinal Direction Up (CDU) coordinate frame forms images with the top of the image oriented along a given cardinal direction for all video frames. Using this coordinate frame helps characterize a moving target's response. Generation of synthetic targets with linear motion including both constant velocity and constant acceleration is described. The synthetic target video imagery demonstrates dynamic SAR imagery with expected moving target responses. The paper presents 2011 flight data collected by General Atomics Aeronautical Systems, Inc. (GA-ASI) implementing the video SAR mode. The flight data demonstrates good video quality showing moving vehicles. The flight imagery demonstrates the real-time capability of the video SAR mode. The video SAR mode uses a circular shift register of subapertures. The radar employs a Graphics Processing Unit (GPU) in order to implement this algorithm.
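The circular shift register of subapertures can be sketched as a ring buffer: each new subaperture backprojection image joins a running coherent sum and the oldest drops out, so a full-aperture video frame is refreshed at the subaperture rate without re-integrating the whole synthetic aperture. A sketch of the bookkeeping only, not GA-ASI's implementation (class and variable names are illustrative):

```python
from collections import deque
import numpy as np

class SubapertureRing:
    """Maintain the coherent sum of the K most recent subaperture images."""
    def __init__(self, k, shape):
        self.buf = deque(maxlen=k)
        self.frame = np.zeros(shape, dtype=complex)

    def push(self, sub_img):
        if len(self.buf) == self.buf.maxlen:
            self.frame -= self.buf[0]  # oldest subaperture shifts out
        self.buf.append(sub_img)
        self.frame += sub_img          # newest subaperture shifts in
        return self.frame

# Six subaperture images arrive; the video frame always sums the last four.
ring = SubapertureRing(k=4, shape=(2, 2))
for i in range(6):
    frame = ring.push(np.full((2, 2), i + 1, dtype=complex))
# frame now equals the sum of subapertures 3 + 4 + 5 + 6
```

Each `push` costs two array additions instead of a full backprojection over the whole aperture, which is what makes the persistent video view cheap to refresh.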

  9. Informative frame detection from wireless capsule video endoscopic images

    NASA Astrophysics Data System (ADS)

    Bashar, Md. Khayrul; Mori, Kensaku; Suenaga, Yasuhito; Kitasaka, Takayuki; Mekada, Yoshito

    2008-03-01

    Wireless capsule endoscopy (WCE) is a new clinical technology permitting the visualization of the small bowel, the most difficult segment of the digestive tract. The major drawback of this technology is the high amount of time for video diagnosis. In this study, we propose a method for informative frame detection by isolating useless frames that are substantially covered by turbid fluids or their contamination with other materials, e.g., faecal matter, semi-processed or unabsorbed food. Such materials and fluids present a wide range of colors, from brown to yellow, and/or bubble-like texture patterns. The detection scheme, therefore, consists of two stages: highly contaminated non-bubbled (HCN) frame detection and significantly bubbled (SB) frame detection. Local color moments in the Ohta color space are used to characterize HCN frames, which are isolated by the Support Vector Machine (SVM) classifier in Stage-1. The rest of the frames go to the Stage-2, where Laguerre-Gauss Circular Harmonic Functions (LG-CHFs) extract the characteristics of the bubble-structures in a multi-resolution framework. An automatic segmentation method is designed to extract the bubbled regions based on local absolute energies of the CHF responses, derived from the grayscale version of the original color image. Final detection of the informative frames is obtained by using threshold operation on the extracted regions. An experiment with 20,558 frames from three videos shows the excellent average detection accuracy (96.75%) by the proposed method, when compared with the Gabor-based (74.29%) and discrete-wavelet-based (62.21%) features.

  10. Micro-nondestructive evaluation of microelectronics using three-dimensional acoustic imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Guang-Ming; Harvey, David M.; Burton, David R.

    2011-02-01

    Holographic-like three-dimensional (3D) acoustic imaging is developed for micro-nondestructive evaluation of microelectronics. It is implemented by stacking all the interface slices together to locate and identify hidden defects. Matching pursuit based acoustic time-frequency domain imaging is proposed to overcome the wavelength limit of axial resolution so that ultra-thin slices are generated. Experiments are performed on 3D acoustic data collected from microelectronic packages. Results show that the proposed technique resolves closely spaced features that are unavailable by conventional acoustic imaging, revealing more image details of defects.

  11. A Dual Communication and Imaging Underwater Acoustic System

    NASA Astrophysics Data System (ADS)

    Fu, Tricia C.

    A dual communication and imaging underwater acoustic system is proposed and developed throughout this dissertation. Due to the wide variation in underwater channel characteristics, the research here focuses more on robustness to multipath in the shallow underwater acoustic environment, rather than high bit-rate applications and signaling schemes. Lower bit-rate applications (in the hundreds of bits per second (bps) to low kbps), such as the transfer of ecological telemetry data, e.g. conductivity or temperature data, are the primary focus of this dissertation. The parallels between direct sequence spread spectrum in digital communication and pulse-echo with pulse compression in imaging, and channel estimation in communication and range profile estimation in imaging are drawn, leading to a unified communications and imaging platform. A digital communication algorithm for channel order and channel coefficient estimation and symbol demodulation using Matching Pursuit (MP) with Generalized Multiple Hypothesis Testing (GMHT) is implemented in programmable DSP in real time with field experiment results in varying underwater environments for the single receiver (Rx), single transmitter (Tx) case. The custom and off-the-shelf hardware used in the single receiver, single transmitter set of experiments are detailed as well. This work is then extended to the single-input multiple-output (SIMO) case, and then to the full multiple-input multiple-output (MIMO) case. The results of channel estimation are used for simple range profile imaging reconstructions. Successful simulated and experimental results for both transducer array configurations are presented and analyzed. Non-real-time symbol demodulation and channel estimation is performed using experimental data from a scaled testing environment. New hardware based on cost-effective fish-finder transducers for a 6 Rx–1 Tx and 6 Rx–4 Tx transducer array is detailed. Lastly, in an application that is neither communication nor

  12. Acoustic Radiation Force Impulse (ARFI) Imaging-Based Needle Visualization

    PubMed Central

    Rotemberg, Veronica; Palmeri, Mark; Rosenzweig, Stephen; Grant, Stuart; Macleod, David; Nightingale, Kathryn

    2011-01-01

    Ultrasound-guided needle placement is widely used in the clinical setting, particularly for central venous catheter placement, tissue biopsy and regional anesthesia. Difficulties with ultrasound guidance in these areas often result from steep needle insertion angles and spatial offsets between the imaging plane and the needle. Acoustic Radiation Force Impulse (ARFI) imaging leads to improved needle visualization because it uses a standard diagnostic scanner to perform radiation force based elasticity imaging, creating a displacement map that displays tissue stiffness variations. The needle visualization in ARFI images is independent of needle-insertion angle and also extends needle visibility out of plane. Although ARFI images portray needles well, they often do not contain the usual B-mode landmarks. Therefore, a three-step segmentation algorithm has been developed to identify a needle in an ARFI image and overlay the needle prediction on a coregistered B-mode image. The steps are: (1) contrast enhancement by median filtration and Laplacian operator filtration, (2) noise suppression through displacement estimate correlation coefficient thresholding and (3) smoothing by removal of outliers and best-fit line prediction. The algorithm was applied to data sets from horizontal 18, 21 and 25 gauge needles between 0–4 mm offset in elevation from the transducer imaging plane and to 18G needles on the transducer axis (in plane) between 10° and 35° from the horizontal. Needle tips were visualized within 2 mm of their actual position for both horizontal needle orientations up to 1.5 mm offset in elevation from the transducer imaging plane and on-axis angled needles between 10°–35° above the horizontal orientation. We conclude that segmented ARFI images overlaid on matched B-mode images hold promise for improved needle visibility in many clinical applications. PMID:21608445
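The three segmentation steps can be sketched on a synthetic displacement map: median plus Laplacian filtering to enhance contrast, a threshold standing in for the correlation-coefficient mask, and a best-fit line through the surviving pixels. The filter sizes, threshold, and test image below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def neighborhood_stack(a):
    """3x3 neighborhood of every pixel (wrap-around edges; fine for a sketch)."""
    return np.stack([np.roll(np.roll(a, i, axis=0), j, axis=1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1)])

rng = np.random.default_rng(1)

# Synthetic "displacement map": a bright needle-like band plus noise.
img = rng.normal(0.0, 0.05, (64, 64))
for c in range(64):
    r = int(0.5 * c) + 10          # needle along row = 0.5*col + 10
    img[r - 1:r + 2, c] += 1.0

# Step 1: contrast enhancement by median filtration and Laplacian sharpening.
med = np.median(neighborhood_stack(img), axis=0)
lap = (np.roll(med, 1, 0) + np.roll(med, -1, 0)
       + np.roll(med, 1, 1) + np.roll(med, -1, 1) - 4 * med)
enhanced = med - lap

# Step 2: noise suppression by thresholding (standing in for the paper's
# displacement-estimate correlation-coefficient threshold).
mask = enhanced > 0.5

# Step 3: best-fit line through the surviving pixels.
rows, cols = np.nonzero(mask)
slope, intercept = np.polyfit(cols, rows, 1)
```

On this toy map the fitted line recovers the needle's slope and offset, which is what the overlay on the B-mode image would display.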

  13. Single-channel stereoscopic video imaging modality based on a transparent rotating deflector

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Jun, Eunkwon; Ha, Myungjin; Lee, Sangyeob; Yu, SungKon; Jang, Seul G.; Jung, Byungjo

    2015-03-01

    This paper introduces a stereoscopic video imaging modality based on a transparent rotating deflector (TRD). Sequential two-dimensional (2D) left and right images were obtained by rotating the TRD on a stepping motor synchronized with a complementary metal-oxide semiconductor camera, and the components of the imaging modality were controlled through general purpose input/output ports using a microcontroller unit. In this research, live stereoscopic videos were visualized on a personal computer by both active shutter 3D and passive polarization 3D methods. The imaging modality was characterized by evaluating the stereoscopic video image generation and the rotation characteristics of the TRD. The level of 3D conception was estimated in terms of simplified human stereovision. The results show that the single-channel stereoscopic video imaging modality has the potential to become an economical compact stereoscopic device as the system components are amenable to miniaturization, and could be applied in a wide variety of fields.

  14. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
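The "non-physical modes induced by the insufficient frame rates" are aliases: a vibration at frequency f sampled at frame rate fs appears folded to the baseband. A minimal predictor of the apparent frequency (a standard aliasing relation; the paper's model may differ in detail):

```python
def apparent_frequency(f_hz, fs_hz):
    """Frequency (Hz) at which a vibration at f_hz appears when filmed at
    frame rate fs_hz; components above fs/2 fold back into the baseband."""
    return abs(f_hz - fs_hz * round(f_hz / fs_hz))

# A 52 Hz vibration filmed at 30 frames/s shows up as a false 8 Hz mode,
# while a 7 Hz vibration below the Nyquist limit is captured correctly.
```

Predicting these fold-back frequencies is what lets the method exclude the false modes from the identified spectrum.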

  15. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026

  16. An infrared high rate video imager for various space applications

    NASA Astrophysics Data System (ADS)

    Svedhem, Håkan; Koschny, Detlef

    2010-05-01

    Modern spacecraft with high data transmission capabilities have opened up the possibility to fly video rate imagers in space. Several fields concerned with observations of transient phenomena can benefit significantly from imaging at video frame rate. Some applications are observations and characterization of bolides/meteors, sprites, lightning, volcanic eruptions, and impacts on airless bodies. Applications can be found both on low and high Earth orbiting spacecraft as well as on planetary and lunar orbiters. The optimum wavelength range varies depending on the application but we will focus here on the near infrared, partly since it allows exploration of a new field and partly because it, in many cases, allows operation both during day and night. Such an instrument has to our knowledge never flown in space so far. The only sensors of a similar kind fly on US defense satellites for monitoring launches of ballistic missiles. The data from these sensors, however, is largely inaccessible to scientists. We have developed a bread-board version of such an instrument, the SPOSH-IR. The instrument is based on an earlier technology development, SPOSH (a Smart Panoramic Optical Sensor Head), for operation in the visible range, but with the sensor replaced by a cooled IR detector and new optics. The instrument uses a Sofradir 320x256 pixel HgCdTe detector array with 30 µm pixel size, mounted directly on top of a four stage thermoelectric Peltier cooler. The detector-cooler combination is integrated into an evacuated closed package with a glass window on its front side. The detector has a sensitive range between 0.8 and 2.5 µm. The optical part is a seven-lens design with a focal length of 6 mm and a FOV of 90 deg by 72 deg, optimized for use at SWIR. The detector operates at 200K while the optics operates at ambient temperature. The optics and electronics for the bread-board have been designed and built by Jena-Optronik, Jena, Germany.
This talk will present the design and the

  17. Feasibility of High Frequency Acoustic Imaging for Inspection of Containments

    SciTech Connect

    C.N. Corrado; J.E. Bondaryk; V. Godino

    1998-08-01

    The Nuclear Regulatory Commission has a program at the Oak Ridge National Laboratory to provide assistance in their assessment of the effects of potential degradation on the structural integrity and leaktightness of metal containment vessels and steel liners of concrete containment in nuclear power plants. One of the program objectives is to identify techniques for inspection of inaccessible portions of the containment pressure boundary. Acoustic imaging has been identified as one of these potential techniques. A numerical feasibility study investigated the use of high-frequency bistatic acoustic imaging techniques for inspection of inaccessible portions of the metallic pressure boundary of nuclear power plant containment. The range-dependent version of the OASES Code developed at the Massachusetts Institute of Technology was utilized to perform a series of numerical simulations. OASES is a well developed and extensively tested code for evaluation of the acoustic field in a system of stratified fluid and/or elastic layers. Using the code, an arbitrary number of fluid or solid elastic layers are interleaved, with the outer layers modeled as halfspaces. High frequency vibrational sources were modeled to simulate elastic waves in the steel. The received field due to an arbitrary source array can be calculated at arbitrary depth and range positions. In this numerical study, waves that reflect and scatter from surface roughness caused by modeled degradations (e.g., corrosion) are detected and used to identify and map the steel degradation. Variables in the numerical study included frequency, flaw size, interrogation distance, and sensor incident angle. Based on these analytical simulations, it is considered unlikely that acoustic imaging technology can be used to investigate embedded steel liners of reinforced concrete containment. The thin steel liner and high signal losses to the concrete make this application difficult. Results for portions of steel containment

  18. Multi-crack imaging using nonclassical nonlinear acoustic method

    NASA Astrophysics Data System (ADS)

    Zhang, Lue; Zhang, Ying; Liu, Xiao-Zhou; Gong, Xiu-Fen

    2014-10-01

    Solid materials with cracks exhibit nonclassical nonlinear acoustic behavior. The micro-defects in solid materials can be detected by the nonlinear elastic wave spectroscopy (NEWS) method with a time-reversal (TR) mirror. When defects lie in a viscoelastic solid material at different distances from one another, the nonlinear and hysteretic stress-strain relation is established with the Preisach-Mayergoyz (PM) model in the crack zone. Pulse inversion (PI) and TR methods are used in numerical simulation and defect locations can be determined from images obtained by the maximum value. Since false-positive defects might appear and degrade the imaging when the defects are located quite closely, maximum-value imaging with a time window is introduced to analyze how defects affect each other and how the fake one occurs. Furthermore, the NEWS-TR-NEWS method is put forward to improve the NEWS-TR scheme, with another forward propagation (NEWS) added to the existing phases (NEWS and TR). In the added phase, scanner locations are determined by locations of all defects imaged in previous phases, so that whether an imaged defect is real can be deduced. The NEWS-TR-NEWS method is proved to be effective in distinguishing real defects from the false-positive ones. Moreover, it is also helpful to detect the crack that is weaker than others during the imaging procedure.
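The pulse inversion step can be illustrated on a memoryless quadratic nonlinearity: exciting with a pulse and its inverse and summing the two responses cancels the linear (odd) part and leaves twice the even-order nonlinear part, which is the signature that flags a cracked region. A toy model; the paper uses the hysteretic PM model, not a simple polynomial, and the coefficients here are arbitrary:

```python
import numpy as np

def respond(x, a=1.0, b=0.2):
    """Toy nonlinear medium: linear term plus a quadratic 'crack' term."""
    return a * x + b * x**2

t = np.linspace(0.0, 1.0, 200)
pulse = np.sin(2 * np.pi * 5 * t)

# Pulse inversion: excite with +pulse and -pulse, then sum the responses.
residual = respond(pulse) + respond(-pulse)
# The linear part cancels exactly; residual == 2*b*pulse**2, so a nonzero
# residual marks nonlinearity regardless of how strong the linear echo is.
```

In the imaging scheme this residual, rather than the raw echo, is what gets time-reversed and refocused onto the defects.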

  19. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, Kevin F.

    1994-01-01

    The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels per frame. This imager offers an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their integration. The imager chip can be divided into three distinct components: (1) image capture via an array of avalanche photodiodes (APD's), (2) charge collection, storage and overflow control via a charge transfer transistor device (CTD), and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front end gain at low noise and low operating voltages while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently work is progressing towards the development of manufacturable designs for each of these component devices. In addition to the development of each of the three distinct components, work towards their integration is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrains the design as well. The progress made during this period is described in detail in Sections 2-4.

  20. Object detection and imaging with acoustic time reversal mirrors

    NASA Astrophysics Data System (ADS)

    Fink, Mathias

    1993-11-01

    Focusing an acoustic wave on an object of unknown shape through an inhomogeneous medium of arbitrary geometry is a challenge in underground detection, and optimal detection and imaging of objects requires the development of such focusing techniques. The use of a time reversal mirror (TRM) represents an original solution to this problem: it realizes, in real time, a focusing process matched to the object shape, to the geometries of the acoustic interfaces, and to the geometry of the mirror. It is a self-adaptive technique that compensates for any geometrical distortion of the mirror structure as well as for diffraction and refraction effects through the interfaces. Two real-time 64- and 128-channel prototypes have been built in our laboratory, and TRM experiments demonstrating the TRM performance through inhomogeneous solid and liquid media are presented. Applications to medical therapy (kidney stone detection and destruction) and to nondestructive testing of metallurgical samples of different geometries are described. Extension of this study to underground detection and imaging is discussed.
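
    The focusing principle behind a TRM can be sketched in one dimension (the array geometry, sampling rate, and sound speed below are illustrative assumptions, not the prototypes' parameters): each element records the field from a source with its own travel-time delay, and re-emitting the time-reversed records makes all contributions arrive in phase at the original source location.

```python
import numpy as np

fs = 1e6                                            # sampling rate, Hz (assumed)
n = 4096
t = np.arange(n) / fs
pulse = np.exp(-((t - 5e-4) ** 2) / (2 * (2e-5) ** 2))  # source pulse

c = 1500.0                                          # sound speed, m/s (assumed)
dists = np.array([0.30, 0.45, 0.60, 0.75])          # mirror element ranges, m

def delay(sig, tau, fs):
    """Delay a signal by tau seconds (integer-sample approximation)."""
    k = int(round(tau * fs))
    return np.r_[np.zeros(k), sig[:len(sig) - k]]

# Forward step: each mirror element records the pulse after its travel time.
records = [delay(pulse, d / c, fs) for d in dists]

# Time-reverse each record and propagate back (another delay of d/c).
refocus = sum(delay(r[::-1], d / c, fs) for r, d in zip(records, dists))
```

Because every back-propagated contribution is re-aligned by its own two-way delay, the refocused field peaks at roughly the number of elements times a single contribution, which is the self-adaptive focusing the abstract describes.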

  1. Diagnostic agreement when comparing still and video imaging for the medical evaluation of child sexual abuse.

    PubMed

    Killough, Emily; Spector, Lisa; Moffatt, Mary; Wiebe, Jan; Nielsen-Parker, Monica; Anderst, Jim

    2016-02-01

    Still photo imaging is often used in medical evaluations of child sexual abuse (CSA), but video imaging may be superior. We aimed to compare still images to videos with respect to diagnostic agreement regarding hymenal deep notches and transections in post-pubertal females. Additionally, we evaluated the role of experience and expertise on agreement. We hypothesized that videos would result in improved diagnostic agreement among multiple evaluators as compared to still photos. This was a prospective quasi-experimental study using imaging modality as the quasi-independent variable. The dependent variable was diagnostic agreement of participants regarding the presence/absence of findings indicating penetrative trauma on non-acute post-pubertal genital exams. Participants were medical personnel who regularly perform CSA exams. Diagnostic agreement was evaluated utilizing a retrospective selection of videos and still photos obtained directly from the videos. Videos and still photos were embedded into an online survey as sixteen cases. One hundred sixteen participants completed the study. Participant diagnosis was more likely to agree with the study center diagnosis when using video (p<0.01). Use of video resulted in statistically significant changes in diagnosis in four of eight cases. In two cases, the diagnosis of the majority of participants changed from no hymenal transection to transection present. No difference in agreement was found based on experience or expertise. Use of video vs. still images resulted in increased agreement with the original examiner and changes in diagnostic impressions in the review of CSA exams. Further study is warranted, as video imaging may have significant impacts on diagnosis. PMID:26746111

  2. USING SAS (TRADE NAME) COLOR GRAPHICS FOR VIDEO IMAGE ANALYSIS

    EPA Science Inventory

    Wind-tunnel studies are conducted to evaluate the temporal and spatial distributions of pollutants in the wake of a model building. As part of these studies, video pictures of smoke are being used to study the dispersion patterns of pollution in the wake of buildings. The video i...

  3. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e. images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).

  4. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.

    1994-01-01

    The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels/frame. This imager will offer an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their subsequent integration. The camera chip can be divided into three distinct functions: (1) image capture via an array of avalanche photodiodes (APD's); (2) charge collection, storage, and overflow control via a charge transfer transistor device (CTD); and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently work is progressing towards the optimization of each of these component devices. In addition to the development of each of the three distinct components, work towards their integration and manufacturability is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrains the design as well. The progress made during this period is described in detail.

  5. Video imaging system and thermal mapping of the molten hearth in an electron beam melting furnace

    SciTech Connect

    Miszkiel, M.E.; Davis, R.A.; Van Den Avyle, J.A.

    1995-12-31

    This project was initiated to develop an enhanced video imaging system for the Liquid Metal Processing Laboratory Electron Beam Melting (EB) Furnace at Sandia and to use color video images to map the temperature distribution of the surface of the molten hearth. In a series of test melts, the color output of the video image was calibrated against temperatures measured by an optical pyrometer and a CCD camera viewing port above the molten pool. To prevent potential metal vapor deposition onto line-of-sight optical surfaces above the pool, argon backfill was used along with a pinhole aperture to obtain the video image. The geometry of the optical port to the hearth set the limits for the focus lens and the CCD camera's field of view. Initial melts were completed with the pyrometer and pinhole aperture port in a fixed position. Using commercially available vacuum components, a second flange assembly was constructed to provide flexibility in choosing pyrometer target sights on the hearth and to adjust the field of view for the focus lens/CCD combination. RGB video images processed from the melts verified that red-wavelength light captured with the video camera could be calibrated with the optical pyrometer target temperatures and used to generate temperature maps of the hearth surface. Two-color ratio thermal mapping using red and green video images, which has theoretical advantages, was less successful due to probable camera non-linearities in the red and green image intensities.
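
    The two-color ratio idea can be sketched under the Wien approximation (the band wavelengths and the example temperature below are illustrative assumptions): taking the ratio of intensities at two wavelengths cancels the unknown emissivity, provided it is equal in both bands, which is the assumption that camera non-linearities undermine in practice.

```python
import numpy as np

C2 = 1.4388e-2   # second radiation constant, m*K

def wien_intensity(lam, T):
    """Graybody spectral intensity in the Wien approximation."""
    return lam**-5 * np.exp(-C2 / (lam * T))

def ratio_temperature(I1, I2, lam1, lam2):
    """Invert the two-color intensity ratio for temperature,
    assuming equal emissivity at both wavelengths."""
    lnR = np.log(I1 / I2)
    return C2 * (1 / lam1 - 1 / lam2) / (5 * np.log(lam2 / lam1) - lnR)

# Round trip at an example hearth temperature with assumed green/red bands.
lam_g, lam_r = 540e-9, 620e-9
T = ratio_temperature(wien_intensity(lam_g, 1950.0),
                      wien_intensity(lam_r, 1950.0), lam_g, lam_r)
```

Any wavelength-dependent gain error in the camera enters `lnR` directly and biases the recovered temperature, which is consistent with the single-band red calibration working better here than the two-color ratio.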

  6. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents’ Perspectives

    PubMed Central

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O’Connor, Alexander; Collins, Michael J.

    2015-01-01

    This study examined adolescents’ attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one’s attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players’ attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents’ social cognitive judgments. PMID:25729336

  7. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  8. Acoustical imaging of spheres above a reflecting surface

    NASA Astrophysics Data System (ADS)

    Chambers, David; Berryman, James

    2003-04-01

    An analytical study using the MUSIC method of subspace imaging is presented for the case of spheres above a reflecting boundary. The field scattered from the spheres and the reflecting boundary is calculated analytically, neglecting interactions between spheres. The singular value decomposition of the response matrix is calculated and the singular vectors divided into signal and noise subspaces. Images showing the estimated sphere locations are obtained by backpropagating the noise vectors using either the free space Green's function or the Green's function that incorporates reflections from the boundary. We show that the latter Green's function improves imaging performance after applying a normalization that compensates for the interference between direct and reflected fields. We also show that the best images are attained in some cases when the number of singular vectors in the signal subspace exceeds the number of spheres. This is consistent with previous analysis showing multiple eigenvalues of the time reversal operator for spherical scatterers [Chambers and Gautesen, J. Acoust. Soc. Am. 109 (2001)]. [Work performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.]
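
    The MUSIC pipeline described above, SVD of the multistatic response matrix, separation into signal and noise subspaces, and a pseudospectrum built from the noise subspace, can be sketched in free space (array geometry and scatterer positions are assumptions, and the paper's half-space Green's function is replaced here by the free-space one):

```python
import numpy as np

k = 2 * np.pi / 0.01                      # wavenumber for 1 cm wavelength (assumed)
array_x = np.linspace(-0.5, 0.5, 32)      # 32-element linear array at z = 0
elems = np.c_[array_x, np.zeros(32)]
scats = np.array([[-0.1, 0.6], [0.2, 0.8]])   # two point scatterers (assumed)

def green(p, q):
    """Scalar free-space Green's function (3D form) between two points."""
    r = np.linalg.norm(p - q)
    return np.exp(1j * k * r) / r

# Born-approximation multistatic response matrix (sphere interactions neglected,
# as in the abstract).
G = np.array([[green(e, s) for s in scats] for e in elems])   # (32, 2)
K = G @ G.T

U, svals, Vh = np.linalg.svd(K)
noise = U[:, len(scats):]                 # noise subspace (signal dim = 2)

def pseudospectrum(p):
    """MUSIC image value: large where the steering vector is orthogonal
    to the noise subspace, i.e. at scatterer locations."""
    g = np.array([green(e, p) for e in elems])
    g /= np.linalg.norm(g)
    return 1.0 / (1e-30 + np.linalg.norm(noise.conj().T @ g) ** 2)
```

Swapping `green` for a half-space Green's function with a boundary reflection, plus the normalization the abstract mentions, is what distinguishes the paper's imaging from this free-space sketch.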

  9. Implementation and evaluation of simultaneous video-electroencephalography and functional magnetic resonance imaging.

    PubMed

    Chaudhary, Umair J; Kokkinos, Vasileios; Carmichael, David W; Rodionov, Roman; Gasston, David; Duncan, John S; Lemieux, Louis

    2010-10-01

    The objective of this study was to demonstrate that the addition of simultaneous and synchronised video to electroencephalography (EEG)-correlated functional magnetic resonance imaging (fMRI) could increase recorded information without reducing data quality. We investigated the effect of placing EEG and video equipment and their required power supplies inside the scanner room on EEG, video and MRI data quality, and evaluated video-EEG-fMRI by modelling a hand motor task. Gradient-echo, echo-planar images (EPI) were acquired on a 3-T MRI scanner at variable camera positions in a test object [with and without radiofrequency (RF) excitation] and in human subjects. EEG was recorded using a commercial MR-compatible 64-channel cap and amplifiers. Video recording was performed using a two-camera custom-made system with EEG synchronization. An in-house script was used to calculate the signal to fluctuation noise ratio (SFNR) from EPI in the test object at variable camera positions and in human subjects with and without concurrent video recording. Five subjects were investigated with video-EEG-fMRI while performing a hand motor task. The fMRI time series data were analysed using statistical parametric mapping, by building block-design general linear models that were paradigm-prescribed and video-based. Introduction of the cameras did not alter the SFNR significantly, nor did it show any signs of spike noise during RF-off conditions. Video and EEG quality also did not show any significant artefact. The SPM{T} maps from the video-based design revealed additional blood oxygen level-dependent responses in the expected locations for non-compliant subjects compared to the paradigm-prescribed design. We conclude that a video-EEG-fMRI setup can be implemented without affecting data quality significantly and may provide valuable information on behaviour to enhance the analysis of fMRI data. PMID:20233646
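
    The block-design general linear model at the heart of the analysis reduces to least-squares fitting of a task regressor against each voxel time series; a minimal sketch (regressor shape, noise level, and effect size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # number of scans (assumed)

# Boxcar task regressor: alternating 20-scan on/off blocks.
boxcar = (np.arange(n) // 20) % 2
X = np.c_[boxcar, np.ones(n)]             # design matrix: task + constant

# Simulated voxel time series with a known task effect plus noise.
beta_true = np.array([2.0, 5.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Ordinary least squares, the core of an SPM-style GLM fit.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

A video-based design simply replaces the paradigm-prescribed boxcar with regressors derived from observed behaviour, which is how the extra responses in non-compliant subjects were recovered.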

  10. Single-channel stereoscopic video imaging modality based on transparent rotating deflector.

    PubMed

    Radfar, Edalat; Jang, Won Hyuk; Freidoony, Leila; Park, Jihoon; Kwon, Kichul; Jung, Byungjo

    2015-10-19

    In this study, we developed a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (TRD). Sequential two-dimensional (2D) left and right images were obtained through the TRD synchronized with a camera, and the components of the imaging modality were controlled by a microcontroller unit. The imaging modality was characterized by evaluating the stereoscopic video image generation, rotation of the TRD, heat generation by the stepping motor, and image quality and its stability in terms of the structural similarity index. The degree of depth perception was estimated and subjective analysis was performed to evaluate the depth perception improvement. The results show that the single-channel stereoscopic video imaging modality may: 1) overcome some limitations of conventional stereoscopic video imaging modalities; 2) be a potential economical compact stereoscopic imaging modality if the system components can be miniaturized; 3) be easily integrated into current 2D optical imaging modalities to produce a stereoscopic image; and 4) be applied to various medical and industrial fields. PMID:26480428

  11. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  12. Comparison of active millimeter-wave and acoustic imaging for weapon detection

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Collins, H. D.; Gribble, R. Parks; McMakin, Douglas L.

    1997-02-01

    Millimeter-wave holographic imaging techniques have recently been developed for personnel surveillance applications at airports and other high-security checkpoints. Millimeter-wave imaging is useful for this application since millimeter waves easily pass through common clothing materials yet are reflected from the human body and any items concealed by clothing. This allows a high-resolution imaging system to form an image revealing items concealed on the person imaged. A prototype imaging system developed at Pacific Northwest National Laboratory uses a scanned linear array of millimeter-wave antennas to capture wideband millimeter-wave data in approximately one second. These data are then mathematically reconstructed to form a high-resolution 3D image of the person being scanned. Millimeter-wave imaging has been demonstrated to be effective for detecting concealed weapons on personnel. Another imaging technique which could be applied to the weapon detection problem is acoustic imaging. Like millimeter waves, ultrasonic acoustic waves can also penetrate clothing and can be used to form relatively high-resolution images which can reveal concealed weapons on personnel. Acoustic imaging results have been obtained using wideband holographic imaging techniques nearly identical to those used for millimeter-wave imaging. Preliminary imaging results at 50 kHz indicate that acoustic imaging can be used to penetrate some types of common clothing materials. Hard clothing materials, such as leather or vinyl, are essentially opaque to acoustic waves at 50 kHz. In this paper, millimeter-wave and acoustic wave imaging techniques are compared for their effectiveness and suitability in weapon detection imaging systems. Experimental results from both imaging modalities are shown.

  13. Acoustics

    NASA Astrophysics Data System (ADS)

    The acoustics research activities of the DLR fluid-mechanics department (Forschungsbereich Stroemungsmechanik) during 1988 are surveyed and illustrated with extensive diagrams, drawings, graphs, and photographs. Particular attention is given to studies of helicopter rotor noise (high-speed impulsive noise, blade/vortex interaction noise, and main/tail-rotor interaction noise), propeller noise (temperature, angle-of-attack, and nonuniform-flow effects), noise certification, and industrial acoustics (road-vehicle flow noise and airport noise-control installations).

  14. Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager.

    PubMed

    Marijan, Malisa; Demirkol, Ilker; Maricic, Danijel; Sharma, Gaurav; Ignjatovic, Zeljko

    2010-10-01

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic for the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules, in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based upon a pixel level sigma-delta (Σ∆) image sensor design that allows investigation of the tradeoff between the bit depth of the captured images and spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under power constraint and prolong the lifetime of the WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget. PMID:20551000
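
    The budget-constrained allocation idea can be sketched with a toy distortion model (the functional forms and constants below are assumptions for illustration, not the paper's P-R-D characteristics): distortion falls as more power goes to either sensing or transmission, and the best split minimizes their sum under the budget.

```python
import numpy as np

def distortion(p_sense, p_tx, a=1.0, b=2.0):
    """Toy total distortion: each term decreases as its module gets power.
    The constants a, b are made-up weights, not measured characteristics."""
    return a / (1e-3 + p_sense) + b / (1e-3 + p_tx)

budget = 1.0
shares = np.linspace(0.01, 0.99, 99)          # fraction of budget to the sensor
d = [distortion(s * budget, (1 - s) * budget) for s in shares]
best = shares[int(np.argmin(d))]
```

With the transmission term weighted more heavily here, the optimum shifts power away from sensing; in the paper's framework the analogous shift is what makes adaptive sensing necessary below a power threshold.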

  15. From computer images to video presentation: Enhancing technology transfer

    NASA Technical Reports Server (NTRS)

    Beam, Sherilee F.

    1994-01-01

    With NASA placing increased emphasis on transferring technology to outside industry, NASA researchers need to evaluate many aspects of their efforts in this regard. Often it may seem like too much self-promotion to many researchers. However, industry's use of video presentations in sales, advertising, public relations and training should be considered. Today, the most typical presentation at NASA is through the use of vu-graphs (overhead transparencies), which can be effective for text or static presentations. For full-blown color and sound presentations, however, the best method is videotape. In fact, it is frequently more convenient due to its portability and the availability of viewing equipment. This talk describes techniques for creating a video presentation through the use of a combined researcher and video professional team.

  16. Dual-frequency acoustic droplet vaporization detection for medical imaging.

    PubMed

    Arena, Christopher B; Novell, Anthony; Sheeran, Paul S; Puett, Connor; Moyer, Linsey C; Dayton, Paul A

    2015-09-01

    Liquid-filled perfluorocarbon droplets emit a unique acoustic signature when vaporized into gas-filled microbubbles using ultrasound. Here, we conducted a pilot study in a tissue-mimicking flow phantom to explore the spatial aspects of droplet vaporization and investigate the effects of applied pressure and droplet concentration on image contrast and axial and lateral resolution. Control microbubble contrast agents were used for comparison. A confocal dual-frequency transducer was used to transmit at 8 MHz and passively receive at 1 MHz. Droplet signals were of significantly higher energy than microbubble signals. This resulted in improved signal separation and high contrast-to-tissue ratios (CTR). Specifically, with a peak negative pressure (PNP) of 450 kPa applied at the focus, the CTR of B-mode images was 18.3 dB for droplets and -0.4 dB for microbubbles. The lateral resolution was dictated by the size of the droplet activation area, with lower pressures resulting in smaller activation areas and improved lateral resolution (0.67 mm at 450 kPa). The axial resolution in droplet images was dictated by the size of the initial droplet and was independent of the properties of the transmit pulse (3.86 mm at 450 kPa). In post-processing, time-domain averaging (TDA) improved droplet and microbubble signal separation at high pressures (640 kPa and 700 kPa). Taken together, these results indicate that it is possible to generate high-sensitivity, high-contrast images of vaporization events. In the future, this has the potential to be applied in combination with droplet-mediated therapy to track treatment outcomes or as a standalone diagnostic system to monitor the physical properties of the surrounding environment. PMID:26415125
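
    The contrast-to-tissue ratio quoted above is an energy ratio expressed in decibels; a minimal helper shows the convention (the energies passed in are made-up examples, not the study's measurements):

```python
import numpy as np

def ctr_db(contrast_energy, tissue_energy):
    """Contrast-to-tissue ratio in dB, as a ratio of signal energies."""
    return 10 * np.log10(contrast_energy / tissue_energy)

# Equal energies give 0 dB; a 100x energy advantage gives 20 dB.
equal = ctr_db(100.0, 100.0)
hundredfold = ctr_db(1000.0, 10.0)
```

On this scale, 18.3 dB means the droplet signal carried roughly 68 times the energy of the tissue background, while -0.4 dB means the microbubble signal was barely distinguishable from it.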

  17. Acoustic and Elastodynamic Redatuming for VSP Salt Dome Flank Imaging

    NASA Astrophysics Data System (ADS)

    Lu, R.; Willis, M.; Toksoz, N.

    2007-12-01

    We apply an extension of the concept of Time Reversed Acoustics (TRA) for imaging salt dome flanks using Vertical Seismic Profile (VSP) data. We demonstrate its performance and capabilities on both synthetic acoustic and elastic seismic data from a Gulf of Mexico (GOM) model. This target-oriented strategy eliminates the need for the traditional complex process of velocity estimation, model building, and iterative depth migration to remove the effects of the salt canopy and surrounding overburden. In this study, we use data from surface shots recorded in a well from a walkaway VSP survey. The method, called redatuming, creates a geometry as if the source and receiver pairs had been located in the borehole at the positions of the receivers. This process generates effective downhole shot gathers without any knowledge of the overburden velocity structure. The resulting shot gathers are less complex, since the VSP ray paths from the surface source are shortened and moved to be as if they started in the borehole, reflected off the salt flank region, and were captured in the borehole. After redatuming, we apply multiple passes of prestack migration from the reference datum of the borehole. In our example, the first-pass migration, using only a simple vertical velocity-gradient model, reveals the outline of the salt edge. A second pass of reverse-time prestack depth migration using the full two-way wave equation is performed with an updated velocity model that now consists of the velocity gradient and the salt dome. The second-pass migration brings out the dipping sediments abutting the salt flank, because these reflectors were illuminated by energy that bounced off the salt flank, forming prismatic reflections.

  18. Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story

    PubMed Central

    Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491

  19. Realization of video electronics system in the space-borne multispectral imager

    NASA Astrophysics Data System (ADS)

    Rong, Peng; Lei, Ning; Cheng, Ganglin; Huang, Jing

    2015-08-01

    In this paper, a new multispectral imager video electronics system is introduced. The system provides imaging in the visible spectrum (VIS), near-infrared (NIR), short-wave infrared (SWIR), medium-wave infrared (MWIR) and long-wave infrared (LWIR). It is comprised of three video processors and an information processor. The three video processors are the VIS-NIR processor, the SWIR-MWIR processor and the LWIR processor. The VIS-NIR processor uses time delay and integration charge-coupled devices (TDICCD) as its detector, samples and quantifies the CCD signal in correlated double sampling (CDS) mode, and corrects the image data using a large-scale field-programmable gate array (FPGA). The SWIR-MWIR and LWIR processors work similarly. The information processor is the most important part of the video electronics system. It is responsible for receiving remote-control commands from other equipment, transmitting telemetric data, controlling the three video processors so that they work synchronously, and encoding and transmitting the image data from the video processors. Besides the introduction of the system's functions and composition, detailed implementation methods for some important components are also described. Experimental results show that all main technical indexes meet the design requirements.

  20. Thinking Images: Doing Philosophy in Film and Video

    ERIC Educational Resources Information Center

    Parkes, Graham

    2009-01-01

    Over the past several decades film and video have been steadily infiltrating the philosophy curriculum at colleges and universities. Traditionally, teachers of philosophy have not made much use of "audiovisual aids" in the classroom beyond the chalk board or overhead projector, with only the more adventurous playing audiotapes, for example, or…

  1. Development of an Image Compression and Authentication Module for video surveillance systems

    SciTech Connect

    Hale, W.R.; Johnson, C.S.; DeKeyser, P.

    1995-07-01

    An Image Compression and Authentication Module (ICAM) has been designed to perform the digitization, compression, and authentication of video images in a camera enclosure. The ICAM makes it possible to build video surveillance systems that protect the transmission and storage of video images. The ICAM functions with both NTSC 525-line and PAL 625-line cameras and contains a neuron chip (integrated circuit) permitting it to be interfaced with a local operating network which is part of the Modular Integrated Monitor System (MIMS). The MIMS can be used to send commands to the ICAM from a central controller or any sensor on the network. The ICAM is capable of working as a stand-alone unit or it can be integrated into a network of other cameras. As a stand-alone unit, it sends its video images directly over a high-speed serial digital link to a central controller for storage. A number of ICAMs can be multiplexed on a single coaxial cable. In this case, images are captured by each ICAM and held until the MIMS delivers commands for an individual image to be transmitted for review or storage. The ICAM can capture images on a time-interval basis or upon receipt of a trigger signal from another sensor on the network. An ICAM which collects images based on other sensor signals forms the basis of an intelligent "front end" image collection system. The burden of image review associated with present video systems is reduced by recording only the images with significant action. The cards used in the ICAM can also be used to decompress and display the compressed images on an NTSC/PAL monitor.

  2. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  3. Image-based rendering method for mapping endoscopic video onto CT-based endoluminal views

    NASA Astrophysics Data System (ADS)

    Rai, Lav; Higgins, William E.

    2006-03-01

    One of the indicators of early lung cancer is a color change in airway mucosa. Bronchoscopy of the major airways can provide high-resolution color video of the airway tree's mucosal surfaces. In addition, 3D MDCT chest images provide 3D structural information of the airways. Unfortunately, the bronchoscopic video contains no explicit 3D structural and position information, and the 3D MDCT data captures no color or textural information of the mucosa. A fusion of the topographical information from the 3D CT data and the color information from the bronchoscopic video, however, enables realistic 3D visualization, navigation, localization, and quantitative color-topographic analysis of the airways. This paper presents a method for topographic airway-mucosal surface mapping from bronchoscopic video onto 3D MDCT endoluminal views. The method uses registered video images and CT-based virtual endoscopic renderings of the airways. The visibility and depth data are also generated by the renderings. Uniform sampling and over-scanning of the visible triangles are done before they are packed into a texture space. The texels are then re-projected onto video images and assigned color values based on depth and illumination data obtained from renderings. The texture map is loaded into the rendering engine to enable real-time navigation through the combined 3D CT surface and bronchoscopic video data. Tests were performed on pre-recorded bronchoscopy patient video and associated 3D MDCT scans. Results show that we can effectively accomplish mapping over a continuous sequence of airway images spanning several generations of airways.

  4. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This space shuttle orbiter payload bay (PLB) video image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). The image is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  5. Self-images in the video monitor coded by monkey intraparietal neurons.

    PubMed

    Iriki, A; Tanaka, M; Obayashi, S; Iwamura, Y

    2001-06-01

    When playing a video game, or using a teleoperator system, we feel our self-image projected into the video monitor as a part of or an extension of ourselves. Here we show that such a self image is coded by bimodal (somatosensory and visual) neurons in the monkey intraparietal cortex, which have visual receptive fields (RFs) encompassing their somatosensory RFs. We earlier showed these neurons to code the schema of the hand which can be altered in accordance with psychological modification of the body image; that is, when the monkey used a rake as a tool to extend its reach, the visual RFs of these neurons elongated along the axis of the tool, as if the monkey's self image extended to the end of the tool. In the present experiment, we trained monkeys to recognize their image in a video monitor (despite the earlier general belief that monkeys are not capable of doing so), and demonstrated that the visual RF of these bimodal neurons was now projected onto the video screen so as to code the image of the hand as an extension of the self. Further, the coding of the imaged hand could intentionally be altered to match the image artificially modified in the monitor. PMID:11377755

  6. PRELIMINARY STUDIES OF VIDEO IMAGES OF SMOKE DISPERSION IN THE NEAR WAKE OF A MODEL BUILDING

    EPA Science Inventory

    A series of analyses of video images of smoke in a wind tunnel study of dispersion in the near wake of a model building is presented. The analyses provide information on both the instantaneous and the time-average patterns of dispersion. Since the images represent vertically-integ...

  7. EVALUATION OF THE WAKE EFFECTS ON PLUME DISPERSION USING VIDEO IMAGE ANALYSIS

    EPA Science Inventory

    The report summarizes results from a cooperative research agreement entitled "Evaluation of Wake Effects on Plume Dispersion using Video Image Analysis." Video images of smoke flow in the wake of a model building which were collected in previous wind tunnel studies conducted by EP...

  8. 12-Month-Old Infants' Perception of Attention Direction in Static Video Images

    ERIC Educational Resources Information Center

    von Hofsten, Claes; Dahlstrom, Emma; Fredriksson, Ylva

    2005-01-01

    Twelve-month-old infants' ability to perceive gaze direction in static video images was investigated. The images showed a woman who performed attention-directing actions by looking or pointing toward 1 of 4 objects positioned in front of her (2 on each side). When the model just pointed at the objects, she looked straight ahead, and when she just…

  9. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) is the minimum difference between two visual stimuli that an observer can detect. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed images/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, called the anchor. The JND metric can be used to save coding bitrate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate perceptual differences in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
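The bisection search this abstract describes can be sketched as follows; the integer quality levels and the `same_quality` comparison oracle are hypothetical stand-ins for the subjective pairwise test with a human assessor.

```python
def find_jnd(anchor_level, max_level, same_quality, tol=1):
    """Bisect for the lowest quality level an assessor can
    distinguish from the anchor.

    same_quality(a, b) -> True if the assessor judges levels a and b
    to look identical (stand-in for the subjective comparison).
    Assumes max_level is distinguishable from the anchor.
    """
    lo, hi = anchor_level, max_level
    while hi - lo > tol:
        mid = (lo + hi) // 2
        if same_quality(anchor_level, mid):
            lo = mid          # still indistinguishable: search higher
        else:
            hi = mid          # distinguishable: JND is at or below mid
    return hi

# Toy oracle: level differences of 8 or more are visible.
jnd = find_jnd(0, 64, lambda a, b: abs(a - b) < 8)
```

With 64 candidate levels this needs about log2(64) = 6 comparisons per anchor instead of a linear sweep, which is the source of the claimed reduction in subjective-test effort.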

  10. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video pictures from wireless capsule endoscopes. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data is transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP is stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the DSP and reduce the executable code size. At the same time, proper addresses are assigned to the memories, which run at different speeds, and the memory structure is optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance. PMID:18435246
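To illustrate the DCT-plus-quantization stage the abstract mentions, here is a naive (deliberately not fast) 2-D DCT-II of an 8x8 block followed by uniform quantization; real JPEG coders use per-frequency quantization tables and factorized fast transforms, so this is only a reference sketch.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (JPEG-style, unoptimized)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q=16):
    """Uniform coefficient quantization (real codecs use a
    per-frequency quantization table instead of one step size)."""
    return [[round(c / q) for c in row] for row in coeffs]

# A flat block compacts all its energy into the DC coefficient,
# which is what makes the subsequent entropy coding effective.
flat = [[100] * 8 for _ in range(8)]
coeffs = dct2(flat)
```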

  11. High-speed video recording system using multiple CCD imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Clements, Reginald M.

    1995-05-01

    This paper describes a fully solid state high speed video recording system. Its principle of operation is based on the use of several independent CCD imagers and an array of liquid crystal light valves that control which imager receives the light from the subject. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording: exposure timing, image digitization and storage, and sequential playback onto a standard video monitor. The system is capable of recording full screen black and white images with spatial resolution similar to that of standard television, at rates of about 10,000 images per second in pulsed illumination mode. We have designed and built two optical configurations for the imager multiplexing system. The first one involves permanently splitting the subject light into multiple channels and placing a liquid crystal shutter in front of each imager. A prototype with three CCD imagers and shutters based on this configuration has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. The second configuration is more light-efficient in that it routes the entire subject light to each individual imager in sequence by using the liquid crystal cells as selectable binary switches. Despite some operational limitations, this method offers a solution when the available light, if subdivided among all the imagers, would not allow a sufficiently short exposure time.

  12. Representing and Retrieving Video Shots in Human-Centric Brain Imaging Space

    PubMed Central

    Han, Junwei; Ji, Xiang; Hu, Xintao; Zhu, Dajiang; Li, Kaiming; Jiang, Xi; Cui, Guangbin; Guo, Lei; Liu, Tianming

    2014-01-01

    Meaningful representation and effective retrieval of video shots in a large-scale database has been a profound challenge for the image/video processing and computer vision communities. A great deal of effort has been devoted to the extraction of low-level visual features such as color, shape, texture, and motion for characterizing and retrieving video shots. However, the accuracy of these feature descriptors is still far from satisfactory due to the well-known semantic gap. In order to alleviate the problem, this paper investigates a novel methodology of representing and retrieving video shots using human-centric high-level features derived in brain imaging space (BIS) where brain responses to the natural stimulus of video watching can be explored and interpreted. First, our recently developed Dense Individualized and Common Connectivity-based Cortical Landmarks (DICCCOL) system is employed to locate large-scale functional brain networks and their ROIs (regions of interest) that are involved in the comprehension of video stimulus. Then, functional connectivities between various functional ROI pairs are utilized as BIS features to characterize the brain's comprehension of video semantics. Next, an effective feature selection procedure is applied to learn the most relevant features while removing redundancy, which results in the formation of the final BIS features. Afterwards, a mapping from low-level visual features to high-level semantic features in the BIS is built via the Gaussian Process Regression (GPR) algorithm, and a manifold structure is then inferred in which video key frames are represented by the mapped feature vectors in the BIS. Finally, the manifold-ranking algorithm concerning the relationship among all data is applied to measure the similarity between key frames of video shots. Experimental results on the TRECVID 2005 dataset have demonstrated the superiority of the proposed work in comparison with traditional methods. PMID:23568507

  13. Acoustic multimode interference and self-imaging phenomena realized in multimodal phononic crystal waveguides

    NASA Astrophysics Data System (ADS)

    Zou, Qiushun; Yu, Tianbao; Liu, Jiangtao; Liu, Nianhua; Wang, Tongbiao; Liao, Qinghua

    2015-09-01

    We report an acoustic multimode interference effect and self-imaging phenomena in an acoustic multimode waveguide system which consists of M parallel phononic crystal waveguides (M-PnCWs). Results show that the self-imaging principle remains applicable for acoustic waveguides just as it does for optical multimode waveguides. To obtain the dispersions and the replicas of the input acoustic waves produced along the propagation direction, we applied the finite element method to M-PnCWs, which support M guided modes within the target frequency range. The simulation results show that single images (including direct and mirrored images) and N-fold images (N is an integer) are identified along the propagation direction, with asymmetric and symmetric incidence discussed separately. The simulated positions of the replicas agree well with the calculated values that are theoretically determined by the self-imaging conditions based on guided-mode propagation analysis. Moreover, potential applications of this self-imaging effect to acoustic wavelength de-multiplexing and beam splitting in the acoustic field are also presented.

  14. Comparison of Kodak Professional Digital Camera System images to conventional film, still video, and freeze-frame images

    NASA Astrophysics Data System (ADS)

    Kent, Richard A.; McGlone, John T.; Zoltowski, Norbert W.

    1991-06-01

    Electronic cameras provide near real time image evaluation with the benefits of digital storage methods for rapid transmission or computer processing and enhancement of images. But how does their image quality compare to that of conventional film? A standard Nikon F-3 35 mm SLR camera was transformed into an electro-optical camera by replacing the film back with Kodak's KAF-1400V (or KAF-1300L) megapixel CCD array detector back and a processing accessory. Images taken with these Kodak electronic cameras were compared to those using conventional films and to several still video cameras. Quantitative and qualitative methods were used to compare images from these camera systems. Images captured on conventional video analog systems provide a maximum of 450 - 500 TV lines of resolution depending upon the camera resolution, storage method, and viewing system resolution. The Kodak Professional Digital Camera System exceeded this resolution and more closely approached that of film.

  15. Negative refraction induced acoustic concentrator and the effects of scattering cancellation, imaging, and mirage

    NASA Astrophysics Data System (ADS)

    Wei, Qi; Cheng, Ying; Liu, Xiao-jun

    2012-07-01

    We present a three-dimensional acoustic concentrator capable of significantly enhancing the sound intensity in the compressive region with scattering cancellation, imaging, and mirage effects. The concentrator shell is built by isotropic gradient negative-index materials, which together with an exterior host medium slab constructs a pair of complementary media. The enhancement factor, which can approach infinity by tuning the geometric parameters, is always much higher than that of a traditional concentrator made by positive-index materials with the same size. The acoustic scattering theory is applied to derive the pressure field distribution of the concentrator, which is consistent with the numerical full-wave simulations. The inherent acoustic impedance match at the interfaces of the shell as well as the inverse processes of “negative refraction—progressive curvature—negative refraction” for arbitrary sound rays can exactly cancel the scattering of the concentrator. In addition, the concentrator shell can also function as an acoustic spherical magnifying superlens, which produces a perfect image with the same shape, with bigger geometric and acoustic parameters located at a shifted position. Then some acoustic mirages are observed whereby the waves radiated from (scattered by) an object located in the center region may seem to be radiated from (scattered by) its image. Based on the mirage effect, we further propose an intriguing acoustic transformer which can transform the sound scattering pattern of one object into another object at will with arbitrary geometric, acoustic, and location parameters.

  16. Endotracheal intubation confirmation based on video image classification using a parallel GMMs framework: a preliminary evaluation.

    PubMed

    Lederman, Dror

    2011-01-01

    In this paper, the problem of endotracheal intubation confirmation is addressed. Endotracheal intubation is a complex procedure which requires high skills and the use of secondary confirmation devices to ensure correct positioning of the tube. A novel confirmation approach, based on video image classification, is introduced. The approach is based on identification of specific anatomical landmarks, including the esophagus, upper trachea and main bifurcation of the trachea into the two primary bronchi ("carina"), as indicators of correct or incorrect tube insertion and positioning. Classification of the images is performed using a parallel Gaussian mixture models (GMMs) framework, which is composed of several GMMs, schematically connected in parallel, where each GMM represents a different imaging angle. The performance of the proposed approach was evaluated using a dataset of cow-intubation videos and a dataset of human-intubation videos. Each one of the video images was manually (visually) classified by a medical expert into one of three categories: upper-tracheal intubation, correct (carina) intubation, and esophageal intubation. The image classification algorithm was applied off-line using a leave-one-case-out method. The results show that the system correctly classified 1517 out of 1600 (94.8%) of the cow-intubation images, and 340 out of the 358 human images (95.0%). The classification results compared favorably with a "standard" GMM approach utilizing texture-based features, as well as with a state-of-the-art classification method, tested on the cow-intubation dataset. PMID:20878236
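The parallel-GMM decision rule described in the abstract can be sketched roughly as below, assuming scalar features and hypothetical mixture parameters for brevity; the paper's actual models operate on image feature vectors, and the scoring of a class by its best-matching angle model is our reading of the "parallel" arrangement.

```python
import math

def gauss_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def gmm_loglik(x, components):
    """Log-likelihood of scalar feature x under a 1-D Gaussian
    mixture given as (weight, mean, variance) triples."""
    return math.log(sum(w * math.exp(gauss_logpdf(x, m, v))
                        for w, m, v in components))

def classify(x, class_models):
    """Parallel-GMM decision: each class owns several GMMs (one per
    imaging angle, in the paper's terms); score a class by its
    best-matching angle model and pick the top-scoring class."""
    return max(class_models,
               key=lambda c: max(gmm_loglik(x, gmm)
                                 for gmm in class_models[c]))

# Toy models with made-up parameters (two viewing angles for carina).
models = {
    "carina":    [[(1.0, 0.0, 1.0)], [(1.0, 0.5, 1.0)]],
    "esophagus": [[(1.0, 5.0, 1.0)]],
}
label = classify(4.8, models)
```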

  17. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustics environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, hear what is going on in the environment, degrade crew performance and operations, and create habitability concerns. Superfluous noise emissions can also create the inability to hear alarms or other important auditory cues such as equipment malfunctioning. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustics environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, the "designing in" of acoustics in the development of hardware and systems, and by monitoring, testing and verifying the levels to ensure that they are acceptable.

  18. Computer Vision Tools for Finding Images and Video Sequences.

    ERIC Educational Resources Information Center

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  19. Tracking Energy Flow Using a Volumetric Acoustic Intensity Imager (VAIM)

    NASA Technical Reports Server (NTRS)

    Klos, Jacob; Williams, Earl G.; Valdivia, Nicolas P.

    2006-01-01

    A new measurement device has been invented at the Naval Research Laboratory which images instantaneously the intensity vector throughout a three-dimensional volume nearly a meter on a side. The measurement device consists of a nearly transparent spherical array of 50 inexpensive microphones optimally positioned on an imaginary spherical surface of radius 0.2 m. Front-end signal processing uses coherence analysis to produce multiple, phase-coherent holograms in the frequency domain, each related to references located on suspect sound sources in an aircraft cabin. The analysis uses either SVD or Cholesky decomposition methods using ensemble averages of the cross-spectral density with the fixed references. The holograms are mathematically processed using spherical NAH (nearfield acoustical holography) to convert the measured pressure field into a vector intensity field in the volume of maximum radius 0.4 m centered on the sphere origin. The utility of this probe is evaluated in a detailed analysis of a recent in-flight experiment in cooperation with Boeing and NASA on NASA's Aries 757 aircraft. In this experiment the trim panels and insulation were removed over a section of the aircraft and the bare panels and windows were instrumented with accelerometers to use as references for the VAIM. Results show excellent success at locating and identifying the sources of interior noise in-flight in the frequency range of 0 to 1400 Hz. This work was supported by NASA and the Office of Naval Research.

  20. Negative refraction imaging of acoustic metamaterial lens in the supersonic range

    SciTech Connect

    Han, Jianning; Wen, Tingdun; Yang, Peng; Zhang, Lu

    2014-05-15

    Acoustic metamaterials with a negative refractive index are the most promising route to overcoming the diffraction limit of acoustic imaging and achieving ultrahigh resolution. In this paper, we use a locally resonant phononic crystal as the unit cell to construct the acoustic negative-refraction lens. Based on the vibration model of the phononic crystal, negative effective parameters of the lens are obtained when it is excited near the system resonance frequency. Simulation results show that negative refraction of the acoustic lens can be achieved when a sound wave transmits through the phononic crystal plate. The patterns of the imaging field agree well with those of the incident wave, while the dispersion is very weak. The unit cell size in the simulation is 0.0005 m and the wavelength of the sound source is 0.02 m, which shows that acoustic signals can be manipulated through structures with dimensions much smaller than the wavelength of the incident wave.

  1. Breaking the acoustic diffraction limit in photoacoustic imaging with multiple speckle illumination

    NASA Astrophysics Data System (ADS)

    Chaigne, Thomas; Gateau, Jérôme; Allain, Marc; Katz, Ori; Gigan, Sylvain; Sentenac, Anne; Bossy, Emmanuel

    2016-03-01

    In deep photoacoustic imaging, resolution is inherently limited by acoustic diffraction, and ultrasonic frequencies cannot be arbitrarily increased because of attenuation in tissue. Here we report on the use of multiple speckle illumination to perform super-resolution photoacoustic imaging. We show that analysis of the speckle-induced second-order fluctuations of the photoacoustic signal, combined with deconvolution, makes it possible to resolve optically absorbing structures below the acoustic diffraction limit.
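A toy 1-D numerical sketch of the second-order-fluctuation idea (not the authors' reconstruction code): under random speckle illumination the mean image blurs two close absorbers into one plateau, while the variance image, whose effective point spread function is the square of the acoustic PSF, keeps a dip between them. All signal shapes and the PSF below are illustrative assumptions.

```python
import random

def convolve(signal, kernel):
    """Same-length 1-D convolution with zero padding."""
    n, k = len(signal), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            idx = i + j - half
            if 0 <= idx < n:
                acc += signal[idx] * kernel[j]
        out.append(acc)
    return out

def fluctuation_image(absorbers, psf, n_speckles=2000, seed=0):
    """Second-order (variance) photoacoustic image accumulated over
    many independent speckle illuminations of the absorber map."""
    rng = random.Random(seed)
    n = len(absorbers)
    mean = [0.0] * n
    mean_sq = [0.0] * n
    for _ in range(n_speckles):
        illum = [absorbers[i] * rng.random() for i in range(n)]
        p = convolve(illum, psf)  # acoustic blurring of the signal
        for i in range(n):
            mean[i] += p[i] / n_speckles
            mean_sq[i] += p[i] ** 2 / n_speckles
    return [mq - m ** 2 for mq, m in zip(mean_sq, mean)]

# Two point absorbers two pixels apart, PSF three pixels wide:
# the variance image shows a dip between them that the mean lacks.
absorbers = [0.0] * 11
absorbers[4] = absorbers[6] = 1.0
var_image = fluctuation_image(absorbers, [0.5, 1.0, 0.5])
```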

  2. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions of standard methods. VQone is distributed free of charge under the terms of the GNU General Public License and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page ( http://www.helsinki.fi/psychology/groups/visualcognition/ ). PMID:25595311

  3. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of the user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.

  4. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
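The merging step described above, combining the two directed feature-based masks with the undirected differencing mask, can be sketched as follows; the label names and the priority given to the directed labels are our assumptions, not the paper's exact scheme.

```python
def merge_change_masks(new_obj, vanished, differencing):
    """Combine two directed feature-based change masks with an
    undirected image-differencing mask into one labeled mask.

    Inputs are same-sized 2-D boolean grids; the output grid carries
    one of the labels 'new', 'vanished', 'changed', or 'none'.
    Directed evidence is assumed to take priority over undirected.
    """
    h, w = len(new_obj), len(new_obj[0])
    merged = [["none"] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if new_obj[y][x]:
                merged[y][x] = "new"
            elif vanished[y][x]:
                merged[y][x] = "vanished"
            elif differencing[y][x]:
                merged[y][x] = "changed"   # undirected evidence only
    return merged
```

A color mask for the human interpreter would then simply map each label to a display color.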

  5. Computer-enhanced video microscopy: digitally processed microscope images can be produced in real time.

    PubMed Central

    Walter, R J; Berns, M W

    1981-01-01

    Digital processing techniques can be used to greatly enhance the available information in an optical image. Although this technology has been routinely used in many fields for a number of years, little application of digital image-processing techniques has been made to the analysis and enhancement of the types of images seen most often by the research biologist. We describe here a computer-based video microscope system that is capable of performing extensive manipulation and enhancement of microscope images in real time. The types of manipulations possible with these techniques greatly surpass the enhancement capabilities of photographic or video techniques alone. The speed and flexibility of this system enables experimental manipulation of the microscopic specimen based on its live processed image. These features greatly extend the power and versatility of the light microscope. PMID:6947267
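Two staples of the kind of real-time video enhancement described here are background subtraction and lookup-table contrast stretching; the sketch below is a generic illustration of that combination, not the authors' hardware pipeline, and the 8-bit range and offset of 128 are assumptions.

```python
def build_stretch_lut(lo, hi):
    """Precompute a lookup table that linearly stretches the gray
    range [lo, hi] to [0, 255]; per-pixel work at video rate then
    reduces to a single table read."""
    lut = []
    for v in range(256):
        if v <= lo:
            lut.append(0)
        elif v >= hi:
            lut.append(255)
        else:
            lut.append(round((v - lo) * 255 / (hi - lo)))
    return lut

def enhance(frame, background, lut):
    """Subtract a stored background frame (re-centered at 128 so
    negative differences survive), then contrast-stretch via LUT."""
    return [[lut[max(0, min(255, p - b + 128))]
             for p, b in zip(prow, brow)]
            for prow, brow in zip(frame, background)]
```

Because both steps are per-pixel table operations, they map naturally onto the frame-rate digital hardware the abstract alludes to.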

  6. Analysis of Particle Image Velocimetry (PIV) Data for Acoustic Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    Acoustic velocity measurements were taken using Particle Image Velocimetry (PIV) in a Normal Incidence Tube configuration at various frequency, phase, and amplitude levels. This report presents the results of the PIV analysis and data reduction portions of the test and details the processing that was done. Estimates of lower measurement sensitivity levels were determined based on the PIV image quality, correlation, and noise level parameters used in the test. A comparison of the measurements with linear acoustic theory is presented. The onset of nonlinear, harmonic-frequency acoustic levels was also studied for various decibel and frequency levels ranging from 90 to 132 dB and 500 to 3000 Hz, respectively.
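The core of any PIV analysis is estimating particle displacement between two exposures by cross-correlating interrogation windows; a minimal 1-D sketch (the report's actual processing chain is not reproduced, and the window contents below are made up) might look like this:

```python
def displacement(window_a, window_b, max_shift=8):
    """Estimate the particle displacement between two PIV
    interrogation windows (1-D here for brevity) as the integer
    shift that maximizes their cross-correlation."""
    best_shift, best_score = 0, float("-inf")
    n = len(window_a)
    for s in range(-max_shift, max_shift + 1):
        score = sum(window_a[i] * window_b[i + s]
                    for i in range(n)
                    if 0 <= i + s < n)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# A particle-image pattern shifted right by 3 pixels correlates
# best at a shift of 3.
a = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 4, 1, 0, 0, 0, 0]
shift = displacement(a, b)
```

Velocity then follows as shift times the pixel pitch divided by the inter-exposure delay; sampling at fixed acoustic phases, as in the test described, yields the oscillatory acoustic velocity.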

  7. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr.; Younane Abousleiman

    2004-04-01

    The research during this project has concentrated on developing a correlation between rock deformation mechanisms and their acoustic velocity signature. This has included investigating: (1) the acoustic signature of drained and undrained unconsolidated sands, (2) the acoustic emission signature of deforming high porosity rocks (in comparison to their low porosity, high strength counterparts), (3) the effects of deformation on anisotropic elastic and poroelastic moduli, and (4) the acoustic tomographic imaging of damage development in rocks. Each of these four areas involves triaxial experimental testing of weak porous rocks or unconsolidated sand and the measurement of acoustic properties. The research is directed at determining the seismic velocity signature of damaged rocks so that 3-D or 4-D seismic imaging can be utilized to image rock damage. These four areas of study are described in the report: (1) Triaxial compression experiments have been conducted on unconsolidated Oil Creek sand at high confining pressures. (2) Initial experiments on measuring the acoustic emission activity from deforming high porosity Danian chalk were accomplished, and these indicate that the AE activity was of a very low amplitude. (3) A series of triaxial compression experiments was conducted to investigate the effects of induced stress on the anisotropy developed in dynamic elastic and poroelastic parameters in rocks. (4) Tomographic acoustic imaging was utilized to image the internal damage in a deforming porous limestone sample. Results indicate that the deformation damage induced in rocks during laboratory experimentation can be imaged tomographically in the laboratory. By extension, the results also indicate that 4-D seismic imaging of a reservoir may become a powerful tool for imaging reservoir deformation (including compaction and subsidence) and for imaging zones where drilling operations may encounter hazardous shallow water flows.

  8. Video Outside Versus Video Inside the Web: Do Media Setting and Image Size Have an Impact on the Emotion-Evoking Potential of Video?

    ERIC Educational Resources Information Center

    Verleur, Ria; Verhagen, Plon W.

    To explore the educational potential of video-evoked affective responses in a Web-based environment, the question was raised whether video in a Web-based environment is experienced differently from video in a traditional context. An experiment was conducted that studied the affect-evoking power of video segments in a window on a computer screen…

  9. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    PubMed

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework. PMID:25314715
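
The joint subspace learning step above maximizes the correlation between the low-level audiovisual features and the fMRI-derived features. The paper's exact formulation is not reproduced here; the following is a minimal sketch using classical canonical correlation analysis (CCA) as a stand-in, with all array names illustrative:

```python
import numpy as np

def cca_first_pair(X, Y, reg=1e-6):
    """Return projections (wx, wy) maximizing corr(X wx, Y wy).

    X: (n_samples, dx) low-level audiovisual features
    Y: (n_samples, dy) fMRI-derived features
    Solved by whitening both covariances; `reg` (Tikhonov term)
    stabilizes the Cholesky factorizations."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten: M = Lx^{-1} Cxy Ly^{-T}; its top singular pair gives the
    # leading canonical directions and correlation.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    wx = np.linalg.solve(Lx.T, U[:, 0])   # un-whiten back to feature space
    wy = np.linalg.solve(Ly.T, Vt[0])
    return wx, wy, s[0]
```

Once trained this way, memorability can be predicted from the audiovisual projection alone, which is the point of avoiding fMRI scans at test time.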

  10. Advances in low-power visible/thermal IR video image fusion hardware

    NASA Astrophysics Data System (ADS)

    Wolff, Lawrence B.; Socolinsky, Diego A.; Eveland, Christopher K.; Reese, C. E.; Bender, E. J.; Wood, M. V.

    2005-03-01

    Equinox Corporation has developed two new video board products for real-time image fusion of visible (or intensified visible/near-infrared) and thermal (emissive) infrared video. These products can provide unique capabilities to the dismounted soldier, maritime/naval operations, and unmanned aerial vehicles (UAVs) with low-power, lightweight, compact, and inexpensive FPGA video fusion hardware. For several years Equinox Corporation has been studying and developing image fusion methodologies using the complementary modalities of the visible and thermal infrared wavebands, including applications to face recognition, tracking, sensor development, and fused image visualization. The video board products incorporate Equinox's proprietary image fusion algorithms into an FPGA architecture with embedded programmable capability. Currently included are (1) user-interactive image fusion algorithms that go significantly beyond standard "A+B" fusion, providing an intuitive color visualization invariant to distracting illumination changes, (2) generalized image co-registration to compensate for parallax, scale, and rotation differences between visible/intensified and thermal IR, as well as non-linear optical and display distortion, and (3) automatic gain control (AGC) for dynamic range adaptation.
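
Equinox's algorithms are proprietary; as a rough illustration of the generic building blocks the abstract names (per-frame AGC followed by a fusion rule richer than plain "A+B" averaging), with all weights and names invented for the sketch:

```python
import numpy as np

def agc(frame, lo=1.0, hi=99.0):
    """Automatic gain control: stretch the central percentile range to [0, 1]."""
    a, b = np.percentile(frame, [lo, hi])
    return np.clip((frame - a) / max(b - a, 1e-9), 0.0, 1.0)

def fuse_visible_thermal(vis, ir, w=0.6):
    """Toy color fusion: the visible band drives luminance, the thermal band
    pushes warm targets toward red and away from blue (illustrative scheme,
    not the product's algorithm)."""
    v, t = agc(vis), agc(ir)
    lum = w * v + (1 - w) * t                       # plain "A+B"-style luminance
    rgb = np.stack([np.clip(lum + 0.4 * t, 0, 1),   # warm targets glow red
                    lum,
                    np.clip(lum - 0.4 * t, 0, 1)], axis=-1)
    return rgb
```

A fixed color assignment like this is one simple way to keep the rendering stable under illumination changes, since the thermal channel is unaffected by scene lighting.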

  11. Accuracy of video imaging for predicting the soft tissue profile after mandibular set-back surgery.

    PubMed

    Kazandjian, S; Sameshima, G T; Champlin, T; Sinclair, P M

    1999-04-01

    The purpose of this study was to compare the accuracy of two video-imaging programs for predicting the soft tissue outcomes of mandibular set-back surgery for patients with skeletal class III malocclusion. The sample consisted of 30 previously treated, nongrowing, white patients who had undergone isolated mandibular set-back surgery. An objective comparison was made of each program's cephalometric prediction using a customized analysis, as well as a subjective comparison of the predicted images as evaluated by a panel of six raters. The results showed that both programs produced similar cephalometric and video image predictions. The cephalometric visual treatment objective predictions were found to be most accurate in the horizontal plane; approximately 30% of cases showed errors greater than 2.0 mm, whereas in the vertical plane, the error rate was greater (50%). The resulting video image predictions were judged by the panel as being in the "fair" category. A particular problem was noted when significant vertical compression of the soft tissue images was required. Video imaging was suitable for patient education but not accurate enough for detailed diagnosis and treatment planning. PMID:10194281

  12. Acoustic and optical borehole-wall imaging for fractured-rock aquifer studies

    USGS Publications Warehouse

    Williams, J.H.; Johnson, C.D.

    2004-01-01

    Imaging with acoustic and optical televiewers results in continuous and oriented 360° views of the borehole wall from which the character, relation, and orientation of lithologic and structural planar features can be defined for studies of fractured-rock aquifers. Fractures are more clearly defined on acoustic images than on optical images under a wider range of conditions, including dark-colored rocks, cloudy borehole water, and coated borehole walls. However, optical images allow for the direct viewing of the character of, and relation between, lithology, fractures, foliation, and bedding. The most powerful approach is the combined application of acoustic and optical imaging with integrated interpretation. Imaging of the borehole wall provides information useful for the collection and interpretation of flowmeter and other geophysical logs, core samples, and hydraulic and water-quality data from packer testing and monitoring. © 2003 Elsevier B.V. All rights reserved.

  13. High-quality photoacoustic imaging by using of concentration-adjustable glycerin as an acoustic couplant

    NASA Astrophysics Data System (ADS)

    Yang, Sihua; Gu, Huaimin

    2007-01-01

    The influences of a mismatch of ultrasonic propagation velocities on photoacoustic imaging are studied. Concentration-adjustable glycerin is used as an ultrasonic couplant to match the ultrasonic velocities in different media in order to eliminate acoustic refraction, reduce acoustic reflection, and rectify the acoustic path difference. Two biological phantoms are tested using water and glycerin as the ultrasonic couplant, respectively. The spatial resolution of the reconstructed image is estimated by experimental evaluation to be 0.12 mm. The experimental results demonstrate that high-quality photoacoustic imaging can be obtained by matching the ultrasonic propagation velocities in different media. The contrast of the reconstructed image is significantly improved and image artifacts are obviously reduced after matching the ultrasonic velocities. The method has the potential to promote photoacoustic imaging as a clinical diagnostic technique.
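
The refraction and path-difference errors being corrected follow from Snell's law applied to sound; a small illustration (the velocities below are generic, not the paper's measured values):

```python
import math

def refracted_angle(theta1_deg, c1, c2):
    """Snell's law for acoustics: sin(theta2) / sin(theta1) = c2 / c1.
    Returns None beyond the critical angle (total internal reflection)."""
    s = math.sin(math.radians(theta1_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

def time_of_flight(d1, c1, d2, c2):
    """Straight-ray travel time across two layers of thickness d1 and d2."""
    return d1 / c1 + d2 / c2
```

Tuning the glycerin concentration drives the couplant velocity toward the tissue velocity, so the refracted angle approaches the incident angle and the two-layer travel time collapses to a single-medium value, which is what removes the reconstruction artifacts.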

  14. Inverse image alignment method for image mosaicing and video stabilization in fundus indocyanine green angiography under confocal scanning laser ophthalmoscope.

    PubMed

    Zhou, Yongjin; Xue, Hui; Wan, Mingxi

    2003-01-01

    An efficient image registration algorithm, the inverse compositional image alignment method based on minimizing the sum of squared differences (SSD) between images, is applied to fundus blood vessel angiography under a confocal scanning laser ophthalmoscope to build image mosaics that offer a larger field of view without loss of resolution, to assist diagnosis. Furthermore, based on a similar technique, an angiography video stabilization algorithm is implemented for fundus documentation. The underlying models of motion between images and the corresponding convergence criteria are also discussed. The experimental results on fundus images demonstrate the effectiveness of the registration scheme. PMID:14575786
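
The inverse compositional formulation precomputes the template gradient and Hessian so each iteration needs only a warp and a small linear solve; a minimal translation-only sketch of the SSD minimization (not the authors' implementation, which handles richer motion models):

```python
import numpy as np

def align_translation(template, image, iters=100):
    """Inverse compositional Lucas-Kanade for a pure translation warp.

    Returns p = (dy, dx) such that image[y + dy, x + dx] ~= template[y, x].
    The steepest-descent images and the 2x2 Hessian are computed once on
    the template; only the error image changes per iteration."""
    gy, gx = np.gradient(template.astype(float))
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)   # steepest-descent images
    H = J.T @ J                                      # constant Hessian
    p = np.zeros(2)
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    for _ in range(iters):
        # Nearest-neighbor warp of the input image by the current estimate.
        wy = np.clip((ys + p[0]).round().astype(int), 0, image.shape[0] - 1)
        wx = np.clip((xs + p[1]).round().astype(int), 0, image.shape[1] - 1)
        err = (image[wy, wx] - template).ravel()
        dp = np.linalg.solve(H, J.T @ err)
        p -= dp   # compose with the inverted incremental warp: p <- p - dp
        if np.linalg.norm(dp) < 1e-3:
            break
    return p
```

A production mosaicing pipeline would add subpixel interpolation, a coarse-to-fine pyramid, and robust weighting; the sketch recovers integer shifts on smooth images.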

  15. OASIS in the sea: Measurement of the acoustic reflectivity of zooplankton with concurrent optical imaging

    NASA Astrophysics Data System (ADS)

    Jaffe, J. S.; Ohman, M. D.; De Robertis, A.

    A new instrument, the Optical-Acoustic Submersible Imaging System (OASIS), has been developed for three-dimensional acoustic tracking of zooplankton with concurrent optical imaging to verify the identity of the insonified organisms. OASIS also measures in situ target strengths (TS) of freely swimming zooplankton and nekton of known identity and 3-D orientation. The system consists of a three-dimensional acoustic imaging system (FishTV), a sensitive optical CCD camera with red-filtered strobe illumination, and ancillary oceanographic sensors. The sonar triggers the acquisition of an optical image when it detects the presence of a significant target in the precise location where the camera, strobe and sonar are co-registered. Acoustic TS can then be related to the optical image, which permits identification of the animal and its 3-D aspect. The system was recently deployed (August 1996) in Saanich Inlet, B.C., Canada. Motile zooplankton and nekton were imaged with no evidence of reaction to or avoidance of the OASIS instrument package. Target strengths of many acoustic reflectors were recorded in parallel with the optical images, triggered by the presence of an animal in the correct location of the sonar system. Inspection of the optical images, corroborated with zooplankton sampling with a MOCNESS net, revealed that the joint optically and acoustically sensed taxa at the site were the euphausiid Euphausia pacifica, the gammarid amphipod Orchomene obtusa, and a gadid fish. The simultaneous optical and acoustic images permitted an exact correlation of TS and taxa. Computer simulations from a model of the backscattered strength from euphausiids are in good agreement with the observed data.
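
For reference, in situ target strength is conventionally expressed in decibels relative to a 1 m² backscattering cross-section:

```python
import math

def target_strength_db(sigma_bs_m2):
    """TS = 10 log10(sigma_bs / 1 m^2), with sigma_bs the backscattering
    cross-section of the insonified animal."""
    return 10.0 * math.log10(sigma_bs_m2)
```

A euphausiid-sized scatterer with sigma_bs on the order of 1e-8 m² would thus register near -80 dB (a representative order of magnitude, not a value from this study).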

  16. The path to COVIS: A review of acoustic imaging of hydrothermal flow regimes

    NASA Astrophysics Data System (ADS)

    Bemis, Karen G.; Silver, Deborah; Xu, Guangyu; Light, Russ; Jackson, Darrell; Jones, Christopher; Ozer, Sedat; Liu, Li

    2015-11-01

    Acoustic imaging of hydrothermal flow regimes started with the incidental recognition of a plume on a routine sonar scan for obstacles in the path of the human-occupied submersible ALVIN. Developments in sonar engineering, acoustic data processing and scientific visualization have been combined to develop technology which can effectively capture the behavior of focused and diffuse hydrothermal discharge. This paper traces the development of these acoustic imaging techniques for hydrothermal flow regimes from their conception through to the development of the Cabled Observatory Vent Imaging Sonar (COVIS). COVIS has monitored such flow eight times a day for several years. Successful acoustic techniques for estimating plume entrainment, bending, vertical rise, volume flux, and heat flux are presented as is the state-of-the-art in diffuse flow detection.
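
The heat-flux estimates mentioned above are, at their core, advective transport calculations; a generic sketch (the standard oceanographic formula, not COVIS's specific inversion):

```python
def heat_flux_mw(volume_flux_m3s, delta_t_k, rho=1000.0, cp=4200.0):
    """Advective heat flux Q = rho * cp * V * dT, returned in megawatts.

    volume_flux_m3s: plume volume flux (m^3/s), e.g. inferred acoustically;
    delta_t_k: plume temperature anomaly relative to ambient seawater (K);
    rho, cp: assumed nominal density and specific heat of seawater."""
    return rho * cp * volume_flux_m3s * delta_t_k / 1e6
```

In acoustic monitoring the volume flux comes from imaged plume geometry and rise velocity, so the remaining unknown is the temperature anomaly.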

  17. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  18. Progress in passive submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1×2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden threat detection and illustrate possible application scenarios.
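
The quoted 1-2 cm resolution at stand-off range is consistent with a simple diffraction estimate; a quick check (the aperture diameter below is an assumed value, not taken from the paper):

```python
import math

def spot_size_m(freq_hz, aperture_m, distance_m):
    """Rayleigh-criterion resolution, 1.22 * lambda / D, projected to range."""
    lam = 299_792_458.0 / freq_hz   # wavelength: ~0.86 mm at 350 GHz
    return 1.22 * lam / aperture_m * distance_m
```

With an assumed 0.5 m reflector at 8 m stand-off, `spot_size_m(350e9, 0.5, 8.0)` comes out near 1.7 cm, inside the reported 1-2 cm band.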

  19. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. In recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  20. Enhancing thermal video using a public database of images

    NASA Astrophysics Data System (ADS)

    Qadir, Hemin; Kozaitis, S. P.; Ali, Ehsan

    2014-05-01

    We present a system to display nighttime imagery with natural colors using a public database of images. We initially combined two spectral bands of images, thermal and visible, to enhance night vision imagery; however, the fused image gave an unnatural color appearance. Therefore, a color transfer based on a look-up table (LUT) was used to replace the false color appearance with a colormap derived from a daytime reference image obtained from a public database using the GPS coordinates of the vehicle. Because of the computational demand of deriving the colormap from the reference image, we created an additional local database of colormaps. Reference images from the public database were compared to a compact local database to retrieve one of a limited number of colormaps that represented several driving environments. Each colormap in the local database was stored with the image from which it was derived. To retrieve a colormap, we compared the histogram of the fused image with the histograms of images in the local database. The colormap of the best match was then used for the fused image. Continuously selecting and applying colormaps using this approach offered a convenient way to color night vision imagery.
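
The retrieval step, matching the fused frame's histogram against a small local database and applying the winning colormap as a LUT, can be sketched as follows (data structures and names are illustrative, not the paper's):

```python
import numpy as np

def histogram(img, bins=32):
    """Normalized intensity histogram of an 8-bit-range image."""
    h, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return h

def best_colormap(fused, database):
    """database: list of (reference_histogram, lut) pairs, lut shaped (256, 3).
    Pick the LUT whose stored histogram is closest (L1) to the fused frame's."""
    h = histogram(fused)
    dists = [np.abs(h - ref_h).sum() for ref_h, _ in database]
    return database[int(np.argmin(dists))][1]

def apply_lut(fused, lut):
    """Index the (256, 3) color LUT by 8-bit fused intensity."""
    return lut[fused.astype(np.uint8)]
```

Keeping only a handful of environment-representative colormaps is what makes this cheap enough to run per frame.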

  1. Image enhancement using a range gated MCPII video system with a 180-ps FWHM shutter

    SciTech Connect

    Thomas, M.C.; Yates, G.J.; Zadgarino, P.

    1995-09-01

    The video image of a target submerged in a scattering medium was improved through the use of range gating techniques. The target, an Air Force resolution chart, was submerged in 18 in. of a colloidal suspension of tincture green soap in water. The target was illuminated with pulsed light from a Raman-shifted, frequency-doubled Nd:YAG laser having a wavelength of 559 nm and a width of 20 ps FWHM. The laser light reflected by the target, along with the light scattered by the soap, was imaged onto a microchannel-plate image intensifier (MCPII). The output from the MCPII was then recorded with an RS-170 video camera and a video digitizer. The MCPII was gated on with a pulse synchronously timed to the laser pulse. The relative timing between the reflected laser pulse and the shuttering of the MCPII determined the distance to the imaged region. The resolution of the image was influenced by the MCPII's shutter time. A comparison was made between the resolution of images obtained with 6 ns, 500 ps, and 180 ps FWHM (8 ns, 750 ps, and 250 ps off-to-off) shutter times. It was found that the image resolution was enhanced by using the faster shutter, since the longer exposures also allowed light scattered by the water to be recorded. The presence of scattered light in the image increased the noise, thereby reducing the contrast and the resolution.
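
The mapping from gate timing to imaged distance, and from shutter width to the thickness of the selected depth slice, follows from the round-trip speed of light in the medium (here approximated by water's refractive index):

```python
def gate_delay_s(distance_m, n=1.33):
    """Round-trip optical delay to a target at distance_m in a medium of
    refractive index n (water assumed for the soap suspension)."""
    c = 299_792_458.0
    return 2.0 * distance_m * n / c

def range_slice_m(shutter_s, n=1.33):
    """Depth interval selected by a shutter gate of the given duration."""
    c = 299_792_458.0
    return shutter_s * c / (2.0 * n)
```

For the 18 in. (0.457 m) tank the round-trip delay is about 4.1 ns; a 180 ps shutter selects roughly a 2 cm depth slice versus about 68 cm for 6 ns, which is why the faster gate rejects far more water-scattered light.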

  2. PhotoAcoustic-guided Focused UltraSound imaging (PAFUSion) for reducing reflection artifacts in photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Singh, Mithun K.; Steenbergen, Wiendelt

    2015-07-01

    Reflection artifacts caused by acoustic reflectors are an important problem in reflection-mode photoacoustic imaging. The light absorbed by skin and superficial optical absorbers may produce high photoacoustic signals, which traverse into the tissue and are reflected from structures having different acoustic impedance. These reflected photoacoustic signals, when reconstructed, may appear in the region of interest, which complicates interpretation of the images. We propose a novel method to identify and reduce reflection artifacts in photoacoustic images by making use of PhotoAcoustic-guided Focused UltraSound (PAFUSion). Our method ultrasonically mimics the photoacoustic image formation process and thus delivers a clinically feasible way to reduce reflection artifacts. Simulation and phantom measurement results are presented to demonstrate the validity and impact of this method. Results show that the PAFUSion technique can identify and differentiate reflection signals from the signals of interest and thus shows good potential for improving photoacoustic imaging of deep tissue.

  3. VIDEO IMAGE ANALYSES OF THE CROSS-STREAM DISTRIBUTION OF SMOKE IN THE NEAR WAKE OF A BUILDING

    EPA Science Inventory

    In a wind-tunnel study, recorded video images of the top view of smoke dispersion in the wake of a building were analyzed. A continuous source of smoke was emitted at floor level, midway along the leeward side of the building. The technique and usefulness of analyzing video image...

  4. A review of ultrasound common carotid artery image and video segmentation techniques.

    PubMed

    Loizou, Christos P

    2014-12-01

    The determination of the wall thickness [intima-media thickness (IMT)], the delineation of the atherosclerotic carotid plaque, the measurement of the diameter of the common carotid artery (CCA), as well as the grading of its stenosis are important for the evaluation of atherosclerotic disease. All these measurements are also considered to be significant markers for the clinical evaluation of the risk of stroke. A number of CCA segmentation techniques have been proposed in the last few years, either for the segmentation of the intima-media complex (IMC), the lumen of the CCA, or the atherosclerotic carotid plaque from ultrasound images or videos of the CCA. The present review study presents and discusses the methods and systems introduced so far in the literature for performing automated or semi-automated segmentation in ultrasound images or videos of the CCA. These are based on edge detection, active contours, level sets, dynamic programming, local statistics, the Hough transform, statistical modeling, neural networks, and integrations of the above methods. Furthermore, the performance of these systems is evaluated and discussed based on various evaluation metrics. We finally propose the best performing methods for the segmentation of the IMC and the atherosclerotic carotid plaque in ultrasound images and videos. We end the present review study with a discussion of the different image and video CCA segmentation techniques, future perspectives, and further extension of these techniques to ultrasound video segmentation and wall tracking of the CCA. Future work on the segmentation of the CCA will focus on the development of integrated systems for the complete segmentation of the CCA as well as the segmentation and motion analysis of the plaque and/or the IMC from ultrasound video sequences of the CCA. These systems will improve the evaluation, follow-up, and treatment of patients affected by advanced atherosclerotic disease.
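
As a toy illustration of the edge-detection family of IMT methods covered by such reviews (a deliberately simplified, hypothetical one-profile estimator, not any reviewed system):

```python
import numpy as np

def imt_from_column_mm(column, mm_per_px):
    """Toy edge-based IMT estimate on one vertical intensity profile of a
    longitudinal CCA image: take the two strongest rising edges as the
    lumen-intima and media-adventitia interfaces and return their separation."""
    g = np.diff(column.astype(float))   # intensity gradient along the profile
    i, j = np.argsort(g)[-2:]           # indices of the two strongest edges
    return abs(int(i) - int(j)) * mm_per_px
```

Real systems average such estimates along the arterial wall and regularize them with snakes, level sets, or statistical models, precisely because single-profile edges are noisy in ultrasound.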

  5. Evaluation of the Accuracy of Grading Indirect Ophthalmoscopy Video Images for Retinopathy of Prematurity Screening

    PubMed Central

    Prakalapakorn, Sasapin G.; Wallace, David K.; Dolland, Riana S.; Freedman, Sharon F.

    2015-01-01

    Purpose To determine whether digital retinal images obtained using a video indirect ophthalmoscopy system (Keeler) can be accurately graded for zone, stage, and plus or pre-plus disease and used to screen for type 1 retinopathy of prematurity (ROP). Methods We retrospectively reviewed the charts of 114 infants who had retinal video images acquired using the Keeler system during routine ROP examinations. Two masked ophthalmologists (1 expert and 1 non-expert in ROP screening) graded these videos for image quality, zone, stage, and pre-plus or plus disease. We compared the ophthalmologists’ grades of the videos against the clinical examination results, which served as the reference standard. We then determined the sensitivity/specificity of 2 predefined criteria for referral in detecting disease requiring treatment (i.e. type 1 ROP). Results Of images the expert considered fair or good quality (n=68), the expert and non-expert correctly identified zone (75% vs. 74%, respectively), stage (75% vs. 40%, respectively), and the presence of pre-plus or plus disease in 79% of images. Expert and non-expert judgment of prethreshold disease, pre-plus or plus disease had 100% sensitivity and 75% vs. 79% specificity, respectively, for detecting type 1 ROP. Expert and non-expert judgment of pre-plus or plus disease had 92% vs. 100% sensitivity and 77% vs. 82% specificity, respectively, for detecting type 1 ROP. Conclusions High-quality retinal video images can be read with high sensitivity and acceptable specificity to screen for type 1 ROP. Grading for pre-plus or plus disease alone may be sufficient for the purpose of ROP screening. PMID:25608280
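
The reported figures use the standard screening definitions; for reference (the counts below are illustrative, not the study's):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```

For screening, sensitivity is the quantity that must stay at or near 100%, since a missed case of type 1 ROP is far costlier than an unnecessary referral.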

  6. Fusion of intraoperative cone-beam CT and endoscopic video for image-guided procedures

    NASA Astrophysics Data System (ADS)

    Daly, M. J.; Chan, H.; Prisman, E.; Vescan, A.; Nithiananthan, S.; Qiu, J.; Weersink, R.; Irish, J. C.; Siewerdsen, J. H.

    2010-02-01

    Methods for accurate registration and fusion of intraoperative cone-beam CT (CBCT) with endoscopic video have been developed and integrated into a system for surgical guidance that accounts for intraoperative anatomical deformation and tissue excision. The system is based on a prototype mobile C-Arm for intraoperative CBCT that provides low-dose 3D image updates on demand with sub-mm spatial resolution and soft-tissue visibility, and also incorporates subsystems for real-time tracking and navigation, video endoscopy, deformable image registration of preoperative images and surgical plans, and 3D visualization software. The position and pose of the endoscope are geometrically registered to 3D CBCT images by way of real-time optical tracking (NDI Polaris) for rigid endoscopes (e.g., head and neck surgery), and electromagnetic tracking (NDI Aurora) for flexible endoscopes (e.g., bronchoscopes, colonoscopes). The intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) parameters of the endoscopic camera are calibrated from images of a planar calibration checkerboard (2.5×2.5 mm² squares) obtained at different perspectives. Video-CBCT registration enables a variety of 3D visualization options (e.g., oblique CBCT slices at the endoscope tip, augmentation of video with CBCT images and planning data, virtual reality representations of CBCT [surface renderings]), which can reveal anatomical structures not directly visible in the endoscopic view - e.g., critical structures obscured by blood or behind the visible anatomical surface. Video-CBCT fusion is evaluated in pre-clinical sinus and skull base surgical experiments, and is currently being incorporated into an ongoing prospective clinical trial in CBCT-guided head and neck surgery.
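
The calibrated intrinsic and extrinsic parameters enter video-CBCT registration through the standard pinhole projection with lens distortion; a minimal sketch with a single radial distortion coefficient (a simplification of full calibration models):

```python
import numpy as np

def project_points(points_3d, K, R, t, k1=0.0):
    """Project world points into the image with a pinhole camera.

    points_3d: (N, 3) world coordinates; K: 3x3 intrinsics;
    R, t: extrinsic rotation and translation; k1: radial distortion."""
    Xc = points_3d @ R.T + t            # world -> camera frame
    x = Xc[:, :2] / Xc[:, 2:3]          # perspective divide (normalized coords)
    r2 = (x ** 2).sum(1, keepdims=True)
    xd = x * (1.0 + k1 * r2)            # first-order radial distortion
    uv = xd @ K[:2, :2].T + K[:2, 2]    # apply focal lengths and principal point
    return uv
```

Calibration from a checkerboard amounts to fitting K, R, t, and the distortion terms so that projections of the known corner grid match their detected image positions.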

  7. RGB representation of two-dimensional multi-spectral acoustic data for object surface profile imaging

    NASA Astrophysics Data System (ADS)

    Guo, Xinhua; Wada, Yuji; Mizuno, Yosuke; Nakamura, Kentaro

    2013-10-01

    Conventionally, acoustic imaging has been performed using a single frequency or a limited number of frequencies. However, the rich information on surface profiles, structures hidden under surfaces, and material properties of objects may exhibit frequency dependence. In this study, acoustic imaging of an object surface was conducted over a wide frequency range with a fine frequency step, and a method for displaying the acquired multi-spectral acoustic data was proposed. A complicated rigid surface with different profiles was illuminated by sound waves swept over the frequency range from 1 to 20 kHz in 30 Hz steps. The reflected sound was two-dimensionally recorded using a scanning microphone and processed using a holographic reconstruction method. The two-dimensional distributions of the obtained sound pressure at each frequency were defined as ‘multi-spectral acoustic imaging data’. Next, the multi-spectral acoustic data were transformed into a single RGB-based picture for easy understanding of the surface characteristics. The acoustic frequencies were allocated to red, green and blue using the RGB filter technique. The depths of the grooves were identified by their colours in the RGB image.
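
The RGB allocation step can be sketched by averaging the low, mid, and high thirds of the swept band into the red, green, and blue channels (band edges illustrative; the paper's filter design may differ):

```python
import numpy as np

def spectra_to_rgb(stack, freqs_hz):
    """stack: (n_freqs, H, W) sound-pressure magnitudes; freqs_hz: (n_freqs,).
    Average the low/mid/high thirds of the band into R/G/B and normalize.
    Assumes every third of the band contains at least one frequency."""
    lo, hi = freqs_hz.min(), freqs_hz.max()
    edges = np.linspace(lo, hi, 4)      # three equal frequency bands
    rgb = np.zeros(stack.shape[1:] + (3,))
    for c in range(3):
        sel = (freqs_hz >= edges[c]) & (freqs_hz <= edges[c + 1])
        rgb[..., c] = stack[sel].mean(axis=0)
    m = rgb.max()
    return rgb / m if m > 0 else rgb
```

With the paper's 1-20 kHz sweep, a surface feature that reflects selectively at low frequencies would appear red and one reflecting at high frequencies blue, so frequency-dependent groove depths acquire distinct hues.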

  8. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle from side-view camera images, the velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked across at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object space information in a mathematical sense, which is provided by the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems of using side-view camera images mentioned here. PMID:22399909
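
In the simplest calibrated case, the second step reduces to scaling the pixel displacement by a known image-to-object scale and dividing by the elapsed time; the paper derives this scale from the full calibration and orientation parameters, so the constant-scale sketch below is a flat, fronto-parallel simplification:

```python
import math

def speed_kmh(dx_px, dy_px, n_frames, fps, metres_per_px):
    """Vehicle speed from the pixel displacement of one tracked point.

    dx_px, dy_px: image-space displacement accumulated over n_frames frames;
    metres_per_px: assumed constant image-to-object scale (hypothetical)."""
    dist_m = math.hypot(dx_px, dy_px) * metres_per_px
    dt_s = n_frames / fps
    return dist_m / dt_s * 3.6   # m/s -> km/h
```

For example, a 50-pixel displacement over 5 frames at 25 fps with a 2 cm/pixel scale corresponds to 5 m/s, i.e. 18 km/h.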

  9. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content, such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals, and scores. These efforts, however, do not address retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments, and looking for 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or the 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  10. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2000-12-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for the 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  11. Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization

    PubMed Central

    Dow, Ximeng Y.; Sullivan, Shane Z.; Muir, Ryan D.; Simpson, Garth J.

    2016-01-01

    A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue. PMID:26176453

  12. Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization.

    PubMed

    Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J

    2015-07-15

    A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue. PMID:26176453

  13. Matters of Light & Depth: Creating Memorable Images for Video, Film, & Stills through Lighting.

    ERIC Educational Resources Information Center

    Lowell, Ross

    Written for students, professionals with limited experience, and professionals who encounter lighting difficulties, this book encourages sensitivity to light in its myriad manifestations: it offers advice in creating memorable images for video, film, and stills through lighting. Chapters in the book are: (1) "Lights of Passage: Basic Theory and…

  14. Construct Implications of Including Still Image or Video in Computer-Based Listening Tests

    ERIC Educational Resources Information Center

    Ockey, Gary J.

    2007-01-01

    Over the past decade, listening comprehension tests have been converting to computer-based tests that include visual input. However, little research is available to suggest how test takers engage with different types of visuals on such tests. The present study compared a series of still images to video in academic computer-based tests to determine…

  15. Acoustic lens-based swimmer's sonar

    NASA Astrophysics Data System (ADS)

    Linnenbrink, Thomas E.; Desilets, Charles S.; Folds, Donald L.; Quick, Marshall K.

    1999-07-01

    A new high-resolution imaging sonar is being developed for use by swimmers to identify objects in turbid water or under low-light conditions. Beam forming for both the transmit and receive functions is performed with acoustic lenses. The acoustic image is focused on an acoustic retina, or focal plane. An acoustic video converter converts the acoustic image to an electronic form suitable for display with conventional electronics. The image will be presented to the swimmer as a heads-up display on the face of his or her mask. The system will provide 1 cm resolution in range and cross range at distances of 1-5 meters from the object. A longer-range search mode is being explored. Laboratory prototypes of key components have been fabricated and evaluated. Results to date are promising.

  16. Progress toward a video-rate, passive millimeter-wave imager for brownout mitigation

    NASA Astrophysics Data System (ADS)

    Mackrides, Daniel G.; Schuetz, Christopher A.; Martin, Richard D.; Dillon, Thomas E.; Yao, Peng; Prather, Dennis W.

    2011-05-01

    Currently, brownout is the single largest contributor to military rotary-wing losses. Millimeter-wave radiation penetrates these dust clouds effectively, thus millimeter-wave imaging could provide pilots with valuable situational awareness during hover, takeoff, and landing operations. Herein, we detail efforts towards a passive, video-rate imager for use as a brownout mitigation tool. The imager presented herein uses a distributed-aperture, optically-upconverted architecture that provides real-time, video-rate imagery with minimal size and weight. Specifically, we detail phenomenology measurements in brownout environments, show developments in enabling component technologies, and present results from a 30-element aperiodic array imager that has recently been fabricated.

  17. Class Energy Image analysis for video sensor-based gait recognition: a review.

    PubMed

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches. PMID:25574935
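The best-known representation in this family is the Gait Energy Image: the pixel-wise average of aligned binary silhouettes over one gait cycle. A minimal sketch, assuming silhouette extraction and alignment are already done:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image (GEI): pixel-wise mean of aligned binary
    silhouette frames over one gait cycle. Bright pixels mark body parts
    that are static across the cycle; grey pixels mark moving parts."""
    return np.asarray(silhouettes, dtype=float).mean(axis=0)
```

The resulting single image is then used as the feature template for recognition.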

  18. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    PubMed Central

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches. PMID:25574935

  19. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
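The pixel-wise median step can be sketched as follows. This is a simplified sketch: registration is assumed already done, and nearest-neighbour upsampling stands in for the bi-cubic interpolation used in the paper.

```python
import numpy as np

def median_super_resolution(frames, scale=2):
    """Coarse-to-fine SR sketch: each (already registered) low-resolution
    frame is upsampled, then the refined image is the pixel-wise median of
    the stack -- the median rejects outlier values from noisy frames."""
    up = [np.kron(np.asarray(f, float), np.ones((scale, scale)))
          for f in frames]                       # nearest-neighbour upsample
    return np.median(np.stack(up), axis=0)
```

Because it is a single pass with no iteration, this scheme keeps the real-time character highlighted in the abstract.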

  20. High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2011-03-01

    Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, in image-based video-CBCT registration the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.

  1. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.
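Time-reversal invariance can be illustrated in one dimension: a pulse recorded through different propagation delays, reversed in time, and re-propagated through the same delays refocuses at a single instant. A toy sketch (the discrete sample delays are illustrative, not from the paper):

```python
import numpy as np

def time_reversal_refocus(recordings, delays):
    """Reverse each sensor recording in time and re-propagate it through
    the same delay; by time-reversal invariance the contributions realign
    and sum coherently at the time-mirrored source sample."""
    out = np.zeros_like(recordings[0], dtype=float)
    for rec, d in zip(recordings, delays):
        out += np.roll(rec[::-1], d)
    return out

# Simulate a unit pulse emitted at sample 10 reaching three sensors after
# different delays, then refocus it.
n, src = 64, 10
pulse = np.zeros(n)
pulse[src] = 1.0
delays = [3, 7, 12]
recs = [np.roll(pulse, d) for d in delays]
focus = time_reversal_refocus(recs, delays)
# all three contributions align at the mirrored source sample n - 1 - src
```

The coherent peak in `focus` is what lets the TLR method localize the scattering object; the 2D imaging case replaces the scalar delays with full wave back-propagation.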

  2. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  3. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural detail of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used by transportation agencies to survey the street view and roadside transportation infrastructure, such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of overhead and roadside traffic signs can be obtained according to the setup specifications of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the rate of detecting
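The RANSAC plane-fitting step for the sign candidates can be sketched with a minimal loop; the iteration count and inlier tolerance below are illustrative, not the paper's values.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Fit a plane to 3-D Lidar points by RANSAC: repeatedly fit a plane
    through 3 random points and keep the one with the most inliers
    (points within tol of the plane)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best = np.zeros(len(pts), bool)
    for _ in range(n_iter):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        nrm = np.linalg.norm(normal)
        if nrm < 1e-12:
            continue                              # degenerate (collinear) sample
        dist = np.abs((pts - a) @ (normal / nrm))  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The inlier set of the winning plane defines a candidate sign panel, which is then projected into the image to form an ROI.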

  4. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilizes as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations, and demonstrate the merits of this novel 3-D reconstruction paradigm. PMID:19380272

  5. Student Video Stimulus and Changing Images of the Soviet Union: An Experimental Pilot Study of Video in Teaching International Relations

    ERIC Educational Resources Information Center

    Stover, William J.

    1978-01-01

    In an experiment on the effects of video in teaching international relations, two classes were given video stimuli related to the Soviet Union and one class was given none. Findings indicate that exposure to video stimulus produced strong attitude change toward the Soviet Union and ability to understand and express Soviet perspectives. (Author/DB)

  6. Liver reserve function assessment by acoustic radiation force impulse imaging

    PubMed Central

    Sun, Xiao-Lan; Liang, Li-Wei; Cao, Hui; Men, Qiong; Hou, Ke-Zhu; Chen, Zhen; Zhao, Ya-E

    2015-01-01

    AIM: To evaluate the utility of liver reserve function by acoustic radiation force impulse (ARFI) imaging in patients with liver tumors. METHODS: Seventy-six patients with liver tumors were enrolled in this study. Serum biochemical indexes, such as alanine aminotransferase (ALT), aspartate aminotransferase (AST), serum albumin (ALB), total bilirubin (T-Bil), and other indicators were observed. Liver stiffness (LS) was measured by ARFI imaging; measurements were repeated 10 times, and the average value of the results was taken as the final LS value. Indocyanine green (ICG) retention was performed, and ICG-K and ICG-R15 were recorded. Child-Pugh (CP) scoring was carried out based on each patient’s preoperative biochemical tests and physical condition. Correlations among CP scores, ICG-R15, ICG-K and LS values were observed and analyzed using either the Pearson correlation coefficient or the Spearman rank correlation coefficient. The Kruskal-Wallis test was used to compare LS values across CP scores, and the receiver-operator characteristic (ROC) curve was used to analyze liver reserve function assessment accuracy. RESULTS: LS in the ICG-R15 10%-20% group was significantly higher than in the ICG-R15 < 10% group; the difference was statistically significant (2.19 ± 0.27 vs 1.59 ± 0.32, P < 0.01). LS in the ICG-R15 > 20% group was significantly higher than in the ICG-R15 < 10% group; the difference was statistically significant (2.92 ± 0.29 vs 1.59 ± 0.32, P < 0.01). The LS value in patients with CP class A was lower than in patients with CP class B (1.57 ± 0.34 vs 1.86 ± 0.27, P < 0.05), while the LS value in patients with CP class B was lower than in patients with CP class C (1.86 ± 0.27 vs 2.47 ± 0.33, P < 0.01). LS was positively correlated with ICG-R15 (r = 0.617, P < 0.01) and CP score (r = 0.772, P < 0.01). Meanwhile, LS was negatively correlated with ICG-K (r = -0.673, P < 0.01). AST, ALT and T-Bil were positively correlated with LS, while ALB was negatively

  7. Real time autonomous video image registration for endomicroscopy: fighting the compromises

    NASA Astrophysics Data System (ADS)

    Vercauteren, Tom; Meining, Alexander; Lacombe, François; Perchant, Aymeric

    2008-02-01

    Confocal endomicroscopy provides tools for in vivo imaging of human cell architecture endoscopically. These technologies pose a tough challenge since multiple trade-offs have to be overcome: resolution versus field of view, dynamics versus stability, contrast versus low laser power or low contrast agent doses. Many difficult clinical applications, such as lung, bile duct, urethral imaging and NOTES applications, need to optimize miniaturization, resolution, frame rate and contrast agent dose simultaneously. We propose one solution based on real-time video image processing to efficiently address these trade-offs. Dynamic imaging provides a flow of images that we process in real time. Images are aligned using efficient algorithms specifically adapted to confocal devices. From the displacement that we find across the images, instantaneous velocities are computed and used to compensate for motion distortions. All images are stitched together onto the same reference space and displayed in real time to reconstruct an image of the entire surface explored during the clinical procedure. This representation brings both stability and an increased field of view. Moreover, because a given area can be imaged by several frames, the contrast can be improved using temporal adaptive averaging. Such processing enhances the visualization of the video sequence, overcoming most classical trade-offs. The stability and increased field of view help the clinician better focus attention on the procedure, which improves patient benefit. Our tools are currently being evaluated in a multicenter clinical trial to assess the improvement of clinical practice.
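The temporal adaptive averaging mentioned above can be sketched as a running exponential average over registered frames; the weight `alpha` is illustrative and the frame registration is assumed already done.

```python
import numpy as np

def temporal_average(frames, alpha=0.25):
    """Running exponential average of registered frames: areas imaged by
    several frames are averaged, suppressing noise and improving contrast,
    while a single multiply-add per pixel keeps it real-time friendly."""
    acc = None
    for f in frames:
        f = np.asarray(f, float)
        acc = f.copy() if acc is None else (1 - alpha) * acc + alpha * f
    return acc
```

An exponential rather than a plain mean lets the mosaic adapt when tissue appearance changes during the procedure.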

  8. Video-rate molecular imaging in vivo with stimulated Raman scattering.

    PubMed

    Saar, Brian G; Freudiger, Christian W; Reichman, Jay; Stanley, C Michael; Holtom, Gary R; Xie, X Sunney

    2010-12-01

    Optical imaging in vivo with molecular specificity is important in biomedicine because of its high spatial resolution and sensitivity compared with magnetic resonance imaging. Stimulated Raman scattering (SRS) microscopy allows highly sensitive optical imaging based on vibrational spectroscopy without adding toxic or perturbative labels. However, SRS imaging in living animals and humans has not been feasible because light cannot be collected through thick tissues, and motion-blur arises from slow imaging based on backscattered light. In this work, we enable in vivo SRS imaging by substantially enhancing the collection of the backscattered signal and increasing the imaging speed by three orders of magnitude to video rate. This approach allows label-free in vivo imaging of water, lipid, and protein in skin and mapping of penetration pathways of topically applied drugs in mice and humans. PMID:21127249

  9. Optimal block boundary pre/postfiltering for wavelet-based image and video compression.

    PubMed

    Liang, Jie; Tu, Chengjie; Tran, Trac D

    2005-12-01

    This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467

  10. Image quality, tissue heating, and frame rate trade-offs in acoustic radiation force impulse imaging.

    PubMed

    Bouchard, Richard R; Dahl, Jeremy J; Hsu, Stephen J; Palmeri, Mark L; Trahey, Gregg E

    2009-01-01

    The real-time application of acoustic radiation force impulse (ARFI) imaging requires both short acquisition times for a single ARFI image and repeated acquisition of these frames. Due to the high energy of pulses required to generate appreciable radiation force, however, repeated acquisitions could result in substantial transducer face and tissue heating. We describe and evaluate several novel beam sequencing schemes which, along with parallel-receive acquisition, are designed to reduce acquisition time and heating. These techniques reduce the total number of radiation force impulses needed to generate an image and minimize the time between successive impulses. We present qualitative and quantitative analyses of the trade-offs in image quality resulting from the acquisition schemes. Results indicate that these techniques yield a significant improvement in frame rate with only moderate decreases in image quality. Tissue and transducer face heating resulting from these schemes is assessed through finite element method modeling and thermocouple measurements. Results indicate that heating issues can be mitigated by employing ARFI acquisition sequences that utilize the highest track-to-excitation ratio possible. PMID:19213633

  11. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

    [Figure: fiber-optic light source; microscope and charge-coupled device (CCD) camera head connected to the camera body; CCD camera body feeding data to an image acquisition board in a PC; Cartesian robot controlled via a PC board.] The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.

  12. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  13. Video-rate dual polarization multispectral endoscopic imaging

    NASA Astrophysics Data System (ADS)

    Pigula, Anne; Clancy, Neil T.; Arya, Shobhit; Hanna, George B.; Elson, Daniel S.

    2015-03-01

    Cancerous and precancerous growths often exhibit changes in scattering, and therefore depolarization, as well as collagen breakdown, causing changes in birefringent effects. Simple difference of linear polarization imaging is unable to represent anisotropic effects like birefringence, and Mueller polarimetry is time-consuming and difficult to implement clinically. This work presents a dual-polarization endoscope to collect co- and cross-polarized images for each of two polarization states, and further incorporates narrow band detection to increase vascular contrast, particularly vascular remodeling present in diseased tissue, and provide depth sensitivity. The endoscope was shown to be sensitive to both isotropic and anisotropic materials in phantom and in vivo experiments.

  14. Segmentation of the spinous process and its acoustic shadow in vertebral ultrasound images.

    PubMed

    Berton, Florian; Cheriet, Farida; Miron, Marie-Claude; Laporte, Catherine

    2016-05-01

    Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, in the extraction of several image features and the selection of the most relevant ones for the discrimination of the three regions. Then, using this set of features and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84%, 92% and 91% were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained respectively for the spinous process and acoustic shadow, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of image resolution. This suggests that the proposed method is a promising tool for the measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis.
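The pixel-wise linear discriminant classification step can be sketched, numpy-only, as a pooled-covariance Gaussian classifier (which is what LDA classification amounts to); the feature extraction and geometric regularization of the paper are omitted.

```python
import numpy as np

def lda_fit(X, y):
    """Linear discriminant analysis as a pooled-covariance Gaussian
    classifier: one mean per class, a single shared covariance. A minimal
    sketch of the per-pixel classification step."""
    X, y = np.asarray(X, float), np.asarray(y)
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    centered = np.concatenate([X[y == c] - m for c, m in zip(classes, means)])
    icov = np.linalg.pinv(centered.T @ centered / len(X))   # pooled covariance
    # linear discriminant score: x^T S^-1 mu_c - 0.5 mu_c^T S^-1 mu_c
    offsets = 0.5 * np.einsum('ij,jk,ik->i', means, icov, means)
    def predict(Xnew):
        scores = np.asarray(Xnew, float) @ icov @ means.T - offsets
        return classes[np.argmax(scores, axis=1)]
    return predict
```

With three classes (spinous process, shadow, other tissues), `predict` is applied to the feature vector of every pixel, and the label map is then regularized as described above.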

  15. Standoff passive video imaging at 350 GHz with 251 superconducting detectors

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Gentry, Cale; Smirnov, Ilya; Ade, Peter; Beall, James; Cho, Hsiao-Mei; Dicker, Simon; Duncan, William; Halpern, Mark; Hilton, Gene; Irwin, Kent; Li, Dale; Paulter, Nicholas; Reintsema, Carl; Schwall, Robert; Tucker, Carole

    2014-06-01

Millimeter-wavelength radiation holds promise for detection of security threats at a distance, including suicide bomb belts and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) detectors makes them ideal for passive imaging of thermal signals at these wavelengths. We have built a 350 GHz video-rate imaging system using a large-format array of feedhorn-coupled TES bolometers. The system operates at a standoff distance of 16 m to 28 m with a spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector subarray, and will be expanded to four subarrays for a total of 1004 detectors. The system has been used to take video images which reveal the presence of weapons concealed beneath a shirt in an indoor setting. We present a summary of this work.

  16. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin F.; Summers, Chris J.

    1992-01-01

In this report we present the progress made during the second six-month period of the project. This includes experimental and theoretical work on the acoustic charge transport (ACT) portion of the chip; theoretical modelling of both the avalanche photodiode (APD) and the charge transfer and overflow transistor; and the materials growth and fabrication part of the program.

  17. Segmentation and classification of shallow subbottom acoustic data, using image processing and neural networks

    NASA Astrophysics Data System (ADS)

    Yegireddi, Satyanarayana; Thomas, Nitheesh

    2014-06-01

A subbottom acoustic profiler provides acoustic imaging of the upper sediment layers of the seabed, which is essential for geological and offshore geo-engineering studies. Delineating the subbottom structure from noisy acoustic data and classifying the sediment strata is a challenging task for conventional signal processing techniques. Image processing techniques exploit the spatial variability of image characteristics and are known for their potential in medical imaging and pattern recognition applications. In the present study, they are found to be effective in demarcating the boundaries of sediment layers with weak acoustic reflectivity that are masked by a noisy background. The study applies image processing techniques such as segmentation to identify subbottom features, and extracts textural feature vectors using grey-level co-occurrence matrix statistics. Classification was then attempted with a Self-Organising Map, an unsupervised neural network model, using these feature vectors. The methodology successfully demarcated the different sediment layers in the subbottom images and established that the sediments constituting the four inferred subsurface layers differ from each other. The network model was tested for consistency with repeated runs of different network configurations, and the trained network was further tested on a few untrained images representing a similar environment; the classification results show good agreement with the anticipated results.

  18. High performance computational integral imaging system using multi-view video plus depth representation

    NASA Astrophysics Data System (ADS)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV. But its application is obstructed by poor image quality, huge data volume and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. Firstly, a particular depth-image-based rendering (DIBR) technique is used in encoding process to exploit the inter-view correlation between different sub-images (SIs). Thereafter, the same DIBR method is applied in the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality, while reducing calculation complexity.

19. Coastal morphodynamic features/patterns analysis through a video-based system and image processing

    NASA Astrophysics Data System (ADS)

    Santos, Fábio; Pais-Barbosa, Joaquim; Teodoro, Ana C.; Gonçalves, Hernâni; Baptista, Paolo; Moreira, António; Veloso-Gomes, Fernando; Taveira-Pinto, Francisco; Gomes-Costa, Paulo; Lopes, Vítor; Neves-Santos, Filipe

    2012-10-01

The Portuguese coastline, like many others worldwide, is frequently subjected to extreme events that result in erosion, so the acquisition of high-quality field measurements has become a common concern. Nearshore survey systems have traditionally been based on in situ measurements or on satellite- or aircraft-mounted remote sensing systems. As an alternative, video-monitoring systems have proved to be an economical and efficient way to collect useful, continuous data and to document extreme events. In this context, the project MoZCo (Advanced Methodologies and Techniques Development for Coastal Zone Monitoring) is under development, with the aim of developing and implementing monitoring techniques for the coastal zone based on a low-cost video monitoring system. The pilot study area is Ofir beach (northern Portugal), a critical coastal area. At the beginning of the project (2010), a video monitoring station was deployed, collecting snapshots and 10-minute videos every hour. To process the data, several video image processing algorithms were implemented in Matlab®, providing the main video-monitoring products, such as shoreline detection. An algorithm based on image processing techniques was developed using the HSV color space: a study area and a sample area containing pixels associated with dry and wet regions are selected, and thresholding and morphological operators are then applied. Comparison of the results with manual digitization yielded promising results despite the method's simplicity, and the method remains under development in order to optimize the results.
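The shoreline-detection idea described above (HSV color space, thresholding, then morphological clean-up) can be sketched as follows. Everything here is an illustrative assumption rather than the MoZCo implementation: the MoZCo algorithm derives its threshold from user-selected dry/wet sample areas, whereas this sketch hard-codes a saturation threshold, uses only the S and V channels, and approximates a morphological opening with shift-based erosion/dilation (note the wrap-around at image borders from `np.roll`):

```python
import numpy as np

def rgb_to_sv(img):
    """Saturation and value channels of an RGB image with values in [0, 1]."""
    mx = img.max(axis=2)
    mn = img.min(axis=2)
    s = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0.0)
    return s, mx

def binary_open(mask, it=1):
    """Erosion then dilation with a 3x3 cross: a minimal morphological opening."""
    def shift_stack(m):
        return np.stack([m,
                         np.roll(m, 1, 0), np.roll(m, -1, 0),
                         np.roll(m, 1, 1), np.roll(m, -1, 1)])
    for _ in range(it):
        mask = shift_stack(mask).all(axis=0)   # erosion
    for _ in range(it):
        mask = shift_stack(mask).any(axis=0)   # dilation
    return mask

def wet_mask(img, s_thresh=0.3):
    """Label 'wet' pixels by a saturation threshold, then clean up with opening."""
    s, v = rgb_to_sv(img)
    return binary_open(s > s_thresh)
```

The shoreline would then be traced along the boundary of the cleaned mask.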

  20. Method and apparatus for detecting internal structures of bulk objects using acoustic imaging

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2002-01-01

    Apparatus for producing an acoustic image of an object according to the present invention may comprise an excitation source for vibrating the object to produce at least one acoustic wave therein. The acoustic wave results in the formation of at least one surface displacement on the surface of the object. A light source produces an optical object wavefront and an optical reference wavefront and directs the optical object wavefront toward the surface of the object to produce a modulated optical object wavefront. A modulator operatively associated with the optical reference wavefront modulates the optical reference wavefront in synchronization with the acoustic wave to produce a modulated optical reference wavefront. A sensing medium positioned to receive the modulated optical object wavefront and the modulated optical reference wavefront combines the modulated optical object and reference wavefronts to produce an image related to the surface displacement on the surface of the object. A detector detects the image related to the surface displacement produced by the sensing medium. A processing system operatively associated with the detector constructs an acoustic image of interior features of the object based on the phase and amplitude of the surface displacement on the surface of the object.

  1. Are existing procedures enough? Image and video quality assessment: review of subjective and objective metrics

    NASA Astrophysics Data System (ADS)

    Ouni, Sonia; Chambah, Majed; Herbin, Michel; Zagrouba, Ezzeddine

    2008-01-01

Images and videos are subject to a wide variety of distortions during acquisition, digitizing, processing, restoration, compression, storage, transmission and reproduction, any of which may result in a degradation of visual quality. That is why image quality assessment plays a major role in many image processing applications. Image and video quality metrics can be classified using a number of criteria, such as the type of application domain, the predicted distortion (noise, blur, etc.) and the type of information needed to assess the quality (original image, distorted image, etc.). In the literature, the most reliable way of assessing the quality of an image or a video is subjective evaluation [1], because human beings are the ultimate receivers in most applications. The subjective quality metric, obtained from a number of human observers, has been regarded for many years as the most reliable form of quality measurement. However, this approach is too cumbersome, slow and expensive for most applications [2]. In recent years, therefore, a great effort has been made towards the development of quantitative measures. Objective quality evaluation is automated, done in real time and needs no user interaction. Ideally, such a quality assessment system would perceive and measure image or video impairments just as a human being would [3]. Quality assessment remains an active and evolving research topic because it is a central issue in the design, implementation and performance testing of all such systems [4, 5]. Usually, the relevant literature and related work present only a state of the art of metrics limited to a specific application domain. The major goal of this paper is to present a wider state of the art of the metrics most used across several application domains, such as compression [6], restoration [7], etc.
In this paper, we review the basic concepts and methods in subjective and objective image/video quality assessment research and
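As a concrete example of the objective, full-reference metrics such a survey covers, the widely used peak signal-to-noise ratio (PSNR) scores a distorted image against its original; a minimal sketch (function names are illustrative):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between reference and distorted images."""
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images; higher means closer
    to the reference. Identical images give infinite PSNR."""
    e = mse(ref, test)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

PSNR is simple and automated but, as the abstract notes, does not always track human judgments, which is why perceptual metrics remain an active topic.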

  2. The integrated platform of controlling and digital video processing for underwater range-gated laser imaging system

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Qiu, Su; Jin, Wei-qi; Yu, Bing; Li, Li; Tian, Dong-kang

    2015-04-01

Laser range-gated imaging is one of the most effective techniques for underwater optical imaging; combined with video image processing, it can extend the viewing distance by a factor of 4 to 7. Control and image processing are therefore the key technologies for an underwater laser range-gated imaging system. In this article, an FPGA-based integrated platform for control and digital video processing in an underwater range-gated laser imaging system is introduced. It handles both communication with the remote control system, in the role of lower-level computer, and high-speed image grabbing and video enhancement processing, in the role of high-speed image processing platform. The host computer sends composed commands to the FPGA, directing the underwater range-gated laser imaging system to execute its operations.

  3. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications

    PubMed Central

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-01-01

    Hyperspectral imaging has proven significance in bio-imaging applications and it has the ability to capture up to several hundred images of different wavelengths offering relevant spectral signatures. To use hyperspectral imaging for in vivo monitoring and diagnosis of the internal body cavities, a snapshot hyperspectral video-endoscope is required. However, such reported systems provide only about 50 wavelengths. We have developed a four-dimensional snapshot hyperspectral video-endoscope with a spectral range of 400–1000 nm, which can detect 756 wavelengths for imaging, significantly more than such systems. Capturing the three-dimensional datacube sequentially gives the fourth dimension. All these are achieved through a flexible two-dimensional to one-dimensional fiber bundle. The potential of this custom designed and fabricated compact biomedical probe is demonstrated by imaging phantom tissue samples in reflectance and fluorescence imaging modalities. It is envisaged that this novel concept and developed probe will contribute significantly towards diagnostic in vivo biomedical imaging in the near future. PMID:27044607

  4. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications.

    PubMed

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-01-01

    Hyperspectral imaging has proven significance in bio-imaging applications and it has the ability to capture up to several hundred images of different wavelengths offering relevant spectral signatures. To use hyperspectral imaging for in vivo monitoring and diagnosis of the internal body cavities, a snapshot hyperspectral video-endoscope is required. However, such reported systems provide only about 50 wavelengths. We have developed a four-dimensional snapshot hyperspectral video-endoscope with a spectral range of 400-1000 nm, which can detect 756 wavelengths for imaging, significantly more than such systems. Capturing the three-dimensional datacube sequentially gives the fourth dimension. All these are achieved through a flexible two-dimensional to one-dimensional fiber bundle. The potential of this custom designed and fabricated compact biomedical probe is demonstrated by imaging phantom tissue samples in reflectance and fluorescence imaging modalities. It is envisaged that this novel concept and developed probe will contribute significantly towards diagnostic in vivo biomedical imaging in the near future. PMID:27044607

  5. Method based on video imaging to correct the consistency of multi-optical axes

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Li, Ya-can; Fan, Jing-fan; Jiang, Yu-hua; Fan, Fan; Jin, Wei-qi; Wang, Xia

    2010-10-01

Multi-sensor photoelectric systems, which integrate laser ranging, laser-guided radiation, visible-light imaging and thermal imaging, are widely used on modern weaponry platforms. Detecting the consistency of their optical axes has become key to evaluating these systems. To meet the requirements of multi-optical-axis consistency detection, this paper puts forward a new method, based on video imaging, to correct the consistency of multiple optical axes. The method abandons traditional approaches based on numerous refractions and reflections between many optical components. It takes the laser axis as the reference and obtains the exposure point of the laser beam in the scene through video imaging. Then, by comparing the exposure point with the TV optical-axis reticle, the TV optical axis and the laser axis can be adjusted and kept consistent by means of an electronic reticle. Based on this method, a portable imaging detector prototype has been built that can detect the consistency of multiple optical axes in the outfield. The prototype can check the consistency of the CCD imaging system and the thermal imaging system by adjusting each against the laser rangefinder axis/laser irradiation axis separately, making optical-axis adjustment simple and straightforward through imaging. With easy operation, environmental adaptability and a compact structure, the system is suitable for outfield testing and is expected to be used for online multi-optical-axis consistency detection.

  6. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-04-01

    Hyperspectral imaging has proven significance in bio-imaging applications and it has the ability to capture up to several hundred images of different wavelengths offering relevant spectral signatures. To use hyperspectral imaging for in vivo monitoring and diagnosis of the internal body cavities, a snapshot hyperspectral video-endoscope is required. However, such reported systems provide only about 50 wavelengths. We have developed a four-dimensional snapshot hyperspectral video-endoscope with a spectral range of 400–1000 nm, which can detect 756 wavelengths for imaging, significantly more than such systems. Capturing the three-dimensional datacube sequentially gives the fourth dimension. All these are achieved through a flexible two-dimensional to one-dimensional fiber bundle. The potential of this custom designed and fabricated compact biomedical probe is demonstrated by imaging phantom tissue samples in reflectance and fluorescence imaging modalities. It is envisaged that this novel concept and developed probe will contribute significantly towards diagnostic in vivo biomedical imaging in the near future.

  7. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  8. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This video image is of the STS-2 Columbia, Orbiter Vehicle (OV) 102, payload bay (PLB) showing the Office of Space Terrestrial Applications 1 (OSTA-1) pallet (Shuttle Imaging Radar A (SIR-A) antenna (left) and SIR-A recorder, Shuttle Multispectral Infrared Radiometer (SMIRR), Feature Identification Location Experiment (FILE), Measurement of Air Pollution for Satellites (MAPS) (right)). The image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). It is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  9. Enhancement of time-domain acoustic imaging based on generalized cross-correlation and spatial weighting

    NASA Astrophysics Data System (ADS)

    Quaegebeur, Nicolas; Padois, Thomas; Gauthier, Philippe-Aubert; Masson, Patrice

    2016-06-01

In this paper, an alternative formulation of time-domain beamforming is proposed, based on the generalized cross-correlation of measured signals. This formulation uses spatial weighting functions adapted to the microphone positions and imaging points. The proposed approach is demonstrated for acoustic source localization using a microphone array, both theoretically and experimentally. An increase in the accuracy of acoustic imaging results is shown for both narrowband and broadband sources, while a reduction in computation time by a factor of up to 20 can be achieved, allowing real-time or volumetric source localization over very large grids.
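The generalized cross-correlation underlying this formulation is often computed with the phase-transform (PHAT) weighting. The sketch below estimates the time delay between one pair of microphone signals; the paper's full method additionally maps such correlations onto an imaging grid with spatial weighting functions, which is not shown here, and the PHAT choice is an illustrative assumption:

```python
import numpy as np

def gcc_phat(x, y, fs=1.0):
    """Delay of y relative to x (in seconds) via generalized cross-correlation
    with PHAT weighting, which discards magnitude and keeps only phase."""
    n = len(x) + len(y)
    R = np.fft.rfft(y, n) * np.conj(np.fft.rfft(x, n))
    R /= np.maximum(np.abs(R), 1e-12)     # PHAT weighting
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lag = int(np.argmax(np.abs(cc))) - max_shift
    return lag / fs
```

Given the array geometry, the delays (or the correlation functions themselves) are then back-projected to candidate source positions to form the acoustic image.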

  10. Apparatus for real-time acoustic imaging of Rayleigh-Benard convection.

    PubMed

    Kuehn, Kerry; Polfer, Jonathan; Furno, Joanna; Finke, Nathan

    2007-11-01

    We have designed and built an apparatus for real-time acoustic imaging of convective flow patterns in optically opaque fluids. This apparatus takes advantage of recent advances in two-dimensional ultrasound transducer array technology; it employs a modified version of a commercially available ultrasound camera, similar to those employed in nondestructive testing of solids. Images of convection patterns are generated by observing the lateral variation of the temperature dependent speed of sound via refraction of acoustic plane waves passing vertically through the fluid layer. The apparatus has been validated by observing convection rolls in both silicone oil and ferrofluid. PMID:18052477

  11. Exploration of amphoteric and negative refraction imaging of acoustic sources via active metamaterials

    NASA Astrophysics Data System (ADS)

    Wen, Jihong; Shen, Huijie; Yu, Dianlong; Wen, Xisen

    2013-11-01

    The present work describes the design of three flat superlens structures for acoustic source imaging and explores an active acoustic metamaterial (AAM) to realise such a design. The first two lenses are constructed via the coordinate transform method (CTM), and their constituent materials are anisotropic. The third lens consists of a material that has both a negative density and a negative bulk modulus. In these lenses, the quality of the images is “clear” and sharp; thus, the diffraction limit of classical lenses is overcome. Finally, a multi-control strategy is developed to achieve the desired parameters and to eliminate coupling effects in the AAM.

  12. Video-rate scanning two-photon excitation fluorescence microscopy and ratio imaging with cameleons.

    PubMed

    Fan, G Y; Fujisaki, H; Miyawaki, A; Tsay, R K; Tsien, R Y; Ellisman, M H

    1999-05-01

A video-rate (30 frames/s) scanning two-photon excitation microscope has been successfully tested. The microscope, based on a Nikon RCM 8000, incorporates a femtosecond pulsed laser with wavelength tunable from 690 to 1050 nm, prechirper optics for laser pulse-width compression, a resonant galvanometer for video-rate point scanning, and a pair of nonconfocal detectors for fast emission ratioing. An increase in fluorescent emission of 1.75-fold is consistently obtained with the use of the prechirper optics. The nonconfocal detectors provide another 2.25-fold increase in detection efficiency. Ratio imaging and optical sectioning can therefore be performed more efficiently without confocal optics. Faster frame rates, at 60, 120, and 240 frames/s, can be achieved with proportionally reduced scan lines per frame. Useful two-photon images can be acquired at video rate with a laser power as low as 2.7 mW at the specimen with genetically modified green fluorescent proteins. Preliminary results obtained using this system confirm that the yellow "cameleons" exhibit optical properties similar to those under one-photon excitation conditions. Dynamic two-photon images of cardiac myocytes and ratio images of yellow cameleon-2.1, -3.1, and -3.1nu are also presented. PMID:10233058

  13. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    SciTech Connect

    Wright, R.M.; Zander, M.E.; Brown, S.K.; Sandoval, D.P.; Gilpatrick, J.D.; Gibson, H.E.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  14. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    SciTech Connect

    Wright, R.M.; Zander, M.E.; Brown, S.K.; Sandoval, D.P.; Gilpatrick, J.D.; Gibson, H.E.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  15. Acoustic Neuroma Educational Video

    MedlinePlus Videos and Cool Tools


  16. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operation of scientific experiments. This paper presents the results of an investigation to determine whether video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression, and the associated computational parameters, on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image compression levels, even over the large range of 5:1 to 120:1 used in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  17. Viral video: Live imaging of virus-host encounters

    NASA Astrophysics Data System (ADS)

    Son, Kwangmin; Guasto, Jeffrey S.; Cubillos-Ruiz, Andres; Chisholm, Sallie W.; Sullivan, Matthew B.; Stocker, Roman

    2014-11-01

Viruses are non-motile infectious agents that rely on Brownian motion to encounter and subsequently adsorb to their hosts. Paradoxically, the viral adsorption rate is often reported to be larger than the theoretical limit imposed by the virus-host encounter rate, highlighting a major gap in the experimental quantification of virus-host interactions. Here we present the first direct quantification of the viral adsorption rate, obtained using live imaging of individual host cells and viruses over thousands of encounter events. The host-virus pair consisted of Prochlorococcus MED4, an 800 nm non-motile bacterium that dominates photosynthesis in the oceans, and its virus PHM-2, a myovirus with an 80 nm icosahedral capsid and a 200 nm long rigid tail. We simultaneously imaged hosts and viruses moving by Brownian motion using two-channel epifluorescence microscopy in a microfluidic device. This detailed quantification of viral transport yielded a 20-fold smaller adsorption efficiency than previously reported, indicating the need for a major revision of infection models for marine, and likely other, ecosystems.
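The "theoretical limit imposed by the virus-host encounter rate" is presumably the classical diffusion-limited (Smoluchowski) rate for two Brownian spheres; assuming that model (the abstract does not name it), the encounter-rate kernel is:

```latex
% Diffusion-limited encounter-rate kernel for a virus (radius a_v) and a host (radius a_h):
k_{\mathrm{enc}} = 4\pi\,(D_v + D_h)\,(a_v + a_h),
\qquad
D_i = \frac{k_B T}{6\pi\eta a_i} \quad \text{(Stokes--Einstein)}
```

An adsorption rate exceeding $k_{\mathrm{enc}}$ would imply more adsorptions than encounters, which is the paradox the measurement resolves.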

  18. Acoustic Imaging of Ferroelectric Domains in BaTiO3 Single Crystals Using Atomic Force Microscope

    NASA Astrophysics Data System (ADS)

    Zeng, Huarong; Shimamura, Kiyoshi; Kannan, Chinna Venkadasamy; Villora, Encarnacion G.; Takekawa, Shunji; Kitamura, Kenji; Yin, Qingrui

    2007-01-01

An “alternating-force-modulated” atomic force microscope (AFM) operating in the acoustic mode, generated by launching acoustic waves on the piezoelectric transducer attached to the cantilever, was used to visualize the ferroelectric domains in barium titanate (BaTiO3) single crystals by detecting acoustic vibrations generated by the tip and transmitted through the sample placed beneath it to the transducer. The acoustic signal was found to reflect local elastic microstructures at low frequencies, while high-frequency acoustic images revealed stripe-like domain configurations of internal substructures in BaTiO3 single crystals. The underlying acoustic imaging mechanism using the AFM was discussed in terms of the interaction between the excited acoustic wave and the ferroelectric domains.

19. Precise color images: a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

High-speed imaging systems are used across a wide range of science and engineering. Although high-speed camera systems have been improved to high performance, most applications use them only to obtain high-speed motion pictures. In some fields of science and technology, however, it is useful to extract other information, such as the temperature of combustion flames, thermal plasmas and molten materials. Recent digital high-speed video imaging technology should be able to obtain such information from these objects. For this purpose, we have developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256 (8-bit) intensity levels per pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, a digital technique is needed, consisting of a computer program and ancillary instruments, to adjust the displacement between images taken by the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels by this method.
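The displacement adjustment between sensor images described above can be illustrated (not reproduced: the authors' technique reaches 0.2-pixel accuracy, presumably via sub-pixel refinement such as peak interpolation, which this sketch omits) with integer-pixel phase correlation between two channels:

```python
import numpy as np

def register_shift(a, b):
    """Integer-pixel shift (dy, dx) such that b ~ np.roll(a, (dy, dx), axis=(0, 1)),
    estimated by phase correlation (normalized cross-power spectrum)."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cc = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
    peak = np.unravel_index(int(np.argmax(cc)), cc.shape)
    # map wrap-around peak indices to signed shifts
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, cc.shape))
```

Once the shift between two sensors is known, one channel is resampled onto the other's grid before the intensity calibration step.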

  20. MO-A-BRD-06: In Vivo Cherenkov Video Imaging to Verify Whole Breast Irradiation Treatment

    SciTech Connect

    Zhang, R; Glaser, A; Jarvis, L; Gladstone, D; Andreozzi, J; Hitchcock, W; Pogue, B

    2014-06-15

    Purpose: To show in vivo video imaging of Cherenkov emission (Cherenkoscopy) can be acquired in the clinical treatment room without affecting the normal process of external beam radiation therapy (EBRT). Applications of Cherenkoscopy, such as patient positioning, movement tracking, treatment monitoring and superficial dose estimation, were examined. Methods: In a phase 1 clinical trial, including 12 patients undergoing post-lumpectomy whole breast irradiation, Cherenkov emission was imaged with a time-gated ICCD camera synchronized to the radiation pulses, during 10 fractions of the treatment. Images from different treatment days were compared by calculating their 2-D correlations with the averaged image. An edge detection algorithm was utilized to highlight biological features, such as the blood vessels. Superficial dose deposited at the sampling depth was derived from the Eclipse treatment planning system (TPS) and compared with the Cherenkov images. Skin reactions were graded weekly according to the Common Toxicity Criteria and digital photographs were obtained for comparison. Results: Real-time (fps = 4.8) imaging of Cherenkov emission was feasible, and feasibility tests indicated that it could be improved to video rate (fps = 30) with system improvements. Dynamic field changes due to fast MLC motion were imaged in real time. The average 2-D correlation was about 0.99, suggesting that this imaging technique is stable and that the repeatability of patient positioning was outstanding. Edge-enhanced images of blood vessels were observed and could serve as unique biological markers for patient positioning and movement tracking (breathing). Small discrepancies exist between the Cherenkov images and the superficial dose predicted from the TPS, but the former agreed better with actual skin reactions than did the latter. 
Conclusion: Real time Cherenkoscopy imaging during EBRT is a novel imaging tool that could be utilized for patient positioning, movement tracking
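
    The day-to-day repeatability figure quoted above (an average 2-D correlation near 0.99) can be reproduced in miniature as a Pearson correlation of two images; a minimal sketch on synthetic data, not the clinical pipeline, with `corr2` an assumed helper name:

```python
import numpy as np

def corr2(a, b):
    """Pearson 2-D correlation coefficient between two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
mean_img = rng.random((128, 128))          # stand-in for the averaged image
daily = mean_img + 0.05 * rng.standard_normal((128, 128))  # one day's image
print(round(corr2(mean_img, daily), 3))
```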

  1. Object detection and classification using image moment functions applied to video and imagery analysis

    NASA Astrophysics Data System (ADS)

    Mise, Olegs; Bento, Stephen

    2013-05-01

    This paper proposes an object detection algorithm and framework based on a combination of the Normalized Central Moment Invariant (NCMI) and the Normalized Geometric Radial Moment (NGRM). The developed framework can detect objects using offline pre-loaded signatures and/or tracker data used to create an online object signature representation. To overcome the implementation constraints of low-powered hardware, the framework combines image moment functions with a multi-layer neural network, and it has been shown to be robust to false alarms on non-target objects. In addition, an optimization for fast calculation of the image moment descriptors is discussed. The framework has been successfully applied to target detection, and this paper presents an overview of its design and demonstrates its performance on real video and imagery scenes.
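
    Normalized central moments of the kind combined in descriptors such as NCMI follow directly from the moment definitions; a minimal sketch (the helper name and blob image are illustrative, not the paper's implementation):

```python
import numpy as np

def normalized_central_moment(img, p, q):
    """eta_pq = mu_pq / mu_00**(1 + (p+q)/2): the translation- and
    scale-invariant central moment of a grayscale image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu_pq = ((x - cx) ** p * (y - cy) ** q * img).sum()
    return mu_pq / m00 ** (1 + (p + q) / 2)

# invariance check: a blob and its translated copy give the same eta_20
img = np.zeros((64, 64))
img[20:30, 12:28] = 1.0
shifted = np.roll(img, (9, 7), axis=(0, 1))
print(normalized_central_moment(img, 2, 0),
      normalized_central_moment(shifted, 2, 0))
```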

  2. A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression.

    PubMed

    Guo, Chenlei; Zhang, Liming

    2010-01-01

    Salient areas in natural scenes are generally regarded as the areas on which the human eye will typically focus, and finding these areas is the key step in object detection. In computer vision, many models have been proposed to simulate the behavior of the eyes, such as SaliencyToolBox (STB) and the Neuromorphic Vision Toolkit (NVT), but they demand high computational cost and their results depend heavily on the choice of parameters. Although some region-based approaches were proposed to reduce the computational complexity of feature maps, these approaches were still unable to work in real time. Recently, a simple and fast approach called spectral residual (SR) was proposed, which uses the SR of the amplitude spectrum to calculate the image's saliency map. However, in our previous work, we pointed out that it is the phase spectrum, not the amplitude spectrum, of an image's Fourier transform that is key to calculating the location of salient areas, and we proposed the phase spectrum of Fourier transform (PFT) model. In this paper, we present a quaternion representation of an image which is composed of intensity, color, and motion features. Based on the principle of PFT, a novel multiresolution spatiotemporal saliency detection model called phase spectrum of quaternion Fourier transform (PQFT) is proposed to calculate the spatiotemporal saliency map of an image from its quaternion representation. Distinct from other models, the added motion dimension allows the phase spectrum to represent spatiotemporal saliency, so that attention selection can be performed not only for images but also for videos. In addition, the PQFT model can compute the saliency map of an image at various resolutions from coarse to fine. Therefore, the hierarchical selectivity (HS) framework based on the PQFT model is introduced here to construct a tree-structure representation of an image. With the help of HS, a model called multiresolution wavelet domain foveation (MWDF) is
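
    The PFT step that PQFT generalizes is compact enough to sketch: keep only the phase of the 2-D Fourier transform, invert, square, and smooth. This is a schematic single-channel version on a synthetic image, not the full quaternion model:

```python
import numpy as np

def pft_saliency(img, sigma=3.0):
    """Phase-spectrum-of-Fourier-transform saliency: discard the amplitude
    spectrum, invert the phase-only spectrum, square, and Gaussian-smooth."""
    F = np.fft.fft2(img)
    recon = np.fft.ifft2(np.exp(1j * np.angle(F)))   # phase-only reconstruction
    sal = np.abs(recon) ** 2
    # separable Gaussian smoothing via two 1-D convolutions
    r = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-r ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    sal = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, sal)
    sal = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, sal)
    return sal / sal.max()

# a lone bright square on a flat background should dominate the map
img = np.full((128, 128), 0.2)
img[60:70, 60:70] = 1.0
sal = pft_saliency(img)
print(sal[58:72, 58:72].mean() / sal[:40, :40].mean())
```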

  3. A Technique for Image Analysis by Photocell on a Video Screen

    NASA Astrophysics Data System (ADS)

    Lefebvre, G.; Thorax, L.; Ducom, J.

    1985-02-01

    In packaging, observing the production process is difficult because of the high speed of the machines. Setting up box erection is tedious, the origin of filling hazards is often unknown, and some on-line performances are not reproducible. To visualize, decompose, and analyse the fast-moving phenomena that cause packaging troubles, the French Paper and Board Research Institute has acquired NAC video equipment (200 pictures/sec.). An articulated stand mounted on a carriage was designed to facilitate shooting on the machine. An elaborate image analysis is undertaken to find the characteristics of the boards needed to optimize folding-box erection on the packaging lines. Current computer-based image analysis systems are large and expensive and require a specific program for each problem, so such equipment is presently used only in specialized fields. For the packaging field, where machines and products are diverse, we have developed a simple electronic technique for picture analysis. This technique is suitable for all kinds of processes or defects visualized by high-speed video shooting on the machine. The processes and the manufacturing incidents are analysed and monitored with a photocell on the video screen. Variations in light intensity are detected and written to a chart recorder. The movements of materials, their forming modifications, and the motions of machine elements are expressed as a "signature". Changes in the "signature" reveal random or reproducible variations corresponding to defects in the manufacturing process. A short event, visible on only a few images, is located by replaying the magnetic tape at normal or slower speed, according to the importance of the variation. This technique is also used to measure packaging deformations on line in real time. Ease of use, speed of setup, and the quantity of data obtained make it an efficient image analysis tool.

  4. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    PubMed

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years. PMID:23443315

  5. A low-cost, high-resolution, video-rate imaging optical radar

    SciTech Connect

    Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F.; Grantham, J.W.; Monson, T.

    1998-04-01

    Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

  6. Sunscope: a video-guided intubation system through a detachable imaging probe.

    PubMed

    Yeh, Jia-Rong; Shieh, Jiann-Shing; Lin, Chih-Peng; Sun, Wei-Zen

    2008-06-01

    We have designed a novel apparatus, the Sunscope, which integrates a semiconductor image sensor into a compact video-guided intubation system. This device consists of three separate modules: viewer, console and visual tube. The 4-inch LCD viewer panel displays the real-time video image with optimal view angle. The console is designed with respect to ergonomics allowing comfortable manipulation and internally accommodating the power supply, image processing components and connector platform for both viewer and probe. The distal end of the detachable probe is packaged with a high resolution lens, CMOS sensor, and four LEDs. The proximal end is a 6-pin connector which can be readily removed and attached on demand. The probe is detachable and disposable with length and diameter adaptable to the size of the endotracheal tube. In our preliminary test, the video-guided apparatus helped inexperienced performers to identify the vocal cords correctly and improve the success rate of intubation on the simulation model. With further improvements on the miniature design, all captured images could be transmitted to remote devices through standard wireless transmission and could thus be stored in a specific database station. The wireless technique enables image sharing on multiple devices while a powerful database can provide valuable resources for training, data mining and serial case studies. We demonstrate that the CMOS image sensor combined with advanced reduced instruction set computer machine can serve as a visual aid for tracheal intubation. The disposable station will become a revolutionary technology both in clinical practice and medical education. PMID:18593652

  7. High-Performance Motion Estimation for Image Sensors with Video Compression.

    PubMed

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME. PMID:26307996
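
    The motion-estimation kernel whose memory traffic the data-reuse scheme targets is, in its simplest form, exhaustive block matching; a minimal full-search SAD sketch on synthetic frames (the on-chip buffering scheme itself is not modeled here):

```python
import numpy as np

def full_search_me(cur, ref, block=8, radius=4):
    """Exhaustive block-matching motion estimation: for each block of the
    current frame, find the offset within +/-radius in the reference frame
    that minimizes the sum of absolute differences (SAD)."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block]
            best = (np.inf, 0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue            # candidate block leaves the frame
                    sad = np.abs(tgt - ref[y:y + block, x:x + block]).sum()
                    if sad < best[0]:
                        best = (sad, dy, dx)
            mvs[by // block, bx // block] = best[1:]
    return mvs

# a frame shifted by (2, 3) yields interior motion vectors of (-2, -3):
# each block's match lies back at its unshifted position in the reference
rng = np.random.default_rng(2)
ref = rng.random((32, 32))
cur = np.roll(ref, (2, 3), axis=(0, 1))
mv = full_search_me(cur, ref)
print(mv[1, 1])
```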

  9. Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties.

    PubMed

    Vogt, William C; Jia, Congxian; Wear, Keith A; Garra, Brian S; Joshua Pfefer, T

    2016-10-01

    Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison. PMID:26886681

  10. An echolocation model for the restoration of an acoustic image from a single-emission echo

    NASA Astrophysics Data System (ADS)

    Matsuo, Ikuo; Yano, Masafumi

    2004-12-01

    Bats can form a fine acoustic image of an object using frequency-modulated echolocation sounds. The acoustic image is an impulse response, known as a reflected-intensity distribution, which is composed of amplitude and phase spectra over a range of frequencies. However, bats detect only the amplitude spectrum, due to the low time resolution of their peripheral auditory system, and the frequency range of the emission is restricted. It is therefore necessary to restore the acoustic image from limited information. The amplitude spectrum varies with changes in the configuration of the reflected-intensity distribution, while the phase spectrum varies with changes in both its configuration and its location. Here, by introducing some reasonable constraints, a method is proposed for restoring an acoustic image from the echo. The configuration is extrapolated from the amplitude spectrum of the restricted frequency range by using the continuity condition of the amplitude spectrum at the minimum frequency of the emission and the minimum phase condition. Determining the location requires extracting the amplitude spectra that vary with location; for this purpose, Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates were used. The location is estimated from the temporal changes of the amplitude spectra.
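
    The minimum phase condition invoked above has a standard constructive form: fold the real cepstrum of the amplitude spectrum and exponentiate. A sketch of that generic reconstruction on a toy sequence, not the authors' full restoration method:

```python
import numpy as np

def minimum_phase(amplitude):
    """Reconstruct a minimum-phase spectrum from an amplitude spectrum
    (given on a full, even-length FFT grid) via real-cepstrum folding."""
    n = len(amplitude)
    cep = np.fft.ifft(np.log(amplitude)).real   # real cepstrum of log-amplitude
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2 * cep[1:n // 2]          # fold negative quefrencies forward
    fold[n // 2] = cep[n // 2]
    return np.exp(np.fft.fft(fold))

# check: a known minimum-phase sequence (zeros inside the unit circle)
# is recovered from its amplitude spectrum alone
h = np.zeros(64)
h[0], h[1], h[2] = 1.0, 0.5, 0.25
H = np.fft.fft(h)
H_rec = minimum_phase(np.abs(H))
print(np.allclose(H_rec, H))
```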

  11. Method and system to synchronize acoustic therapy with ultrasound imaging

    NASA Technical Reports Server (NTRS)

    Owen, Neil (Inventor); Bailey, Michael R. (Inventor); Hossack, James (Inventor)

    2009-01-01

    Interference in ultrasound imaging when used in connection with high intensity focused ultrasound (HIFU) is avoided by employing a synchronization signal to control the HIFU signal. Unless the timing of the HIFU transducer is controlled, its output will substantially overwhelm the signal produced by the ultrasound imaging system and obscure the image it produces. The synchronization signal employed to control the HIFU transducer is obtained without requiring modification of the ultrasound imaging system. Signals corresponding to scattered ultrasound imaging waves are collected using either the HIFU transducer or a dedicated receiver. A synchronization processor manipulates the scattered ultrasound imaging signals to produce the synchronization signal, which is then used to control the HIFU bursts so as to substantially reduce or eliminate HIFU interference in the ultrasound image. The synchronization processor can alternatively be implemented using a computing device or an application-specific circuit.

  12. Imaging Acoustic Phonon Dynamics on the Nanometer-Femtosecond Spatiotemporal Length-Scale with Ultrafast Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Plemmons, Dayne; Flannigan, David

    Coherent collective lattice oscillations known as phonons dictate a broad range of physical observables in condensed matter and act as primary energy carriers across a wide range of material systems. Despite this omnipresence, analysis of phonon dynamics on their native spatiotemporal scale, that is, the combined nanometer (nm) spatial and femtosecond (fs) temporal scales, has largely remained experimentally inaccessible. Here, we employ ultrafast electron microscopy (UEM) to directly image discrete acoustic phonons in real space with combined nm-fs resolution. By directly probing electron scattering in the image plane (as opposed to the diffraction plane), we retain phase information critical for following the evolution, propagation, scattering, and decay of phonons in relation to morphological features of the specimen (i.e., interfaces, grain boundaries, voids, ripples, etc.). We extract a variety of morphologically specific quantitative information from the UEM videos, including phonon frequencies, phase velocities, and decay times. We expect these direct manifestations of local elastic properties in the vicinity of material defects and interfaces will aid in the understanding and application of phonon-mediated phenomena in nanostructures. Department of Chemical Engineering and Materials Science, University of Minnesota, Minneapolis, MN, 55455, USA.

  13. Video rate imaging of narrow band THz radiation based on frequency upconversion

    NASA Astrophysics Data System (ADS)

    Tekavec, Patrick F.; Kozlov, Vladimir G.; Mcnee, Ian; Spektor, Igor E.; Lebedev, Sergey P.

    2015-03-01

    We demonstrate video rate THz imaging by detecting a frequency upconverted signal with a CMOS camera. A fiber laser pumped, double resonant optical parametric oscillator generates THz pulses via difference frequency generation in a quasi-phasematched gallium arsenide (QPM-GaAs) crystal located inside the OPO cavity. The output produced THz pulses centered at 1.5 THz, with an average power up to 1 mW, a linewidth of <100 GHz, and peak power of >2 W. By mixing the THz pulses with a portion of the fiber laser pump (1064 nm) in a second QPM-GaAs crystal, distinct sidebands are observed at 1058 nm and 1070 nm, corresponding to sum and difference frequency generation of the pump pulse with the THz pulse. By using a polarizer and long pass filter, the strong pump light can be removed, leaving a nearly background free signal at 1070 nm. For imaging, a Fourier imaging geometry is used, with the object illuminated by the THz beam located one focal length from the GaAs crystal. The spatial Fourier transform is upconverted with a large diameter pump beam, after which a second lens inverse transforms the upconverted spatial components, and the image is detected with a CMOS camera. We have obtained video rate images with spatial resolution of 1 mm and field of view ca. 20 mm in diameter without any post processing of the data.

  14. Real-time three-dimensional Fourier-domain optical coherence tomography video image guided microsurgeries

    PubMed Central

    Huang, Yong; Zhang, Kang; Ibrahim, Zuhaib; Cha, Jaepyeong; Lee, W. P. Andrew; Brandacher, Gerald; Gehlbach, Peter L.

    2012-01-01

    Abstract. The authors describe the development of an ultrafast three-dimensional (3D) optical coherence tomography (OCT) imaging system that provides real-time intraoperative video images of the surgical site to assist surgeons during microsurgical procedures. This system is based on a full-range, complex-conjugate-free Fourier-domain OCT (FD-OCT). The system was built on a CPU-GPU heterogeneous computing architecture capable of video OCT image processing. The system displays at a maximum speed of 10 volumes/s for an image volume size of 160×80×1024 (X×Y×Z) pixels. We have used this system to visualize and guide two prototypical microsurgical maneuvers: microvascular anastomosis of the rat femoral artery and ultramicrovascular isolation of the retinal arterioles of the bovine retina. Our preliminary experiments using 3D-OCT-guided microvascular anastomosis showed optimal visualization of the rat femoral artery (diameter < 0.8 mm), instruments, and suture material. Real-time intraoperative guidance helped facilitate precise suture placement due to optimized views of the vessel wall during anastomosis. Using the bovine retina as a model system, we have performed “ultra microvascular” feasibility studies by guiding handheld surgical micro-instruments to isolate retinal arterioles (diameter ∼0.1 mm). Isolation of the microvessels was confirmed by successfully passing a suture beneath the vessel in the 3D imaging environment. PMID:23224164

  15. Analysis of Decorrelation Transform Gain for Uncoded Wireless Image and Video Communication.

    PubMed

    Ruiqin Xiong; Feng Wu; Jizheng Xu; Xiaopeng Fan; Chong Luo; Wen Gao

    2016-04-01

    An uncoded transmission scheme called SoftCast has recently shown great potential for wireless video transmission. Unlike conventional approaches, SoftCast processes input images only by a series of transformations and modulates the coefficients directly to a dense constellation for transmission. The transmission is uncoded and lossy in nature, with its noise level commensurate with the channel condition. This paper presents a theoretical analysis of uncoded visual communication, focusing on developing quantitative measurements of the efficiency of the decorrelation transform in a generalized uncoded transmission framework. Our analysis reveals that the energy distribution among signal elements is critical for the efficiency of uncoded transmission. A decorrelation transform can potentially bring a significant performance gain by boosting the energy diversity in the signal representation. Numerical results on Markov random processes and real image and video signals are reported to evaluate the performance gain of using different transforms in uncoded transmission. The analysis presented in this paper is verified by simulated SoftCast transmissions and provides guidelines for designing efficient uncoded video transmission schemes. PMID:26930682
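
    The role of energy diversity can be illustrated numerically: under the standard optimal power-scaling result for linear uncoded transmission, distortion scales with the square of the summed root-energies of the transmitted elements, so a decorrelating transform such as the DCT yields a gain on correlated sources. A sketch on an assumed AR(1) source (not the paper's experiments):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

# AR(1) (Gauss-Markov) source with strong correlation, rho = 0.95
n, rho = 64, 0.95
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = dct_matrix(n)
lam = np.diag(C @ cov @ C.T)        # per-coefficient energies after the DCT

# distortion is proportional to (sum of sqrt-energies)^2, so the
# decorrelation gain is that quantity's ratio for identity vs DCT
ident = np.diag(cov)                # element energies without any transform
gain = np.sqrt(ident).sum() ** 2 / np.sqrt(lam).sum() ** 2
print(round(gain, 2))
```

    The DCT concentrates the source energy into few coefficients while the trace (total energy) is preserved, which is exactly the energy diversity the analysis identifies as the source of the transform gain.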

  16. A synchronized particle image velocimetry and infrared thermography technique applied to an acoustic streaming flow

    PubMed Central

    Sou, In Mei; Layman, Christopher N.; Ray, Chittaranjan

    2013-01-01

    Subsurface coherent structures and surface temperatures are investigated using simultaneous measurements of particle image velocimetry (PIV) and infrared (IR) thermography. Results for coherent structures from acoustic streaming and associated heating transfer in a rectangular tank with an acoustic horn mounted horizontally at the sidewall are presented. An observed vortex pair develops and propagates in the direction along the centerline of the horn. From the PIV velocity field data, distinct kinematic regions are found with the Lagrangian coherent structure (LCS) method. The implications of this analysis with respect to heat transfer and related sonochemical applications are discussed. PMID:24347810
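
    A basic kinematic quantity extracted from PIV velocity fields of this kind is the out-of-plane vorticity; a sketch on a synthetic solid-body vortex (illustrative only, not the LCS analysis used in the paper):

```python
import numpy as np

# synthetic "PIV" field: solid-body rotation, u = -y, v = x (assumed data)
y, x = np.mgrid[-1:1:64j, -1:1:64j]
u, v = -y, x

# out-of-plane vorticity omega_z = dv/dx - du/dy via central differences
dy = y[1, 0] - y[0, 0]
dx = x[0, 1] - x[0, 0]
dvdx = np.gradient(v, dx, axis=1)
dudy = np.gradient(u, dy, axis=0)
omega = dvdx - dudy
print(omega.mean())   # solid-body rotation has omega = 2 everywhere
```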

  17. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves

    PubMed Central

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J. R.; Krenner, Hubert J.; Wixforth, Achim; Salditt, Tim

    2014-01-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length). PMID:25294979

  19. Characterization of acoustic streaming and heating using synchronized infrared thermography and particle image velocimetry.

    PubMed

    Layman, Christopher N; Sou, In Mei; Bartak, Rico; Ray, Chittaranjan; Allen, John S

    2011-09-01

    Real-time measurements of acoustic streaming velocities and surface temperature fields using synchronized particle image velocimetry and infrared thermography are reported. Measurements were conducted using a 20 kHz Langevin type acoustic horn mounted vertically in a model sonochemical reactor of either degassed water or a glycerin-water mixture. These dissipative phenomena are found to be sensitive to small variations in the medium viscosity, and a correlation between the heat flux and vorticity was determined for unsteady convective heat transfer. PMID:21514205

  20. Target-acquisition performance in undersampled infrared imagers: static imagery to motion video.

    PubMed

    Krapels, Keith; Driggers, Ronald G; Teaney, Brian

    2005-11-20

    In this research we show that the target-acquisition performance of an undersampled imager improves with sensor or target motion. We provide an experiment designed to evaluate the improvement in observer performance as a function of target motion rate in the video. We created the target motion by mounting a thermal imager on a precision two-axis gimbal and varying the sensor motion rate from 0.25 to 1 instantaneous field of view per frame. A midwave thermal imager was used to permit short integration times and remove the effects of motion blur. It is shown that the human visual system performs a superresolution reconstruction that mitigates some aliasing and provides a higher (than static imagery) effective resolution. This process appears to be relatively independent of motion velocity. The results suggest that the benefits of superresolution reconstruction techniques as applied to imaging systems with motion may be limited. PMID:16318174
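
    The superresolution effect described above rests on sub-pixel-shifted frames jointly sampling the scene above any single frame's rate; a 1-D interleaving sketch of that idea (idealized: noiseless, known integer sub-sample offsets):

```python
import numpy as np

# a 1-D "scene" containing a component (40 cycles) that aliases when
# sampled at one quarter of the native rate
n = 256
t = np.arange(n)
scene = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 40 * t / n)

# four undersampled frames, each offset by one fine-grid sample
frames = [scene[off::4] for off in range(4)]

# shift-and-add reconstruction: place each frame back on the fine grid
recon = np.zeros(n)
for off, fr in enumerate(frames):
    recon[off::4] = fr

print(np.allclose(recon, scene))
```

    Any single frame is aliased, but the set of shifted frames together carries the full-rate information, which is the resource the visual (or algorithmic) superresolution reconstruction exploits.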

  1. Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography

    NASA Astrophysics Data System (ADS)

    Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori

    2014-02-01

    In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.

  3. Modern Techniques in Acoustical Signal and Image Processing

    SciTech Connect

    Candy, J V

    2002-04-04

Acoustical signal processing problems can lead to some complex and intricate techniques to extract the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy that is aimed at performing the processing required even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis or as complex as embedding a full-scale propagation model into the processor. The aims of both approaches are the same--to extract the desired information and reject the extraneous, that is, develop a signal processing scheme to achieve this goal. In this paper, we briefly discuss this underlying philosophy from a ''bottom-up'' approach enabling the problem to dictate the solution rather than vice versa.
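As a minimal example of the "transformation of the measured data to another domain" mentioned above (an illustration, not taken from the paper), a Fourier transform recovers a tone that noise obscures in the time domain:

```python
import numpy as np

# A sinusoid buried in noise is hard to see in the time domain but
# stands out as a spectral peak after a Fourier transform.
fs = 1000.0            # sample rate, Hz
t = np.arange(0, 1.0, 1/fs)
f0 = 73.0              # the "desired information": the tone frequency
rng = np.random.default_rng(1)
x = np.sin(2*np.pi*f0*t) + 1.5*rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1/fs)
f_est = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f_est)  # 73.0
```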

  4. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin F.; Summers, Christopher J.

    1993-01-01

This report covers: (1) invention of a new, ultra-low noise, low operating voltage APD which is expected to offer far better performance than the existing volume doped APD device; (2) performance of a comprehensive series of experiments on the acoustic and piezoelectric properties of ZnO films sputtered on GaAs which can possibly lead to a decrease in the required rf drive power for ACT devices by 15 dB; (3) development of an advanced, hydrodynamic, macroscopic simulator used for evaluating the performance of ACT and CTD devices and aiding in the development of the next generation of devices; (4) experimental development of CTD devices which utilize a p-doped top barrier demonstrating charge storage capacity and low leakage currents; (5) refinements in materials growth techniques and in situ controls to lower surface defect densities to record levels as well as increase material uniformity and quality.

  5. Characteristics of luminous structures in the stratosphere above thunderstorms as imaged by low-light video

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.

    1994-01-01

    An experiment was conducted in which an image-intensified, low-light video camera systematically monitored the stratosphere above distant (100-800 km) mesoscale convective systems over the high plains of the central U.S. for 21 nights between 6 July and 27 August 1993. Complex, luminous structures were observed above large thunderstorm clusters on eleven nights, with one storm system (7 July 1993) yielding 248 events in 410 minutes. Their duration ranged from 33 to 283 ms, with an average of 98 ms. The luminous structures, generally not visible to the naked, dark-adapted eye, exhibited on video a wide variety of brightness levels and shapes including streaks, aurora-like curtains, smudges, fountains and jets. The structures were often more than 10 km wide and their upper portions extended to above 50 km msl.

  6. Characteristics of luminous structures in the stratosphere above thunderstorms as imaged by low-light video

    SciTech Connect

Lyons, W.A. (Inc., Ft. Collins, CO)

    1994-05-15

    An experiment was conducted in which an image-intensified, low-light video camera systematically monitored the stratosphere above distant (100-800 km) mesoscale convective systems over the high plains of the central US for 21 nights between 6 July and 27 August 1993. Complex, luminous structures were observed above large thunderstorm clusters on eleven nights, with one storm system (7 July 1993) yielding 248 events in 410 minutes. Their duration ranged from 33 to 283 ms, with an average of 98 ms. The luminous structures, generally not visible to the naked, dark-adapted eye, exhibited on video a wide variety of brightness levels and shapes including streaks, aurora-like curtains, smudges, fountains and jets. The structures were often more than 10 km wide and their upper portions extended to above 50 km msl. 14 refs., 4 figs.

  7. Integrating Acoustic Imaging of Flow Regimes With Bathymetry: A Case Study, Main Endeavor Field

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Rona, P. A.; Jackson, D. R.; Jones, C. D.

    2003-12-01

A unified view of the seafloor and the hydrothermal flow regimes (plumes and diffuse flow) is constructed for three major vent clusters (Grotto, S&M, and Salut) in the Main Endeavour Field of the Endeavour Segment, Juan de Fuca Ridge. The Main Endeavour Field is one of RIDGE 2000's Integrated Study Sites. A variety of visualization techniques are used to reconstruct the plumes (3D) and the diffuse flow field (2D) based on our acoustic imaging data set (July 2000 cruise). Plumes are identified as volumes of high backscatter intensity (indicating high particulate content or sharp density contrasts due to temperature variations) that remained high intensity when successive acoustic pings were subtracted (indicating that the acoustic targets producing the backscatter were in motion). Areas of diffuse flow are detected using our acoustic scintillation technique (AST). For the Grotto vent region (where a new Doppler technique was used to estimate vertical velocities in the plume), we estimate the areal partitioning between black smoker and diffuse flow in terms of volume fluxes. The volumetric and areal regions, where plume and diffuse flow were imaged, are registered over the bathymetry and compared to geologic maps of each region. The resulting images provide a unified view of the seafloor by integrating hydrothermal flow with geology.
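The ping-subtraction criterion above (bright AND decorrelated between pings) can be sketched schematically. This is an illustrative reconstruction of the idea, not the authors' processing chain, and all thresholds are invented:

```python
import numpy as np

def plume_mask(pings, intensity_thresh, motion_thresh):
    """Flag voxels that are both bright and decorrelated between pings:
    high mean backscatter AND large ping-to-ping differences suggest
    moving scatterers (plume particles) rather than static seafloor."""
    pings = np.asarray(pings, dtype=float)
    bright = pings.mean(axis=0) > intensity_thresh
    moving = np.abs(np.diff(pings, axis=0)).mean(axis=0) > motion_thresh
    return bright & moving

# Toy field: a static bright rock and a fluctuating plume in a 6x6 scene.
rng = np.random.default_rng(2)
pings = np.full((5, 6, 6), 1.0)                  # 5 pings, quiet background
pings[:, 0:2, 0:2] = 10.0                        # rock: bright but steady
pings[:, 3:5, 3:5] = 10.0 + 3.0*rng.standard_normal((5, 2, 2))  # plume
mask = plume_mask(pings, intensity_thresh=5.0, motion_thresh=1.0)
print(mask[4, 4], mask[0, 0])  # plume voxel flagged, rock voxel rejected
```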

  8. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns

    PubMed Central

    Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang

    2014-01-01

    Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician’s time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
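One ingredient of the method, the Local Binary Pattern, is compact enough to illustrate. This sketch computes a basic 8-neighbour LBP code histogram for an image block; the full method additionally applies the LM filter bank and clusters responses into textons, which is omitted here:

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern: each interior pixel gets
    a byte whose bits record which neighbours are >= the centre pixel."""
    img = np.asarray(image, dtype=float)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offs = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def texture_histogram(image, bins=256):
    """Histogram of LBP codes: a texture feature for one image block."""
    h = np.bincount(lbp_8(image).ravel(), minlength=bins).astype(float)
    return h / h.sum()

# A flat patch yields only the all-ones code (every neighbour >= centre).
flat = np.ones((8, 8))
hist = texture_histogram(flat)
print(hist[255])  # 1.0
```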

  9. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns.

    PubMed

    Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C; Tang, Shou Jiang

    2014-11-20

    Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician's time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a "texton histogram" of an image block as features. The histogram captures the distribution of different "textons" representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723

  10. VISDTA: A video imaging system for detection, tracking, and assessment: Prototype development and concept demonstration

    SciTech Connect

    Pritchard, D.A.

    1987-05-01

It has been demonstrated that thermal imagers are an effective surveillance and assessment tool for security applications because: (1) they work day or night due to their sensitivity to thermal signatures; (2) penetrability through fog, rain, dust, etc., is better than that of the human eye; (3) short or long range operation is possible with various optics; and (4) they are strictly passive devices providing visible imagery which is readily interpreted by the operator with little training. Unfortunately, most thermal imagers also require the setup of a tripod, connection of batteries, cables, display, etc. When this is accomplished, the operator must manually move the camera back and forth searching for signs of aggressor activity. VISDTA is designed to provide automatic panning, and in a sense, ''watch'' the imagery in place of the operator. The idea behind the development of VISDTA is to provide a small, portable, rugged system to automatically scan areas and detect targets by computer processing of images. It would use a thermal imager and possibly an intensified day/night TV camera, a pan/tilt mount, and a computer for system control. If mounted on a dedicated vehicle or on a tower, VISDTA will perform video motion detection functions on incoming video imagery, and automatically scan predefined patterns in search of abnormal conditions which may indicate attempted intrusions into the field-of-regard. In that respect, VISDTA is capable of improving the ability of security forces to maintain security of a given area of interest by augmenting present techniques and reducing operator fatigue.
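The report does not specify the motion detection algorithm; a minimal frame-differencing sketch conveys the general idea (thresholds and sizes are illustrative, not from the report):

```python
import numpy as np

def motion_detected(prev, curr, diff_thresh=25, pixel_frac=0.01):
    """Flag a frame as containing motion when enough pixels changed by
    more than diff_thresh grey levels since the previous frame."""
    delta = np.abs(curr.astype(int) - prev.astype(int))
    changed = (delta > diff_thresh).mean()
    return changed > pixel_frac

# Static scene, then an intruder blob appears.
scene = np.full((120, 160), 80, dtype=np.uint8)
intruder = scene.copy()
intruder[40:70, 60:90] = 200     # a warm target in a thermal image
print(motion_detected(scene, scene))      # False
print(motion_detected(scene, intruder))   # True
```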

  11. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
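The topic-based publish/subscribe routing described in the messaging layer can be sketched in a few lines. This is a schematic illustration of the pattern, not the architecture's actual API:

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe: messages published to a
    topic are routed only to the callbacks subscribed to that topic."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)

# An "acquisition" module publishes frames; only the interested
# modules (here, a visualization stub) receive them.
bus = MessageBus()
received = []
bus.subscribe("frames/raw", received.append)
bus.publish("frames/raw", "frame-0")
bus.publish("stats/fps", 30)          # no subscriber: silently dropped
print(received)  # ['frame-0']
```

Topic strings act as the filter: publishers and subscribers are decoupled, which is what lets modules such as acquisition, visualization and processing be swapped or reused independently.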

  12. Screen-imaging guidance using a modified portable video macroscope for middle cerebral artery occlusion.

    PubMed

    Zhu, Xingbao; Luo, Junli; Liu, Yun; Chen, Guolong; Liu, Song; Ruan, Qiangjin; Deng, Xunding; Wang, Dianchun; Fan, Quanshui; Pan, Xinghua

    2012-04-25

The use of operating microscopes is limited by the focal length. Surgeons using these instruments cannot simultaneously view and access the surgical field and must choose one or the other. The longer focal length (more than 1,000 mm) of an operating telescope permits a position away from the operating field, above the surgeon and out of the field of view. This gives the telescope an advantage over an operating microscope. We developed a telescopic system using screen-imaging guidance and a modified portable video macroscope constructed from a Computar MLH-10× macro lens, a DFK-21AU04 USB CCD camera and a Dell laptop computer as monitor screen. This system was used to establish a middle cerebral artery occlusion model in rats. Results showed that magnification of the modified portable video macroscope was appropriate (5-20×) even though the Computar MLH-10× macro lens was placed 800 mm away from the operating field rather than at the specified working distance of 152.4 mm with a zoom of 1-40×. The screen-imaging telescopic technique was clear, life-like, stereoscopic and matched the actual operation. Screen-imaging guidance led to an accurate, smooth, minimally invasive and comparatively easy surgical procedure. Success rate of the model establishment evaluated by neurological function using the modified neurological score system was 74.07%. There was no significant difference in model establishment time, sensorimotor deficit and infarct volume percentage. Our findings indicate that the telescopic lens is effective in the screen surgical operation mode referred to as "long distance observation and short distance operation" and that screen-imaging guidance using a modified portable video macroscope can be utilized for the establishment of a middle cerebral artery occlusion model and micro-neurosurgery. PMID:25722675

  13. Phase Time and Envelope Time in Time-Distance Analysis and Acoustic Imaging

    NASA Technical Reports Server (NTRS)

Chou, Dean-Yi; Duvall, Thomas L.; Sun, Ming-Tsung; Chang, Hsiang-Kuang; Jimenez, Antonio; Rabello-Soares, Maria Cristina; Ai, Guoxiang; Wang, Gwo-Ping; Goode, Philip; Marquette, William; Ehgamberdiev, Shuhrat; Landenkov, Oleg

    1999-01-01

Time-distance analysis and acoustic imaging are two related techniques to probe the local properties of the solar interior. In this study, we discuss the relation of phase time and envelope time between the two techniques. The location of the envelope peak of the cross correlation function in time-distance analysis is identified as the travel time of the wave packet formed by modes with the same ω/ℓ. The phase time of the cross correlation function provides information on the phase change accumulated along the wave path, including the phase change at the boundaries of the mode cavity. The acoustic signals constructed with the technique of acoustic imaging contain both phase and intensity information. The phase of constructed signals can be studied by computing the cross correlation function between time series constructed with ingoing and outgoing waves. In this study, we use the data taken with the Taiwan Oscillation Network (TON) instrument and the Michelson Doppler Imager (MDI) instrument. The analysis is carried out for the quiet Sun. We use the relation of envelope time versus distance measured in time-distance analyses to construct the acoustic signals in acoustic imaging analyses. The phase time of the cross correlation function of constructed ingoing and outgoing time series is twice the difference between the phase time and envelope time in time-distance analyses, as predicted. The envelope peak of the cross correlation function between constructed ingoing and outgoing time series is located at zero time, as predicted, for results of one-bounce at 3 mHz for all four data sets and two-bounce at 3 mHz for two TON data sets, but it is different from zero for other cases. The cause of the deviation of the envelope peak from zero is not known.

  14. Numerical Simulation of Target Range Estimation Using Ambient Noise Imaging with Acoustic Lens

    NASA Astrophysics Data System (ADS)

Mori, Kazuyoshi; Ogasawara, Hanako; Nakamura, Toshiaki; Tsuchiya, Takenobu; Endoh, Nobuyuki

    2010-07-01

    In ambient noise imaging (ANI), each pixel of a target image is mapped by either monochrome or pseudo color to represent its acoustic intensity in each direction. This intensity is obtained by measuring the target object’s reflecting or scattering wave, with ocean background noise serving as the sound source. In the case of using an acoustic lens, the ANI system creates a C-mode-like image, where receivers are arranged on a focal plane and each pixel’s color corresponds to the intensity of each receiver output. There is no consideration for estimating a target range by this method, because it is impossible to measure the traveling time between a transducer and a target by a method like an active imaging sonar. In this study, we tried to estimate a target range using the ANI system with an acoustic lens. Here, we conducted a numerical simulation of sound propagation based on the principle of the time reversal mirror. First, instead of actual ocean measurements in the forward propagation, we calculated the scattering wave from a rigid target object in an acoustic noise field generated by a large number of point sources using the two-dimensional (2D) finite difference time domain (FDTD) method. The time series of the scattering wave converged by the lens was then recorded on each receiver. The sound pressure distribution assuming that the time-reversed wave of the scattering wave was reradiated from each receiver position was also calculated using the 2D FDTD method in the backward propagation. It was possible to estimate a target range using the ANI system with an acoustic lens, because the maximum position of the reradiated sound pressure field was close to the target position.

  15. Numerical Simulation of Target Range Estimation Using Ambient Noise Imaging with Acoustic Lens

    NASA Astrophysics Data System (ADS)

    Mori, Kazuyoshi; Ogasawara, Hanako; Nakamura, Toshiaki; Tsuchiya, Takenobu; Endoh, Nobuyuki

    2010-07-01

    In ambient noise imaging (ANI), each pixel of a target image is mapped by either monochrome or pseudo color to represent its acoustic intensity in each direction. This intensity is obtained by measuring the target object's reflecting or scattering wave, with ocean background noise serving as the sound source. In the case of using an acoustic lens, the ANI system creates a C-mode-like image, where receivers are arranged on a focal plane and each pixel's color corresponds to the intensity of each receiver output. There is no consideration for estimating a target range by this method, because it is impossible to measure the traveling time between a transducer and a target by a method like an active imaging sonar. In this study, we tried to estimate a target range using the ANI system with an acoustic lens. Here, we conducted a numerical simulation of sound propagation based on the principle of the time reversal mirror. First, instead of actual ocean measurements in the forward propagation, we calculated the scattering wave from a rigid target object in an acoustic noise field generated by a large number of point sources using the two-dimensional (2D) finite difference time domain (FDTD) method. The time series of the scattering wave converged by the lens was then recorded on each receiver. The sound pressure distribution assuming that the time-reversed wave of the scattering wave was reradiated from each receiver position was also calculated using the 2D FDTD method in the backward propagation. It was possible to estimate a target range using the ANI system with an acoustic lens, because the maximum position of the reradiated sound pressure field was close to the target position.

  16. The Effects of Nonlinear Propagation on Acoustic Source Imaging in One-Dimension

    NASA Astrophysics Data System (ADS)

    Shepherd, Micah; Gee, Kent L.

    2006-10-01

    The acoustics of finite-amplitude (nonlinear) sound sources, such as rockets and jets, are not well understood. Characterization of sound pressure amplitudes, aeroacoustic source locations and frequency dependence of these sources is needed to assess the impact of the acoustic field on the launch equipment and surrounding environment. Nonlinear propagation of high-amplitude sound is being studied to determine if a source-imaging method called near-field acoustical holography (NAH), which is based on linear assumptions, can be used to estimate the source information mentioned. A one-dimensional numerical algorithm is being used to linearly and nonlinearly propagate the radiation from a monofrequency source. NAH is used to reconstruct the source information from the simulated data and the error is determined in decibels.

  17. Schlieren imaging of the standing wave field in an ultrasonic acoustic levitator

    NASA Astrophysics Data System (ADS)

    Rendon, Pablo Luis; Boullosa, Ricardo R.; Echeverria, Carlos; Porta, David

    2015-11-01

We consider a model of a single axis acoustic levitator consisting of two cylinders immersed in air and directed along the same axis. The first cylinder has a flat termination and functions as a sound emitter, and the second cylinder, which is simply a reflector, has the side facing the first cylinder cut out by a spherical surface. By making the first cylinder vibrate at ultrasonic frequencies, a standing wave is produced in the air between the cylinders which makes it possible, by means of the acoustic radiation pressure, to levitate one or several small objects of different shapes, such as spheres or disks. We use schlieren imaging to observe the acoustic field resulting from the levitation of one or several objects, and compare these results to previous numerical approximations of the field obtained using a finite element method. The authors acknowledge financial support from DGAPA-UNAM through project PAPIIT IN109214.

  18. Imaging of transient surface acoustic waves by full-field photorefractive interferometry

    SciTech Connect

    Xiong, Jichuan; Xu, Xiaodong E-mail: christ.glorieux@fys.kuleuven.be; Glorieux, Christ E-mail: christ.glorieux@fys.kuleuven.be; Matsuda, Osamu; Cheng, Liping

    2015-05-15

A stroboscopic full-field imaging technique based on photorefractive interferometry for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera is presented. The photorefractive buildup of the space charge field during and after probe laser pulses is simulated numerically. The resulting anisotropic diffraction upon the refractive index grating and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam are modeled theoretically. The method is experimentally demonstrated by full-field imaging of the propagation of photoacoustically generated surface acoustic waves (SAWs) with a temporal resolution of nanoseconds. The surface acoustic wave propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser yields the capability of imaging SAWs with frequencies up to 60 MHz.

  19. Acoustic imaging with time reversal methods: From medicine to NDT

    NASA Astrophysics Data System (ADS)

    Fink, Mathias

    2015-03-01

This talk will present an overview of the research conducted on ultrasonic time-reversal methods applied to biomedical imaging and to non-destructive testing. We will first describe iterative time-reversal techniques that allow focusing ultrasonic waves both on reflectors in tissues (kidney stones, micro-calcifications, contrast agents) and on flaws in solid materials. We will also show that time-reversal focusing does not need the presence of bright reflectors: it can be achieved from the speckle noise generated by random distributions of non-resolved scatterers alone. We will describe the applications of this concept to correct distortions and aberrations in ultrasonic imaging and in NDT. In the second part of the talk we will describe the concept of time-reversal processors to obtain ultrafast ultrasonic images with typical frame rates on the order of 10,000 frames/s. This is the field of ultrafast ultrasonic imaging, which has many medical applications and can be of great interest in NDT. We will describe some applications in the biomedical domain: quantitative elasticity imaging of tissues by following shear wave propagation to improve cancer detection, and ultrafast Doppler imaging that allows ultrasonic functional imaging.
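Time-reversal focusing can be illustrated with a toy multipath channel: re-emitting the time-reversed recording back through the same channel yields the channel autocorrelation, so every path realigns into one strong peak. A schematic sketch with invented channel taps, not any of the systems described above:

```python
import numpy as np

# Multipath channel: several delayed, attenuated arrivals (invented taps).
h = np.zeros(64)
for delay, amp in [(5, 1.0), (17, 0.6), (31, -0.4)]:
    h[delay] = amp

pulse = np.array([1.0])
received = np.convolve(pulse, h)        # what a receiver records

# Re-emit the time-reversed recording through the SAME channel: the
# output is the channel autocorrelation, so all paths realign into one
# strong peak at a fixed focus time, whatever the multipath structure.
refocused = np.convolve(received[::-1], h)
peak = int(np.argmax(np.abs(refocused)))
print(peak == len(h) - 1, round(refocused[peak], 2))  # True 1.52
```

The peak height is the sum of squared tap amplitudes (1.0² + 0.6² + 0.4² = 1.52): the channel acts as its own matched filter, which is why no bright reflector or prior channel model is needed.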

  20. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

We have designed and implemented a novel acoustic-lens-based focusing technology into a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume get focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware systems, is faster compared to electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens with medium properties defined on each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.

  1. Vibro-acoustography: An imaging modality based on ultrasound-stimulated acoustic emission

    PubMed Central

    Fatemi, Mostafa; Greenleaf, James F.

    1999-01-01

    We describe theoretical principles of an imaging modality that uses the acoustic response of an object to a highly localized dynamic radiation force of an ultrasound field. In this method, named ultrasound-stimulated vibro-acoustography (USVA), ultrasound is used to exert a low-frequency (in kHz range) force on the object. In response, a portion of the object vibrates sinusoidally in a pattern determined by its viscoelastic properties. The acoustic emission field resulting from object vibration is detected and used to form an image that represents both the ultrasonic and low-frequency (kHz range) mechanical characteristics of the object. We report the relation between the emitted acoustic field and the incident ultrasonic pressure field in terms of object parameters. Also, we present the point-spread function of the imaging system. The experimental images in this report have a resolution of about 700 μm, high contrast, and high signal-to-noise ratio. USVA is sensitive enough to detect object motions on the order of nanometers. Possible applications include medical imaging and material evaluation. PMID:10359758

  2. Multifrequency microwave-induced thermal acoustic imaging for breast cancer detection.

    PubMed

    Guo, Bin; Li, Jian; Zmuda, Henry; Sheplak, Mark

    2007-11-01

    Microwave-induced thermal acoustic imaging (TAI) is a promising early breast cancer detection technique, which combines the advantages of microwave stimulation and ultrasound imaging and offers a high imaging contrast, as well as high spatial resolution at the same time. A new multifrequency microwave-induced thermal acoustic imaging scheme for early breast cancer detection is proposed in this paper. Significantly more information about the human breast can be gathered using multiple frequency microwave stimulation. A multifrequency adaptive and robust technique (MART) is presented for image formation. Due to its data-adaptive nature, MART can achieve better resolution and better interference rejection capability than its data-independent counterparts, such as the delay-and-sum method. The effectiveness of this procedure is shown by several numerical examples based on 2-D breast models. The finite-difference time-domain method is used to simulate the electromagnetic field distribution, the absorbed microwave energy density, and the thermal acoustic field in the breast model. PMID:18018695
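MART itself is data-adaptive, but its data-independent counterpart named above, delay-and-sum, is easy to sketch. An illustrative 2-D example with invented geometry and an idealized point-target response (a single spike at the acoustic time of flight):

```python
import numpy as np

def delay_and_sum(traces, sensors, pixels, c, fs):
    """Data-independent image formation: for each candidate pixel, sum
    every sensor trace at the sample matching the acoustic time of
    flight from that pixel to that sensor."""
    img = np.zeros(len(pixels))
    for i, p in enumerate(pixels):
        for trace, s in zip(traces, sensors):
            k = int(round(np.linalg.norm(p - s) / c * fs))
            if k < trace.size:
                img[i] += trace[k]
    return img

# One ideal point absorber at (0.02, 0.03) m inside a ring of sensors.
c, fs = 1500.0, 2e6                       # sound speed (m/s), sample rate
src = np.array([0.02, 0.03])
angles = np.linspace(0, 2*np.pi, 16, endpoint=False)
sensors = 0.05 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
traces = []
for s in sensors:
    tr = np.zeros(400)                    # idealized spike at the TOF
    tr[int(round(np.linalg.norm(src - s) / c * fs))] = 1.0
    traces.append(tr)

pixels = [src, np.array([-0.02, -0.01])]  # the source and an empty spot
img = delay_and_sum(traces, sensors, pixels, c, fs)
print(img[0])  # 16.0: all 16 sensors align at the true source pixel
```

Because the summation weights are fixed in advance, sidelobes from interfering sources leak into every pixel; the adaptive weighting in methods like MART is what buys the improved interference rejection the abstract claims.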

  3. Acoustical and optical scattering and imaging of tissues: an overview

    NASA Astrophysics Data System (ADS)

    Ishimaru, Akira

    2001-05-01

    This talk will first give a general discussion on the ultrasound media characteristics of blood and spectral densities of tissues. The first-order scattering theory, multiple scattering theory, Doppler spectrum, cw and pulse scattering, focused beam, beam spot-size, speckle, texture, and rough interface effects will be presented. Imaging through tissues will then be discussed in terms of temporal and spatial resolutions, contrast, MTF (modulation transfer function), SAR and confocal imaging techniques, tomographic and holographic imaging, and inverse scattering. Next, we discuss optical diffusion in blood and tissues, radiative transfer theory, photon density waves, and polarization effects.

  4. Measurement of acoustic velocity in the stack of a thermoacoustic refrigerator using particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Berson, Arganthaël; Michard, Marc; Blanc-Benon, Philippe

    2008-06-01

    Thermoacoustic refrigeration systems generate cooling power from a high-amplitude acoustic standing wave. There has recently been a growing interest in this technology because of its simple and robust architecture and its use of environmentally safe gases. With the prospect of commercialization, it is necessary to enhance the efficiency of thermoacoustic cooling systems and more particularly of some of their components such as the heat exchangers. The characterization of the flow field at the end of the stack plates is a crucial step for the understanding and optimization of heat transfer between the stack and the heat exchangers. In this study, a specific particle image velocimetry measurement is performed inside a thermoacoustic refrigerator. Acoustic velocity is measured using synchronization and phase-averaging. The measurement method is validated inside a void resonator by successfully comparing experimental data with an acoustic plane wave model. Velocity is measured inside the oscillating boundary layers, between the plates of the stack, and compared to a linear model. The flow behind the stack is characterized, and it shows the generation of symmetric pairs of counter-rotating vortices at the end of the stack plates at low acoustic pressure level. As the acoustic pressure level increases, detachment of the vortices and symmetry breaking are observed.
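The synchronized phase-averaging used to extract the acoustic velocity can be sketched generically: sort noisy snapshots by their phase within the acoustic cycle and average within each bin. An illustrative example with invented parameters, not the authors' PIV processing:

```python
import numpy as np

def phase_average(times, samples, freq, nbins=8):
    """Sort measurements into phase bins of the acoustic cycle and
    average within each bin, suppressing uncorrelated noise."""
    phase = (times * freq) % 1.0               # cycle fraction in [0, 1)
    bins = (phase * nbins).astype(int)
    return np.array([samples[bins == b].mean() for b in range(nbins)])

# Noisy snapshots of an acoustic velocity u(t) = U0 sin(2*pi*f*t).
rng = np.random.default_rng(3)
f, U0 = 200.0, 1.5                             # Hz, m/s
times = rng.uniform(0, 1, 4000)                # random acquisition instants
u = U0 * np.sin(2*np.pi*f*times) + 0.5*rng.standard_normal(4000)
avg = phase_average(times, u, f)
print(avg.round(2))  # recovers the cycle: peaks near +1.35 and -1.35
```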

  5. Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images

    NASA Astrophysics Data System (ADS)

    Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.

    1982-11-01

    This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. 
This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a

  6. The research on binocular stereo video imaging and display system based on low-light CMOS

    NASA Astrophysics Data System (ADS)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

    Low-light night-vision helmets are commonly equipped with binocular viewers based on image intensifiers. Such equipment provides not only night-vision capability but also stereo vision, enabling better perception and understanding of the visual field. However, since the image intensifier is a direct-view device, modern image processing technology is difficult to apply, so developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a pair of low-illumination CMOS cameras, a binocular OLED microdisplay, and an image-processing PCB. Stereopsis is achieved through the binocular OLED microdisplay. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image-matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive the constraints of binocular stereo display in detail. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED microdisplay. There is sufficient room for extending the system's functions, and its performance can be further enhanced in combination with HDR technology, image fusion technology, etc.

  7. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.
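    In the small-motion limit, decomposing an observed motion-flow field into basis flow fields on the tangent space is a linear least-squares projection. A minimal sketch with randomly generated (hypothetical) basis fields, not the paper's learned manifold:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: k basis motion-flow fields (one per pose dimension),
# each flattened to a vector; an observed flow is a weighted combination of
# these plus noise. Recovering the weights is ordinary least squares.
n_pix, k = 500, 3                    # flow-vector entries, pose dimensions
B = rng.normal(size=(n_pix, k))      # columns: basis flow fields (illustrative)
w_true = np.array([0.8, -0.3, 1.5])  # pose-parameter increments to recover
v_obs = B @ w_true + rng.normal(0, 0.05, n_pix)

# Project the observed flow onto the tangent-space basis.
w_est, *_ = np.linalg.lstsq(B, v_obs, rcond=None)
print("recovered weights:", np.round(w_est, 2))
```

The recovered weights are the per-dimension pose updates; in the paper's setting they would drive the traversal of the pose manifold frame by frame.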

  8. Methods And Systems For Using Reference Images In Acoustic Image Processing

    DOEpatents

    Moore, Thomas L.; Barter, Robert Henry

    2005-01-04

    A method and system of examining tissue are provided in which a field, including at least a portion of the tissue and one or more registration fiducials, is insonified. Scattered acoustic information, including both transmitted and reflected waves, is received from the field. A representation of the field, including both the tissue and the registration fiducials, is then derived from the received acoustic radiation.

  9. Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging

    SciTech Connect

    Pinton, Gianmarco

    2015-10-28

    Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency-dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that reduce the physics to systems with low computational cost. Here a maximalist approach is taken and the full three-dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations of the nonlinear acoustic wave equation can be used to generate physically realistic two- and three-dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is simulated for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs, but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation. 
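    The full 3-D nonlinear solver is far beyond a snippet, but the core finite-difference idea can be illustrated in 1-D for the linear, lossless wave equation. The grid, sound speed, and pulse below are illustrative, and this sketch omits the nonlinearity, attenuation, and heterogeneity handled by the paper's solver:

```python
import numpy as np

# 1-D linear acoustic wave equation u_tt = c^2 u_xx, explicit leapfrog
# finite differences -- the discretization idea that full-wave solvers
# build on (illustrative sketch, not the paper's solver).
c = 1540.0                 # m/s, soft-tissue sound speed
nx, dx = 400, 1e-4         # grid points and spacing (0.1 mm)
CFL = 0.9                  # Courant number < 1 for stability
dt = CFL * dx / c

x = np.arange(nx) * dx
u_prev = np.exp(-((x - 0.01) / 1e-3)**2)   # Gaussian pulse initial condition
u = u_prev.copy()                           # zero initial velocity

for _ in range(200):
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2*u[1:-1] + u[2:]  # second spatial difference
    u_next = 2*u - u_prev + (c*dt/dx)**2 * lap
    u_next[0] = u_next[-1] = 0.0            # fixed (Dirichlet) boundaries
    u_prev, u = u, u_next

# With CFL < 1 the scheme is stable: the pulse splits into two
# counter-propagating halves without blowing up.
peak = float(np.abs(u).max())
print("max |u| after 200 steps:", peak)
```

The 3-D nonlinear case adds the convective and material nonlinearity terms and relaxation-based attenuation, but the stencil-update structure is the same.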
It is shown that even though simulating the full propagation physics is computationally challenging it

  10. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties, and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameter measurements, allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. 
This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self

  11. Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Pinton, Gianmarco

    2015-10-01

    Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency-dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that reduce the physics to systems with low computational cost. Here a maximalist approach is taken and the full three-dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations of the nonlinear acoustic wave equation can be used to generate physically realistic two- and three-dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is simulated for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs, but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation. 
It is shown that even though simulating the full propagation physics is computationally challenging it

  12. Focused acoustic beam imaging of grain structure and local Young's modulus with Rayleigh and surface skimming longitudinal waves

    SciTech Connect

    Martin, R. W.; Sathish, S.; Blodgett, M. P.

    2013-01-25

    The interaction of a focused acoustic beam with materials generates Rayleigh surface waves (RSW) and surface skimming longitudinal waves (SSLW). Acoustic microscopy investigations have used RSW amplitude and velocity measurements extensively for grain structure analysis. Although the presence of SSLW has been recognized, it is rarely used in acoustic imaging. This paper presents an approach to microstructure imaging and local elastic modulus measurement that combines both RSW and SSLW. Acoustic imaging of the grain structure was performed by measuring the amplitudes of the RSW and SSLW signals. The microstructure images obtained on the same region of the samples with RSW and SSLW are compared, and the differences in contrast are discussed based on the propagation characteristics of the individual surface waves. Velocities are determined by a two-point defocus method. The RSW and SSLW velocities of the same regions of the sample are combined and presented as an average Young's modulus image.
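    The abstract does not state how the two wave speeds are combined into Young's modulus; one standard route (assumed here for illustration) treats the SSLW speed as the longitudinal speed, uses the Viktorov approximation for the Rayleigh speed, solves for Poisson's ratio from the velocity ratio, and then evaluates E:

```python
# Estimate Poisson's ratio and Young's modulus from a measured Rayleigh-wave
# speed v_r and longitudinal (SSLW) speed v_l, via the Viktorov approximation
# v_r/v_s ~ (0.862 + 1.14*nu)/(1 + nu). This combination scheme is a standard
# one, assumed here rather than taken from the paper.

def ratio(nu):
    """Model v_r / v_l as a function of Poisson's ratio nu."""
    vs_over_vl = ((1 - 2*nu) / (2 - 2*nu)) ** 0.5   # isotropic elasticity
    vr_over_vs = (0.862 + 1.14*nu) / (1 + nu)        # Viktorov approximation
    return vs_over_vl * vr_over_vs

def youngs_modulus(v_r, v_l, rho):
    # ratio(nu) decreases monotonically on (0, 0.5): bisect for nu.
    target = v_r / v_l
    lo, hi = 0.0, 0.499
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ratio(mid) > target:
            lo = mid
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    v_s = v_r * (1 + nu) / (0.862 + 1.14*nu)   # shear speed from Viktorov
    E = 2 * rho * (1 + nu) * v_s**2            # E = 2*rho*(1+nu)*v_s^2
    return E, nu

# Steel-like sanity check: v_l ~ 5900 m/s, v_r ~ 2960 m/s, rho ~ 7850 kg/m^3.
E, nu = youngs_modulus(2960.0, 5900.0, 7850.0)
print(f"nu ~ {nu:.3f}, E ~ {E/1e9:.0f} GPa")
```

Applied pixel by pixel to the two velocity maps, this kind of inversion yields a modulus image of the scanned region.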

  13. Quantitative high-resolution acoustic imaging of the seafloor

    NASA Astrophysics Data System (ADS)

    Holland, C. W.; Dettmer, J.; Steininger, G.; Dosso, S. E.

    2013-12-01

    Quantifying the properties of the seafloor interface and near surface (a few tens of meters) is of considerable interest to science as well as industry. Scales of interest range from the order of tens of kilometers (survey size) down to less than a centimeter. These scales can be probed using an AUV equipped with a broadband source and a short streamer. The data are processed for energy (rather than peak) reflection coefficients and scattering cross-section versus bi-static angle. In order to tackle spatial scales ranging over 8 orders of magnitude, it is useful to divide the parameter space into deterministic and stochastic parameters. The energy reflection coefficients contain information on deterministic properties including sound speed, density, and attenuation versus depth in the upper tens of meters of sediment. Vertical resolution is a function of depth, but is typically of order 0.1 m near the surface. The statistical properties of the smaller scales, i.e., seafloor roughness and/or volume heterogeneities, are obtained from the bi-static scattering data. Physics-based models are used to relate the sediment microstructure (the Buckingham model) and sediment fluctuations (the von Karman spectrum) to the acoustic observables. Quantitative parameter and inter-parameter uncertainties are obtained from Bayesian methods for both deterministic and stochastic parameters.
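    The simplest forward model linking sediment sound speed and density to the reflection observable is the plane-wave (Rayleigh) reflection coefficient at a single fluid-fluid interface. This is an illustrative single-interface sketch, not the layered, broadband model used in the study:

```python
import numpy as np

# Plane-wave reflection coefficient at a fluid-fluid interface:
# R = (m*cos(th) - sqrt(n^2 - sin^2(th))) / (m*cos(th) + sqrt(n^2 - sin^2(th)))
# with m = rho2/rho1 and n = c1/c2. Complex sqrt handles angles beyond critical.
def rayleigh_refl(theta_deg, rho1, c1, rho2, c2):
    th = np.radians(theta_deg)              # angle from normal in medium 1
    m = rho2 / rho1
    n = c1 / c2
    vert = np.sqrt(complex(n**2 - np.sin(th)**2))
    return (m*np.cos(th) - vert) / (m*np.cos(th) + vert)

# Water over a sandy sediment (illustrative values).
rho1, c1 = 1000.0, 1500.0
rho2, c2 = 1900.0, 1650.0
R0 = rayleigh_refl(0.0, rho1, c1, rho2, c2)

# At normal incidence this reduces to the impedance-contrast formula.
Z1, Z2 = rho1*c1, rho2*c2
print("normal-incidence R:", R0.real, "impedance formula:", (Z2 - Z1)/(Z2 + Z1))
```

Inverting such a forward model for (rho2, c2) from measured reflection data, with depth-dependent layering and attenuation added, is the deterministic half of the Bayesian inversion described above.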

  14. Analysis and verification of dominant factor to obtain the high resolution photo-acoustic imaging

    NASA Astrophysics Data System (ADS)

    Hirasawa, T.; Ishihara, M.; Kitagaki, M.; Bansaku, I.; Fujita, M.; Kikuchi, M.

    2011-03-01

    Our goal is to develop a photo-acoustic imaging (PAI) system which offers functional images of living tissues and organs with high resolution. In order to obtain high-resolution images, we implemented a Fourier-transform reconstruction algorithm which determines the optical absorption distribution from photo-acoustic (PA) signals. However, the resolution of the reconstructed images was restricted by the sensor directionality, the finite scan width, and the frequency bandwidth, so optimizing the sensor specification is essential. In this study, we demonstrated the relationship between image resolution and sensor specification by simulation and experiment. In our experimental system, PA signals were acquired by line scanning of our fabricated P(VDF/TrFE) film sensor. Simulations and experiments showed that the lateral resolution of PA images was restricted by the directionality of the sensor; furthermore, limiting the scan width and frequency bandwidth decreased the lateral resolution in deep regions. The optimum sensor specification depends on the imaging region because of several trade-offs: for example, a sensor with wider directionality has less sensitivity, and a wider scan at the same step size increases acquisition time. The results therefore indicate the possibility of optimizing sensor directionality, scan width, and frequency bandwidth for various depths and volumes of the imaging region.

  15. Synthetic aperture acoustic imaging of canonical targets with a 2-15 kHz linear FM chirp

    NASA Astrophysics Data System (ADS)

    Vignola, Joseph F.; Judge, John A.; Good, Chelsea E.; Bishop, Steven S.; Gugino, Peter M.; Soumekh, Mehrdad

    2011-06-01

    Synthetic aperture image reconstruction applied to outdoor acoustic recordings is presented. Acoustic imaging is an alternative method with several militarily relevant advantages: it is immune to RF jamming, offers superior spatial resolution, is capable of standoff side-looking and forward-looking scanning, and has relatively low cost, weight, and size compared to 0.5-3 GHz ground penetrating radar technologies. Synthetic aperture acoustic imaging is similar to synthetic aperture radar, but more akin to synthetic aperture sonar owing to the longitudinal (compressive) wave propagation in the surrounding acoustic medium. The system's transceiver is a quasi-monostatic microphone and audio speaker pair mounted on a rail 5 meters in length. The received data sampling rate is 80 kHz, with a 2-15 kHz linear frequency modulated (LFM) chirp, a pulse repetition frequency (PRF) of 10 Hz, and an inter-pulse period (IPP) of 50 milliseconds. Targets are positioned within the acoustic scene at slant ranges of two to ten meters on grass, dirt, or gravel surfaces, with and without intervening metallic chain-link fencing. Acoustic image reconstruction provides a means for literal interpretation and quantifiable analysis. A rudimentary technique characterizes acoustic scatter at the ground surfaces. Targets within the acoustic scene are first digitally spotlighted and then further processed, providing frequency- and aspect-angle-dependent signature information.
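    Range recovery with an LFM chirp rests on matched-filter pulse compression. The sketch below uses the stated 2-15 kHz chirp and 80 kHz sampling rate; the chirp duration, sound speed, and target range are illustrative assumptions:

```python
import numpy as np

# Matched-filter pulse compression for a 2-15 kHz LFM chirp sampled at
# 80 kHz (parameters from the abstract; echo delay below is illustrative).
fs = 80_000.0
T = 0.02                              # 20 ms chirp length (assumed)
t = np.arange(int(T * fs)) / fs
f0, f1 = 2000.0, 15000.0
chirp = np.sin(2*np.pi*(f0*t + 0.5*(f1 - f0)/T * t**2))

# Simulated echo: attenuated chirp delayed by a target at ~3 m slant range
# (two-way travel at c = 343 m/s).
c = 343.0
delay_n = int(round(2*3.0/c * fs))
echo = np.zeros(delay_n + chirp.size)
echo[delay_n:] = 0.5 * chirp

# Cross-correlate the echo with the transmitted replica; the peak lag gives
# the round-trip delay, hence the range.
corr = np.correlate(echo, chirp, mode="full")
lag = int(np.argmax(np.abs(corr))) - (chirp.size - 1)
range_est = 0.5 * c * lag / fs
print(f"estimated range: {range_est:.2f} m")
```

The compressed pulse width is roughly 1/bandwidth (about 77 microseconds for a 13 kHz chirp), which is what gives the system its range resolution of a few centimeters.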

  16. An image reconstruction for Capella with the Steward Observatory/AFGL intensified video speckle interferometry system

    NASA Astrophysics Data System (ADS)

    Cocke, W. J.; Hege, E. K.; Hubbard, E. N.; Strittmatter, P. A.; Worden, S. P.

    Since their invention in 1970, speckle interferometric techniques have evolved from simple optical processing of photographic images to high-speed digital processing of quantum-limited video data. Basic speckle interferometric techniques are discussed, taking into account the implementation of two distinct data-recording/data-processing modes. A description of image reconstruction techniques is also provided. Two methods for image phase retrieval have been implemented: a phase unwrapping method developed by Cocke (1980) and the phase accumulation method of Knox and Thompson (1974). On February 3, 1981, analogue-mode speckle interferograms for Capella and the unresolved star Gamma Ori were obtained and reconstructed with both the phase-unwrapping and the Knox-Thompson methods.

  17. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    DOEpatents

    Saveliev, Alexei V.; Zelepouga, Serguei A.; Rue, David M.

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
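    Deriving temperature from intensity ratios of color channels is, in essence, two-color (ratio) pyrometry. The graybody/Wien sketch below is an illustrative reading of the approach, with assumed effective wavelengths, not the patent's calibration:

```python
import math

# Ratio pyrometry under the Wien approximation with a graybody assumption:
# I(lam, T) ~ eps * lam**-5 * exp(-C2/(lam*T)). The ratio of two color
# channels cancels the (unknown) emissivity, so temperature follows from
# the ratio alone. Effective wavelengths are illustrative assumptions.
C2 = 1.4388e-2                       # second radiation constant, m*K
lam_r, lam_g = 650e-9, 550e-9        # nominal red/green effective wavelengths

def wien_intensity(lam, T, eps=0.8):
    return eps * lam**-5 * math.exp(-C2 / (lam * T))

def temperature_from_ratio(I_r, I_g):
    # Invert ln(I_r/I_g) = 5*ln(lam_g/lam_r) + (C2/T)*(1/lam_g - 1/lam_r).
    lhs = math.log(I_r / I_g) - 5*math.log(lam_g / lam_r)
    return C2 * (1/lam_g - 1/lam_r) / lhs

T_true = 1800.0                      # K, a plausible refractory temperature
T_est = temperature_from_ratio(wien_intensity(lam_r, T_true),
                               wien_intensity(lam_g, T_true))
print(f"recovered T: {T_est:.1f} K")
```

Because the emissivity cancels in the ratio, the same pixel array can deliver both a video image (raw intensities) and a thermal image (per-pixel ratio temperatures).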

  18. Video-Mosaicing of Reflectance Confocal Images For Rapid Examination of Large Areas of Skin In Vivo

    PubMed Central

    Kose, Kivanc; Cordova, Miguel; Duffy, Megan; Flores, Eileen S.; Brooks, Dana H.; Rajadhyaksha, Milind

    2015-01-01

    Background With reflectance confocal microscopy (RCM) imaging, skin cancers can be diagnosed in vivo and margins detected to guide treatment. Since the field of view of an RCM image is much smaller than the typical size of lesions, mosaicing approaches have been developed to display larger areas of skin. However, the current paradigm for RCM mosaicing in vivo is limited both in speed and to pre-selected rectangular-shaped small areas. Another approach, called “video-mosaicing,” enables higher speeds and real-time operator-selected areas of any size and shape, and will be more useful for RCM examination of skin in vivo. Objectives To demonstrate the feasibility and clinical potential of video-mosaicing of RCM images to rapidly display large areas of skin in vivo. Methods Thirteen videos of benign lesions, melanocytic cancers and residual basal cell carcinoma margins were collected on volunteer subjects with a handheld RCM scanner. The images from each video were processed and stitched into mosaics to display the entire area that was imaged. Results Acquisition of RCM videos covering 5.0–16.0 mm2 was performed in 20–60 seconds. The video-mosaics were visually determined to be of high quality for resolution, contrast and seamless contiguity, and the appearance of cellular-level and morphologic detail. Conclusion Video-mosaicing confocal microscopy, with real-time operator-choice of the shape and size of the area to be imaged, will enable rapid examination of large areas of skin in vivo. This approach may further advance noninvasive detection of skin cancer and, eventually, facilitate wider adoption of RCM imaging in the clinic. PMID:24720744

  19. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2001-01-31

    During this phase of the project the research team concentrated on acquisition of acoustic emission (AE) data from high-porosity rock samples. The initial experiments indicated that the acoustic emissions from the high-porosity Danian chalk were of very low amplitude: even though the sample underwent yielding and significant plastic deformation, it did not generate significant AE activity. This was somewhat surprising, and these initial results call into question the validity of attempting to locate AE activity in this weak rock type. As a result, the testing program was slightly altered to include measuring the acoustic emission activity of many of the rock types listed in the research program. The preliminary experimental results indicate that AE activity in the sandstones is much higher than in the carbonate rocks (i.e., the chalks and limestones). This observation may be particularly important for planning microseismic imaging of reservoir rocks in the field environment. The preliminary results suggest that microseismic imaging of reservoir rock from acoustic emission activity generated by matrix deformation (during compaction and subsidence) would be extremely difficult to accomplish.

  20. Compression of compound images and video for enabling rich media in embedded systems

    NASA Astrophysics Data System (ADS)

    Said, Amir

    2004-01-01

    It is possible to improve the features supported by devices with embedded systems by increasing the processor computing power, but this always results in higher costs, complexity, and power consumption. An interesting alternative is to use the growing networking infrastructure for remote processing and visualization, with the embedded system mainly responsible for communications and user interaction. This enables devices to appear much more "intelligent" to users, at very low cost and power. In this article we explain how compression can make some of these solutions more bandwidth-efficient, enabling devices to simply decompress very rich graphical information and user interfaces that were rendered elsewhere. The mixture of natural images and video with text, graphics, and animations in the same frame is called compound video. We present a new method for compression of compound images and video which efficiently identifies the different components during compression and applies an appropriate coding method to each: lossless compression for graphics and text, and lossy compression with dynamically varying quality for natural images and highly detailed parts. Since it was designed for embedded systems with very limited resources, it has a small executable size and low complexity for classification, compression, and decompression. Other compression methods (e.g., MPEG) can handle the same content but are very inefficient for compound material, while high-level graphics languages can be bandwidth-efficient but are much less reliable (e.g., in supporting Asian fonts) and many orders of magnitude more complex. Numerical tests show the very significant compression gains achieved by these systems.
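    The component-identification step can be approximated by a cheap per-block heuristic: blocks drawn from text or graphics contain few distinct pixel values, while natural-image blocks contain many. The classifier below is an illustrative stand-in, not the paper's actual segmentation method:

```python
import numpy as np

# Toy compound-content split: blocks with few distinct values are routed to
# lossless coding (text/graphics); blocks with many values are routed to
# lossy coding (natural content). Block size and threshold are illustrative.
BLOCK, MAX_COLORS = 8, 12

def classify_blocks(img):
    h, w = img.shape
    labels = {}
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = img[y:y+BLOCK, x:x+BLOCK]
            n_colors = np.unique(block).size
            labels[(y, x)] = ("text/graphics" if n_colors <= MAX_COLORS
                              else "natural")
    return labels

rng = np.random.default_rng(2)
img = np.zeros((16, 16), dtype=np.uint8)
img[:8, :8] = 255 * (rng.random((8, 8)) > 0.5)   # binary "text" block
img[8:, 8:] = rng.integers(0, 256, (8, 8))       # noisy "natural" block
labels = classify_blocks(img)
print(labels[(0, 0)], labels[(8, 8)])
```

A real coder would add gradient or histogram features and hysteresis between frames, but even this crude split captures why one coding mode cannot serve both content types efficiently.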

  1. Acoustic Reciprocity of Spatial Coherence in Ultrasound Imaging

    PubMed Central

    Bottenus, Nick; Üstüner, Kutay F.

    2015-01-01

    A conventional ultrasound image is formed by transmitting a focused wave into tissue, time-shifting the backscattered echoes received on an array transducer and summing the resulting signals. The van Cittert-Zernike theorem predicts a particular similarity, or coherence, of these focused signals across the receiving array. Many groups have used an estimate of the coherence to augment or replace the B-mode image in an effort to suppress noise and stationary clutter echo signals, but this measurement requires access to individual receive channel data. Most clinical systems have efficient pipelines for producing focused and summed RF data without any direct way to individually address the receive channels. We describe a method for performing coherence measurements that is more accessible for a wide range of coherence-based imaging. The reciprocity of the transmit and receive apertures in the context of coherence is derived and the equivalence of the coherence function is validated experimentally using a research scanner. The proposed method is implemented on a Siemens ACUSON SC2000™ ultrasound system and in vivo short-lag spatial coherence imaging is demonstrated using only summed RF data. The components beyond the acquisition hardware and beamformer necessary to produce a real-time ultrasound coherence imaging system are discussed. PMID:25965679
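    The short-lag spatial coherence metric mentioned above averages the normalized correlation between receive-channel signals over small channel lags. A sketch on synthetic channel data (a common focused echo plus independent channel noise; all parameters illustrative):

```python
import numpy as np

def spatial_coherence(ch, max_lag):
    """Average normalized correlation between channel pairs at lags
    1..max_lag. ch: (n_channels, n_samples) array of focused RF data."""
    ch = ch - ch.mean(axis=1, keepdims=True)
    n = ch.shape[0]
    vals = []
    for m in range(1, max_lag + 1):
        for i in range(n - m):
            a, b = ch[i], ch[i + m]
            vals.append(np.dot(a, b) /
                        np.sqrt(np.dot(a, a) * np.dot(b, b)))
    return float(np.mean(vals))

rng = np.random.default_rng(3)
sig = rng.normal(size=1024)                      # common focused echo signal
coherent = np.tile(sig, (32, 1)) + 0.1*rng.normal(size=(32, 1024))
noise_only = rng.normal(size=(32, 1024))         # incoherent channel noise

slsc_sig = spatial_coherence(coherent, max_lag=8)     # short-lag average
slsc_noise = spatial_coherence(noise_only, max_lag=8)
print(f"coherent: {slsc_sig:.3f}, noise: {slsc_noise:.3f}")
```

Tissue echoes score near 1 while clutter and noise score near 0, which is what makes the coherence value usable as an image pixel in place of echo brightness.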

  2. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing the detailed 3D structure of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation, and Poisson surface reconstruction. Before the first step, we partition each video sequence into groups, where each group consists of two successive frames (an image pair) and each pair contains an overlapping part that is used as a stitching region. First, the 3D point cloud of each group is reconstructed by structure from motion (SFM). Second, a scheme based on SIFT features registers and stitches the obtained 3D point clouds by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Third, we select the most robust SIFT feature points as seed points, and then obtain a dense point cloud from the sparse point cloud via the depth testing method presented by Furukawa. Finally, Poisson surface reconstruction yields polygonal patches for the internal organs. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
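    The point-cloud stitching step reduces to estimating a rigid transform from matched 3D feature points. A standard closed-form solution (Kabsch/SVD, assumed here as an illustration of the registration step rather than the paper's exact scheme) is:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with q_i ~ R p_i + t,
    from matched 3-D points (Kabsch/SVD). P, Q: (n, 3) arrays of rows."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate/translate a cloud, then recover the transform
# (stand-in for SIFT-matched points between two overlapping groups).
rng = np.random.default_rng(4)
P = rng.normal(size=(100, 3))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true

R_est, t_est = rigid_transform(P, Q)
print("rotation error:", float(np.abs(R_est - R_true).max()))
```

In practice the matches from SIFT contain outliers, so a method like this is usually wrapped in RANSAC before the final least-squares fit.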

  3. Monitoring an eruption fissure in 3D: video recording, particle image velocimetry and dynamics

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2015-04-01

    The processes during an eruption are very complex, and several parameters are measured to understand them better. One such parameter is the velocity of particles and patterns, such as ash and emitted magma, and of the volcano itself; the resulting velocity field provides insight into the dynamics of a vent. Here we test our algorithm for 3-dimensional velocity fields on videos of the second fissure eruption of Bárdarbunga in 2014, where we acquired videos of lava fountains on the main fissure with 2 high-speed cameras separated by a small angle. Additionally, we test the algorithm on videos of the geyser Strokkur, where we used 3 cameras with larger angles between them. The velocity is calculated by a correlation, in Fourier space, of contiguous images. Since we only obtain the velocity field of the surface, smaller angles between cameras give a better resolution of the velocity field in the near field, while larger angles can be useful for general movements, e.g., to determine the direction, height, and velocity of eruption clouds. In summary, 3D velocimetry can be used for several applications and with different setups, depending on the application.
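    The correlation-in-Fourier-space displacement estimate can be sketched directly: the cross-correlation of two frames, computed via FFTs, peaks at the inter-frame shift. Frame size and shift below are illustrative:

```python
import numpy as np

def displacement(img1, img2):
    """Integer-pixel displacement of img2 relative to img1 via
    cross-correlation computed in Fourier space."""
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(img1)) * np.fft.fft2(img2)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # Map wrap-around indices to signed shifts.
    if dy > ny // 2: dy -= ny
    if dx > nx // 2: dx -= nx
    return int(dy), int(dx)

rng = np.random.default_rng(5)
frame1 = rng.random((64, 64))                          # synthetic texture
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))   # pattern moved 3 down, 5 left

print(displacement(frame1, frame2))
```

Applied to interrogation windows of consecutive video frames (and divided by the frame interval and pixel scale), this yields the 2D velocity field; combining two or three camera views then lifts it to 3D.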

  4. Determination of quasi-static microaccelerations onboard a satellite using video images of moving objects

    NASA Astrophysics Data System (ADS)

    Levtov, V. L.; Romanov, V. V.; Boguslavsky, A. A.; Sazonov, V. V.; Sokolov, S. M.; Glotov, Yu. N.

    2009-12-01

    A space experiment aimed at determining quasi-static microaccelerations onboard an artificial Earth satellite using video images of freely moving objects is considered. The experiment was carried out onboard the Foton M-3 satellite. Several pellets moved in a cubic box fixed on the satellite’s mainframe and having two transparent adjacent walls. Their motion was photographed by a digital video camera. The camera was installed facing one of the transparent walls; a mirror was placed at an angle to the other transparent wall. This optical system allowed us to capture, in a single frame, two images of the pellets from different viewpoints. The motion of the pellets was photographed over time intervals lasting 96 s, with pauses between intervals also equal to 96 s. Special processing of each image determined the coordinates of the pellet centers in the camera’s coordinate system. The sequence of frames belonging to a continuous interval of photography was processed as follows: the time dependence of each coordinate of every pellet was approximated by a second-degree polynomial using the least squares method, and the coefficient of the squared time equals half of the corresponding microacceleration component. Processing of the data showed that this method of determining quasi-static microaccelerations is sufficiently sensitive and accurate.
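    The described acceleration recovery reduces to fitting each tracked coordinate with a second-degree polynomial in time, the microacceleration component being twice the quadratic coefficient. A sketch with illustrative (micro-g level) numbers:

```python
import numpy as np

# Recover a quasi-static acceleration component from tracked pellet
# positions: fit x(t) with a second-degree polynomial; the acceleration is
# twice the quadratic coefficient. Values are illustrative, not flight data.
rng = np.random.default_rng(6)
a_true = 2e-5                        # m/s^2, assumed microacceleration
x0, v0 = 0.05, 1e-4                  # initial position and velocity

t = np.linspace(0.0, 96.0, 200)      # one 96 s photography interval
x = x0 + v0*t + 0.5*a_true*t**2 + rng.normal(0, 1e-4, t.size)  # +0.1 mm noise

coeffs = np.polyfit(t, x, deg=2)     # highest degree first: [a/2, v0, x0]
a_est = 2.0 * coeffs[0]
print(f"recovered acceleration: {a_est:.2e} m/s^2")
```

The long 96 s baseline is what makes the method sensitive: the quadratic displacement (~9 cm here) dwarfs the sub-millimeter tracking noise.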

  5. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    NASA Astrophysics Data System (ADS)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root-mean-square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
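
    The paper defines SRMSE precisely; purely as an illustration of the idea of dispersion-aware error, the sketch below scales each item's prediction error by the per-item observer standard deviation. This particular normalization is an assumption for illustration, not the published definition:

```python
import numpy as np

def dispersion_weighted_rmse(predicted, mos, observer_std):
    """Illustrative dispersion-aware RMSE: each item's prediction error is
    expressed in units of the observers' standard deviation for that item."""
    predicted = np.asarray(predicted, dtype=float)
    mos = np.asarray(mos, dtype=float)
    observer_std = np.asarray(observer_std, dtype=float)
    z = (predicted - mos) / observer_std   # error in units of observer spread
    return float(np.sqrt(np.mean(z ** 2)))

# Three test items: predictions, mean opinion scores, observer spread
score = dispersion_weighted_rmse([3.1, 2.0, 4.2], [3.0, 2.5, 4.0], [0.5, 0.5, 0.4])
```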

  6. High-speed video analysis system using multiple shuttered charge-coupled device imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Stephenson, Owen; Clements, Reginald M.

    1992-06-01

    A fully solid state high-speed video analysis system is presented. It is based on the use of several independent charge-coupled device (CCD) imagers, each shuttered by a liquid crystal light valve. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording from exposure timing to image capture and storage to playback on a standard video monitor. A prototype with three CCD imagers and shutters has been built. It has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. For slower phenomena, recordings in continuous light are also possible by using the shutters themselves to control the exposure time. The system records full-screen black and white images with spatial resolution approaching that of standard television, at rates up to 5000 images per second.

  7. Three-dimensional ghost imaging using acoustic transducer

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Guo, Shuxu; Guan, Jian; Cao, Junsheng; Gao, Fengli

    2016-06-01

    We propose a novel three-dimensional (3D) ghost imaging method using an unfocused ultrasonic transducer, in which the transducer serves as the bucket detector, collecting the total photoacoustic signal intensity from spherical surfaces of different radii centered on the transducer. The collected signal is a time sequence corresponding to the optical absorption on the spherical surfaces, and the values at the same instants across all sequences are used as bucket signals to restore the corresponding spherical images, which are assembled into the 3D reconstruction of the object. Numerical experiments show that this method can effectively accomplish the 3D reconstruction, and that summing each sequence in the time domain as a bucket signal also realizes two-dimensional (2D) ghost imaging. The influence of the number of measurements on the 3D and 2D reconstructions is analyzed with the peak signal-to-noise ratio (PSNR) as the yardstick, and the transducer as a bucket detector is also discussed.
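
    The bucket-signal correlation at the heart of ghost imaging can be sketched in a generic 2D computational setting (random illumination patterns and a synthetic absorption map, not the authors' photoacoustic configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Object to image: a 16x16 optical absorption map with a rectangular target
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0

# Random illumination patterns; the "bucket" detector records only the
# total intensity per pattern, with no spatial resolution of its own
n_patterns = 4000
patterns = rng.random((n_patterns, 16, 16))
bucket = np.einsum('nij,ij->n', patterns, obj)

# Correlation reconstruction: G(x,y) = <B * P(x,y)> - <B> <P(x,y)>
recon = (np.einsum('n,nij->ij', bucket, patterns) / n_patterns
         - bucket.mean() * patterns.mean(axis=0))
```

    The reconstruction is proportional to the object up to statistical noise that shrinks as the number of patterns grows.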

  8. The architecture of a video image processor for the space station

    NASA Technical Reports Server (NTRS)

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of the algorithms necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high-level-language programmability, modularity, and extensibility, and that can meet the required performance goals.

  9. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.

  10. Imaging morphodynamics of human blood cells in vivo with video-rate third harmonic generation microscopy

    PubMed Central

    Chen, Chien-Kuo; Liu, Tzu-Ming

    2012-01-01

    With a video-rate third harmonic generation (THG) microscopy system, we imaged the micro-circulation beneath human skin without labeling. Not only the speed of circulation but also the morpho-hydrodynamics of blood cells can be analyzed. Lacking nuclei, red blood cells (RBCs) show a typical parachute-like, hollow-core morphology under THG microscopy. Quite different from RBCs, round, granule-rich blood cells with strong THG contrast appear in the circulation every now and then. Their volume densities in blood, evaluated from their frequency of appearance and the velocity of circulation, fall within the physiological range of human white blood cell counts. PMID:23162724

  11. Imaging morphodynamics of human blood cells in vivo with video-rate third harmonic generation microscopy.

    PubMed

    Chen, Chien-Kuo; Liu, Tzu-Ming

    2012-11-01

    With a video-rate third harmonic generation (THG) microscopy system, we imaged the micro-circulation beneath human skin without labeling. Not only the speed of circulation but also the morpho-hydrodynamics of blood cells can be analyzed. Lacking nuclei, red blood cells (RBCs) show a typical parachute-like, hollow-core morphology under THG microscopy. Quite different from RBCs, round, granule-rich blood cells with strong THG contrast appear in the circulation every now and then. Their volume densities in blood, evaluated from their frequency of appearance and the velocity of circulation, fall within the physiological range of human white blood cell counts. PMID:23162724

  12. In situ calibration of an infrared imaging video bolometer in the Large Helical Device.

    PubMed

    Mukai, K; Peterson, B J; Pandya, S N; Sano, R

    2014-11-01

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed. PMID:25430342

  13. Rocket engine plume diagnostics using video digitization and image processing - Analysis of start-up

    NASA Technical Reports Server (NTRS)

    Disimile, P. J.; Shoe, B.; Dhawan, A. P.

    1991-01-01

    Video digitization techniques have been developed to analyze the exhaust plume of the Space Shuttle Main Engine. Temporal averaging and a frame-by-frame analysis provide data used to evaluate the capabilities of image processing techniques as measurement tools. These capabilities include determining the time required for the Mach disk to reach a fully developed state. Other results show that the Mach disk tracks the nozzle over short time intervals, and that dominant frequencies exist for the nozzle and Mach disk movement.

  14. In situ calibration of an infrared imaging video bolometer in the Large Helical Device

    SciTech Connect

    Mukai, K. Peterson, B. J.; Pandya, S. N.; Sano, R.

    2014-11-15

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.

  15. Automated detection framework of the calcified plaque with acoustic shadowing in IVUS images.

    PubMed

    Gao, Zhifan; Guo, Wei; Liu, Xin; Huang, Wenhua; Zhang, Heye; Tan, Ning; Hau, William Kongto; Zhang, Yuan-Ting; Liu, Huafeng

    2014-01-01

    Intravascular ultrasound (IVUS) is an ultrasonic imaging technology that acquires vascular cross-sectional images for the visualization of the inner vessel structure. This technique has been widely used for the diagnosis and treatment of coronary artery diseases. The detection of calcified plaque with acoustic shadowing in IVUS images plays a vital role in the quantitative analysis of atheromatous plaques. The conventional method of calcium detection is manual delineation by doctors; however, this is very time-consuming and suffers from high inter-observer and intra-observer variability. Computer-aided detection of calcified plaque is therefore highly desirable. In this paper, an automated method is proposed to detect calcified plaque with acoustic shadowing in IVUS images using a Rayleigh mixture model, a Markov random field, a graph-searching method, and prior knowledge about the calcified plaque. The performance of our method was evaluated on 996 in-vivo IVUS images acquired from eight patients, and the detected calcified plaques were compared with those manually detected by one cardiologist. The experimental results are quantitatively analyzed by three evaluation methods: the test of sensitivity and specificity, linear regression, and Bland-Altman analysis. The first method evaluates the ability to distinguish between IVUS images with and without calcified plaque, and the latter two respectively measure the correlation and the agreement between our results and the manual delineations for locating the calcified plaque in the IVUS image. High sensitivity (94.68%) and specificity (95.82%), together with good correlation and agreement (>96.82% of results fall within the 95% confidence interval in the Student t-test), demonstrate the effectiveness of the proposed method in detecting calcified plaque with acoustic shadowing in IVUS images. PMID:25372784
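
    Of the ingredients listed, the Rayleigh mixture model is the most self-contained; a minimal EM fit of a two-component Rayleigh mixture to synthetic echo intensities might look like the sketch below (the paper's full pipeline additionally uses the Markov random field, graph searching, and prior knowledge):

```python
import numpy as np

def rayleigh_pdf(x, sigma):
    """Rayleigh density f(x; sigma) = (x / sigma^2) exp(-x^2 / (2 sigma^2))."""
    return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

def fit_rayleigh_mixture(x, n_iter=200):
    """EM for a two-component Rayleigh mixture on intensity samples x."""
    sigmas = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component per sample
        dens = np.stack([w * rayleigh_pdf(x, s) for w, s in zip(weights, sigmas)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted Rayleigh MLE, sigma^2 = sum(r x^2) / (2 sum(r))
        nk = resp.sum(axis=1)
        sigmas = np.sqrt((resp * x**2).sum(axis=1) / (2.0 * nk))
        weights = nk / x.size
    return weights, sigmas

# Synthetic intensities: dark tissue (scale 1) mixed with bright echoes (scale 4)
rng = np.random.default_rng(2)
x = np.concatenate([rng.rayleigh(1.0, 3000), rng.rayleigh(4.0, 3000)])
weights, sigmas = fit_rayleigh_mixture(x)
```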

  16. Near-Field Imaging with Sound: An Acoustic STM Model

    ERIC Educational Resources Information Center

    Euler, Manfred

    2012-01-01

    The invention of scanning tunneling microscopy (STM) 30 years ago opened up a visual window to the nano-world and sparked off a bunch of new methods for investigating and controlling matter and its transformations at the atomic and molecular level. However, an adequate theoretical understanding of the method is demanding; STM images can be…

  17. HF Doppler Acoustic Imaging of the Ocean Surface and Interior

    NASA Astrophysics Data System (ADS)

    Pinkel, Robert; Smith, Jerome A.

    2004-11-01

    HF phased-array Doppler sonar represents a new tool for obtaining three-dimensional (r, θ, t) images of the oceanic surface and interior velocity field. While the capabilities of the approach are unique, the design constraints are also unusual. Examples of both are presented in this work.

  18. Observations of Brine Pool Surface Characteristics and Internal Structure Through Remote Acoustic and Structured Light Imaging

    NASA Astrophysics Data System (ADS)

    Smart, C.; Roman, C.; Michel, A.; Wankel, S. D.

    2015-12-01

    Observations and analysis of the surface characteristics and internal structure of deep-sea brine pools are currently limited to discrete in-situ observations. Complementary acoustic and structured-light imaging sensors mounted on a remotely operated vehicle (ROV) have demonstrated the ability to systematically detect variations in the surface characteristics of a brine pool, reveal internal stratification, and detect areas of active hydrocarbon activity. The presented visual and acoustic sensors, combined with a stereo camera pair, are mounted on the 4000 m rated ROV Hercules (Ocean Exploration Trust). These three independent sensors operate simultaneously from a typical 3 m altitude, yielding visual and bathymetric maps with sub-centimeter resolution. Applying this imaging technology to 2014 and 2015 brine pool surveys in the Gulf of Mexico revealed acoustic and visual anomalies due to the density changes inherent in the brine. Such distinct changes in acoustic impedance allowed the high-frequency 1350 kHz multibeam sonar to detect multiple interfaces; for instance, distinct acoustic reflections were observed at 3 m and 5.5 m below the vehicle. Subsequent verification using a CTD and lead line indicated that the acoustic return from the brine surface was the signal at 3 m, while a thicker, muddier, and more saline interface occurred at 5.5 m; the bottom of the brine pool was not located but is assumed to be deeper than 15 m. The multibeam is also capable of remotely detecting gas bubbles emitted within the brine pool, indicative of active hydrocarbon seeps. Bubbles associated with these seeps were not consistently visible above the brine using the HD camera on the ROV. Additionally, while imaging the surface of the brine pool, the structured-light sheet laser became diffuse, refracting across the main interface. Analysis of this refraction, combined with the varying acoustic returns, allows for systematic and remote detection of the density, stratification and activity levels within and

  19. Development of passive submillimeter-wave video imaging systems for security applications

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Brömel, Anika; Anders, Solveig; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2012-10-01

    Passive submillimeter-wave imaging is a concept that has been in the focus of interest as a promising technology for security applications for a number of years. It exploits the unique optical properties of submillimeter waves and promises an alternative to millimeter-wave and X-ray backscattering portals, in particular for personal security screening. Possible application scenarios demand sensitive, fast, and flexible high-quality imaging techniques. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept under continuous development since 2007. The cameras use arrays of superconducting transition-edge sensors (TES), i.e., cryogenic microbolometers, as radiation detectors. The TES are operated at temperatures below 1 K, cooled by a closed-cycle cooling system, and coupled to superconducting readout electronics. By this means, background-limited photometry (BLIP) mode is achieved, providing the maximum possible signal-to-noise ratio. At video rates, this leads to a pixel NETD well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 3-10 m, a field of view up to 2 m in height and a diffraction-limited spatial resolution on the order of 1-2 cm are provided. Opto-mechanical scanning systems are part of the optical setup and are capable of frame rates up to 25 frames per second; both spiral and linear scanning schemes have been developed. Several electronic and software components are used for system control, signal amplification, and data processing. Our objective is the design of an application-ready and user-friendly imaging system. For application in real-world security screening scenarios, it can be extended using image processing and

  20. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    PubMed Central

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708

  1. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing.

    PubMed

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708
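
    The matching-pursuit family underlying CCOOMP can be illustrated with plain orthogonal matching pursuit on a random dictionary (a generic sparse-recovery sketch, not the proposed coherence-excluding variant):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary column most
    correlated with the residual, then re-fit the support by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Sparse target scene: 3 nonzero entries out of 100, observed through 40 sensors
rng = np.random.default_rng(3)
m, n, k = 40, 100, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = omp(A, y, k)
```

    When the dictionary is too coherent, plain OMP can pick wrong columns, which is the failure mode the paper's coherence-excluding variant addresses.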

  2. Three-Dimensional Acoustic Tissue Model: A Computational Tissue Phantom for Image Analyses

    NASA Astrophysics Data System (ADS)

    Mamou, J.; Oelze, M. L.; O'Brien, W. D.; Zachary, J. F.

    A novel methodology to obtain three-dimensional (3D) acoustic tissue models (3DATMs) is introduced. 3DATMs can be used as computational tools for ultrasonic imaging algorithm development and analysis. In particular, 3D models of biological structures can greatly help in understanding, at a fundamental level, how ultrasonic waves interact with biological materials. As an example, such models were used to generate ultrasonic images that characterize tumor tissue microstructure. 3DATMs can be used to evaluate a variety of tissue types. Typically, excised tissue is fixed, embedded, serially sectioned, and stained. The stained sections are digitally imaged (24-bit bitmap) with light microscopy. The contrast of each stained section is equalized, and an automated registration algorithm aligns consecutive sections. The normalized mutual information is used as the similarity measure, and simplex optimization is conducted to find the best alignment; both rigid and non-rigid registrations are performed. Because some sections are generally lost during tissue preparation, interpolation is performed prior to 3D reconstruction; it is conducted after registration using cubic Hermite polynomials. The registered and interpolated sections yield a 3D histologic volume (3DHV). Acoustic properties are then assigned to each tissue constituent of the 3DHV to obtain the 3DATM. As an example, a 3D acoustic impedance tissue model (3DZM) was obtained for a solid breast tumor (EHS mouse sarcoma) and used to estimate ultrasonic scatterer size. The 3DZM results yielded an effective scatterer size of 32.9 (±6.1) μm. Ultrasonic backscatter measurements conducted on the same tumor tissue in vivo yielded an effective scatterer size of 33 (±8) μm. This good agreement shows that 3DATMs may be a powerful modeling tool for acoustic imaging applications.
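
    The similarity measure used for the section alignment, normalized mutual information, can be sketched from a joint grey-level histogram. A generic sketch, assuming the symmetric form NMI = (H(A) + H(B)) / H(A, B):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
    Equals 2 for identical images and approaches 1 for independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# A registration optimizer would adjust the transform to maximize this score
rng = np.random.default_rng(4)
section = rng.random((64, 64))
aligned = normalized_mutual_information(section, section)
shifted = normalized_mutual_information(section, np.roll(section, 5, axis=0))
```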

  3. Overview of image processing tools to extract physical information from JET videos

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  4. Security SVGA image sensor with on-chip video data authentication and cryptographic circuit

    NASA Astrophysics Data System (ADS)

    Stifter, P.; Eberhardt, K.; Erni, A.; Hofmann, K.

    2005-10-01

    Security applications of sensors in a networking environment place strong demands on sensor authentication and secure data transmission, owing to the possibility of man-in-the-middle and address-spoofing attacks. A secure sensor system should therefore fulfil the three standard requirements of cryptography, namely data integrity, authentication and non-repudiation. This paper presents a unique sensor development by AIM, the so-called SecVGA, which is a high-performance, monochrome (B/W) CMOS active-pixel image sensor. The device is capable of capturing still and motion images with a resolution of 800x600 active pixels and converting the image into a digital data stream. The distinguishing feature of this development, in comparison to standard imaging sensors, is the on-chip cryptographic engine which provides sensor authentication based on a one-way challenge/response protocol. The implemented protocol results in the exchange of a session key which secures the subsequent video data transmission. This is achieved by calculating a cryptographic checksum derived from a stateful hash value of the complete image frame. Every sensor contains an EEPROM memory cell for the non-volatile storage of a unique identifier. The imager is programmable via a two-wire I2C-compatible interface which controls the integration time, the active window size of the pixel array, the frame rate and various operating modes, including the authentication procedure.
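
    A one-way challenge/response handshake of the kind described can be sketched with a pre-shared per-device secret and an HMAC. This is an illustrative sketch only; the SecVGA's actual primitives and key handling are not specified in this abstract, and all names below are hypothetical:

```python
import hashlib
import hmac
import os

SENSOR_SECRET = b"per-device-secret-from-eeprom"   # hypothetical pre-shared key

def sensor_respond(challenge):
    """Sensor side: prove knowledge of the secret without revealing it."""
    return hmac.new(SENSOR_SECRET, challenge, hashlib.sha256).digest()

def host_verify(challenge, response):
    """Host side: check the response; on success, derive the session key
    that will secure the subsequent video data transmission."""
    expected = hmac.new(SENSOR_SECRET, challenge, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, response):
        return None
    return hashlib.sha256(SENSOR_SECRET + challenge).digest()

challenge = os.urandom(16)                 # fresh nonce per session
session_key = host_verify(challenge, sensor_respond(challenge))
```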

  5. An automated form of video image analysis applied to classification of movement disorders.

    PubMed

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment, which restricts the utility of video in routine clinical practice. We have begun development of intelligent software that aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images, without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles, and to derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance, we videotaped patients with dopa-responsive Parkinsonism and age-matched normal subjects during several gait cycles, yielding 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normal subjects and patients with an accuracy above 90%, demonstrating the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis. PMID:10661762

  6. Finite element modelling for the investigation of edge effect in acoustic micro imaging of microelectronic packages

    NASA Astrophysics Data System (ADS)

    Shen Lee, Chean; Zhang, Guang-Ming; Harvey, David M.; Ma, Hong-Wei; Braden, Derek R.

    2016-02-01

    In acoustic micro imaging of microelectronic packages, the edge effect often appears as an artifact in C-scan images, potentially obscuring the detection of defects such as cracks and voids in the solder joints. The cause of the edge effect is debatable. In this paper, a 2D finite element model is developed, based on acoustic micro imaging of a flip-chip package with a 230 MHz focused transducer, to investigate acoustic propagation inside the package in an attempt to elucidate the fundamental mechanism that causes the edge effect. A virtual transducer is designed in the finite element model to reduce the coupling-fluid domain, and its performance is characterised against the physical transducer specification. The numerical results showed that the under-bump metallization (UBM) structure inside the package has a significant impact on the edge effect. Simulated wavefields also showed that the edge effect is mainly attributable to horizontal scatter, observed at the interface between the silicon die and the outer radius of the solder bump. The horizontal scatter occurs even for a flip-chip package without the UBM structure.

  7. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels

    PubMed Central

    2014-01-01

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract, in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of the MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape and position of the tongue for all six uttered Malay vowels are determined. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. When the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments revealed a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production. PMID:25060583

  8. High-density optical discs for audio, video, and image applications

    NASA Astrophysics Data System (ADS)

    Gan, Fuxi; Hou, Lisong

    2003-04-01

    Great progress in optical storage has taken place in the last decade. The development of optical discs has always been towards higher storage density and data transfer rate, in order to meet the ever-increasing requirements of applications in the audio, video and image areas. Employing laser light of shorter wavelength and lenses of higher numerical aperture has proved a logical and effective approach to increasing storage density, as shown by the evolution of optical discs from the CD family to the DVD family. At present, research and development of high-density DVD (HD-DVD), Blu-ray disc and advanced storage magneto-optical (AS-MO) disc are being carried out very extensively. Meanwhile, miniaturization of disc size and the use of multiplication techniques to increase storage density and capacity have already given rise to new formats such as the iD Photo disc and the Data Play disc, as well as multi-layer discs. The digital holographic storage (DHS) disc is also a research and development subject of many companies and research institutions, and some new-concept optical storage such as the fluorescent multilayer disc (FMD) is under intensive development. All of these have greatly promoted applications of optical discs in audio, video and image devices.

  9. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus in wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are applied is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
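The sequencing concern above can be sketched as an ordered, short-circuiting pipeline. The check functions, field names, and thresholds below are hypothetical placeholders (the paper does not publish its exact logic); the point is that faults which can masquerade as others, such as a download error mimicking an obstruction, are tested first:

```python
# A minimal sketch of an ordered diagnostic pipeline. Checks run in priority
# order and stop at the first detected fault, so confounded downstream checks
# (e.g., "obstruction" on a truncated frame) never fire.

from typing import Callable, Optional

def run_diagnostics(frame: dict, checks: list) -> Optional[str]:
    """Run (label, predicate) pairs in order; return first detected fault."""
    for label, check in checks:
        if check(frame):
            return label          # short-circuit on the root cause
    return None                   # no fault detected

# Toy stand-in checks operating on a dict describing frame statistics.
checks = [
    ("download_error", lambda f: f.get("truncated", False)),
    ("misalignment",   lambda f: abs(f.get("horizon_angle", 0.0)) > 5.0),
    ("obstruction",    lambda f: f.get("dark_fraction", 0.0) > 0.5),
    ("focus_drift",    lambda f: f.get("sharpness", 1.0) < 0.2),
]

# a truncated download is reported as such, not misread as an obstruction
print(run_diagnostics({"truncated": True, "dark_fraction": 0.9}, checks))
```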

  10. SVD-based quality metric for image and video using machine learning.

    PubMed

    Narwaria, Manish; Lin, Weisi

    2012-04-01

    We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, the two-stage process and the relevant work in the existing visual quality metrics are first introduced followed by an in-depth analysis of SVD for visual quality assessment. Singular values and vectors form the selected features for visual quality assessment. Machine learning is then used for the feature pooling process and demonstrated to be effective. This is to address the limitations of the existing pooling techniques, like simple summation, averaging, Minkowski summation, etc., which tend to be ad hoc. We advocate machine learning for feature pooling because it is more systematic and data driven. The experiments show that the proposed method outperforms the eight existing relevant schemes. Extensive analysis and cross validation are performed with ten publicly available databases (eight for images with a total of 4042 test images and two for video with a total of 228 videos). We use all publicly accessible software and databases in this study, as well as making our own software public, to facilitate comparison in future research. PMID:21965214
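Before any pooling (ad hoc or learned), a per-block SVD feature must be extracted. A minimal sketch of one such feature, the distance between the singular values of matching reference and distorted blocks (only a simplified slice of the paper's full feature set, which also uses singular vectors):

```python
import numpy as np

# For each 8x8 block, the Euclidean distance between the singular values of
# the reference and distorted blocks measures structural degradation; the
# resulting per-block scores are what a pooling stage (averaging, Minkowski
# summation, or a learned regressor) would combine into one quality score.

def svd_block_feature(ref: np.ndarray, dist: np.ndarray, b: int = 8) -> np.ndarray:
    h, w = ref.shape
    feats = []
    for i in range(0, h - b + 1, b):
        for j in range(0, w - b + 1, b):
            s_ref = np.linalg.svd(ref[i:i+b, j:j+b], compute_uv=False)
            s_dst = np.linalg.svd(dist[i:i+b, j:j+b], compute_uv=False)
            feats.append(np.linalg.norm(s_ref - s_dst))
    return np.array(feats)        # one distortion score per block

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = img + 0.1 * rng.standard_normal((32, 32))
f = svd_block_feature(img, noisy)
print(f.shape, float(f.mean()))   # 16 block scores, nonzero under distortion
```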

  11. Guidance for horizontal image translation (HIT) of high definition stereoscopic video production

    NASA Astrophysics Data System (ADS)

    Broberg, David K.

    2011-03-01

    Horizontal image translation (HIT) is an electronic process for shifting the left-eye and right-eye images horizontally as a way to alter the stereoscopic characteristics and alignment of 3D content after signals have been captured by stereoscopic cameras. When used cautiously and with full awareness of the impact on other interrelated aspects of the stereography, HIT is a valuable tool in the post production process as a means to modify stereoscopic content for more comfortable viewing. Most commonly it is used to alter the zero parallax setting (ZPS), to compensate for stereo window violations or to compensate for excessive positive or negative parallax in the source material. As more and more cinematic 3D content migrates to television distribution channels the use of this tool will likely expand. Without proper attention to certain guidelines the use of HIT can actually harm the 3D viewing experience. This paper provides guidance on the most effective use and describes some of the interrelationships and trade-offs. The paper recommends the adoption of the cinematic 2K video format as a 3D source master format for high definition television distribution of stereoscopic 3D video programming.
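The core operation is simple: shift the two views horizontally relative to each other, which changes the disparity of every point by the same amount and thereby moves the zero-parallax plane. A minimal sketch (array-level illustration, not a production pipeline):

```python
import numpy as np

# A minimal sketch of horizontal image translation (HIT): offsetting the
# left- and right-eye images relative to each other by `shift` pixels changes
# all disparities uniformly. Edge columns exposed by the shift are cropped
# from both views so the pair stays aligned; the slightly narrowed frame is
# one of the trade-offs of the technique.

def apply_hit(left: np.ndarray, right: np.ndarray, shift: int):
    """Crop so left column j pairs with original right column j + shift."""
    if shift == 0:
        return left, right
    return left[:, :-shift], right[:, shift:]

left = np.arange(24).reshape(4, 6)
right = left + 100                    # stand-in right-eye view
l2, r2 = apply_hit(left, right, 2)
print(l2.shape, r2.shape)             # both views narrowed by 2 columns
```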

  12. Frequency-space prediction filtering for acoustic clutter and random noise attenuation in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Shin, Junseob; Huang, Lianjie

    2016-04-01

    Frequency-space prediction filtering (FXPF), also known as FX deconvolution, is a technique originally developed for random noise attenuation in seismic imaging. FXPF attempts to reduce random noise in seismic data by modeling only real signals that appear as linear or quasilinear events in the aperture domain. In medical ultrasound imaging, channel radio frequency (RF) signals from the main lobe appear as horizontal events after receive delays are applied while acoustic clutter signals from off-axis scatterers and electronic noise do not. Therefore, FXPF is suitable for preserving only the main-lobe signals and attenuating the unwanted contributions from clutter and random noise in medical ultrasound imaging. We adapt FXPF to ultrasound imaging, and evaluate its performance using simulated data sets from a point target and an anechoic cyst. Our simulation results show that using only 5 iterations of FXPF achieves contrast-to-noise ratio (CNR) improvements of 67 % in a simulated noise-free anechoic cyst and 228 % in a simulated anechoic cyst contaminated with random noise of 15 dB signal-to-noise ratio (SNR). Our findings suggest that ultrasound imaging with FXPF attenuates contributions from both acoustic clutter and random noise and therefore, FXPF has great potential to improve ultrasound image contrast for better visualization of important anatomical structures and detection of diseased conditions.
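The mechanism can be sketched in a few lines. The version below is a simplified, forward-only, single-pass illustration (the full method uses forward and backward prediction and multiple iterations): after receive delays, main-lobe signals are flat across channels, so in each temporal-frequency bin every channel sample is well predicted from its neighbors, while clutter and noise are not.

```python
import numpy as np

# Simplified f-x prediction filtering sketch. For each frequency bin of the
# delayed channel data, a least-squares complex prediction filter of length
# `order` is fit across channels; laterally predictable (flat/linear) events
# survive, unpredictable contributions are suppressed.

def fxpf(data: np.ndarray, order: int = 3) -> np.ndarray:
    """data: (n_samples, n_channels) real array of delayed channel signals."""
    n_t, n_x = data.shape
    spec = np.fft.rfft(data, axis=0)                     # to the f-x domain
    for f in range(spec.shape[0]):
        x = spec[f]
        rows = n_x - order
        if rows <= 0:
            continue
        A = np.array([x[i:i + order] for i in range(rows)])
        b = x[order:]
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)     # prediction filter
        spec[f] = np.concatenate([x[:order], A @ coef])  # first samples kept
    return np.fft.irfft(spec, n=n_t, axis=0)

# a laterally coherent (flat) event passes through essentially unchanged
t = np.linspace(0.0, 1.0, 64, endpoint=False)
channels = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(8))
filtered = fxpf(channels)
print(np.max(np.abs(filtered - channels)))               # near zero
```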

  13. Influence of image compression on the quality of UNB pan-sharpened imagery: a case study with security video image frames

    NASA Astrophysics Data System (ADS)

    Adhamkhiabani, Sina Adham; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

UNB Pan-sharp, also named FuzeGo, is an image fusion technique that produces high-resolution color satellite images by fusing a high-resolution panchromatic (monochrome) image and a low-resolution multispectral (color) image. This is an effective solution that modern satellites have been using to capture high-resolution color images at ultra-high speed. Initial research on security camera systems shows that the UNB Pan-sharp technique can also be utilized to produce high-resolution, high-sensitivity color video images for various imaging and monitoring applications. Based on the UNB Pan-sharp technique, a video camera prototype system, called the UNB Super-camera system, was developed that captures high-resolution panchromatic images and low-resolution color images simultaneously, and produces real-time high-resolution color video images on the fly. In a separate study, it was shown that the UNB Super-camera outperforms conventional 1-chip and 3-chip color cameras in image quality, especially when illumination is low, such as under room lighting. In this research the influence of image compression on the quality of UNB Pan-sharped high-resolution color images is evaluated, since image compression is widely used in still and video cameras to reduce data volume and speed up data transfer. The results demonstrate that UNB Pan-sharp can consistently produce high-resolution color images that have the same detail as the input high-resolution panchromatic image and the same color as the input low-resolution color image, regardless of the compression ratio and lighting condition. In addition, the high-resolution color images produced by UNB Pan-sharp have higher sensitivity (signal-to-noise ratio) and better edge sharpness and color rendering than those of a same-generation 1-chip color camera, regardless of the compression ratio and lighting condition.
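UNB Pan-sharp itself is a proprietary fusion method, but the general pan-sharpening idea it belongs to can be illustrated with the classic Brovey-style ratio transform: inject the high-resolution panchromatic detail into each upsampled multispectral band while preserving the band ratios (i.e., the colors). A minimal sketch with synthetic data:

```python
import numpy as np

# Generic ratio-based pan-sharpening sketch (NOT the UNB Pan-sharp algorithm):
# each upsampled multispectral band is scaled by the per-pixel ratio of the
# panchromatic image to a synthetic intensity, so spatial detail comes from
# the pan image while band ratios (colors) are preserved.

def brovey_pansharpen(ms_up: np.ndarray, pan: np.ndarray,
                      eps: float = 1e-9) -> np.ndarray:
    """ms_up: (H, W, B) bands upsampled to pan resolution; pan: (H, W)."""
    intensity = ms_up.mean(axis=2)          # synthetic low-res intensity
    gain = pan / (intensity + eps)          # per-pixel detail ratio
    return ms_up * gain[..., None]          # scale bands, keep their ratios

rng = np.random.default_rng(1)
ms = rng.random((16, 16, 3))                # stand-in upsampled color bands
pan = ms.mean(axis=2) + 0.05 * rng.standard_normal((16, 16))
fused = brovey_pansharpen(ms, pan)
print(fused.shape)                          # (16, 16, 3)
```

By construction the fused image's intensity matches the pan image, which is the sense in which the fused detail equals the panchromatic detail.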

  14. Development of single-channel stereoscopic video imaging modality for real-time retinal imaging

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo

    2016-03-01

Stereoscopic retinal images can effectively assist doctors. Most stereo-imaging surgical microscopes are based on dual optical channels and use dual cameras, in which the left and right cameras capture the corresponding left- and right-eye views. This study developed a single-channel stereoscopic retinal imaging modality based on a transparent rotating deflector (TRD). Two different viewing angles are generated by imaging through the TRD, which is mounted on a motor synchronized with a camera and placed in a single optical channel. Because the objective lens generates a stereo image of an object at its focal point, and given the structure of the eye, the optical setup of the imaging modality can be made compatible with retinal imaging when the cornea and crystalline lens are engaged with the objective lens.

  15. Finite element modeling of atomic force microscopy cantilever dynamics during video rate imaging

    SciTech Connect

    Howard-Knight, J. P.; Hobbs, J. K.

    2011-04-01

    A dynamic finite element model has been constructed to simulate the behavior of low spring constant atomic force microscope (AFM) cantilevers used for imaging at high speed without active feedback as in VideoAFM. The model is tested against experimental data collected at 20 frame/s and good agreement is found. The complex dynamics of the cantilever, consisting of traveling waves coming from the tip sample interaction, reflecting off the cantilever-substrate junction, and interfering with new waves created at the tip, are revealed. The construction of the image from this resulting nonequilibrium cantilever deflection is also examined. Transient tip-sample forces are found to reach values up to 260 nN on a calibration grid sample, and the maximum forces do not always correspond to the position of steepest features as a result of energy stored in the cantilever.

  16. A two camera video imaging system with application to parafoil angle of attack measurements

    NASA Astrophysics Data System (ADS)

    Meyn, Larry A.; Bennett, Mark S.

    1991-01-01

    This paper describes the development of a two-camera, video imaging system for the determination of three-dimensional spatial coordinates from stereo images. This system successfully measured angle of attack at several span-wise locations for large-scale parafoils tested in the NASA Ames 80- by 120-Foot Wind Tunnel. Measurement uncertainty for angle of attack was less than 0.6 deg. The stereo ranging system was the primary source for angle of attack measurements since inclinometers sewn into the fabric ribs of the parafoils had unknown angle offsets acquired during installation. This paper includes discussions of the basic theory and operation of the stereo ranging system, system measurement uncertainty, experimental set-up, calibration results, and test results. Planned improvements and enhancements to the system are also discussed.
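The stereo-ranging principle behind the system can be sketched for an idealized rectified two-camera rig (parallel optical axes, baseline along x); the actual wind-tunnel setup used calibrated converging cameras, but the underlying geometry is the same: disparity between matched image points determines depth, and depth back-projects to 3D coordinates.

```python
import numpy as np

# Idealized rectified-stereo triangulation: Z = f * B / d, then back-project
# through the left camera. f is focal length in pixels, baseline in meters,
# image coordinates in pixels relative to the principal point.

def triangulate(xl: float, xr: float, y: float,
                f: float, baseline: float) -> np.ndarray:
    disparity = xl - xr
    Z = f * baseline / disparity      # depth from disparity
    X = xl * Z / f                    # back-project through the left camera
    Y = y * Z / f
    return np.array([X, Y, Z])

# a point at Z = 2 m seen by f = 1000 px cameras with a 0.1 m baseline:
p = triangulate(xl=150.0, xr=100.0, y=80.0, f=1000.0, baseline=0.1)
print(p)   # X = 0.3, Y = 0.16, Z = 2.0
```

Angle of attack then follows from the 3D coordinates of points triangulated at different chordwise positions on the parafoil.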

  17. Video rate passive millimeter-wave imager utilizing optical upconversion with improved size, weight, and power

    NASA Astrophysics Data System (ADS)

    Martin, Richard D.; Shi, Shouyuan; Zhang, Yifei; Wright, Andrew; Yao, Peng; Shreve, Kevin P.; Schuetz, Christopher A.; Dillon, Thomas E.; Mackrides, Daniel G.; Harrity, Charles E.; Prather, Dennis W.

    2015-05-01

    In this presentation we will discuss the performance and limitations of our 220 channel video rate passive millimeter wave imaging system based on a distributed aperture with optical upconversion architecture. We will cover our efforts to reduce the cost, size, weight, and power (CSWaP) requirements of our next generation imager. To this end, we have developed custom integrated circuit silicon-germanium (SiGe) low noise amplifiers that have been designed to efficiently couple with our high performance lithium niobate upconversion modules. We have also developed millimeter wave packaging and components in multilayer liquid crystal polymer (LCP) substrates which greatly improve the manufacturability of the upconversion modules. These structures include antennas, substrate integrated waveguides, filters, and substrates for InP and SiGe mmW amplifiers.

  18. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
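One of the DSP stages listed above, white balance, can be sketched with the simple gray-world assumption (the paper's actual algorithm is not specified in the abstract; this is a generic low-complexity stand-in): scale each channel so the channel means match.

```python
import numpy as np

# Gray-world white balance sketch: assume the scene averages to neutral gray,
# so each channel is scaled by (grand mean / channel mean), then clamped as a
# real pipeline would.

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """img: (H, W, 3) float RGB in [0, 1]. Returns a white-balanced copy."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means            # per-channel gain toward gray
    return np.clip(img * gains, 0.0, 1.0)

rng = np.random.default_rng(5)
cast = np.array([1.0, 0.8, 0.6])            # simulated color cast
img = rng.random((8, 8, 3)) * cast
balanced = gray_world_white_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means roughly equal
```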

  19. Acoustic radiation pressure: A 'phase contrast' agent for x-ray phase contrast imaging

    SciTech Connect

    Bailat, Claude J.; Hamilton, Theron J.; Rose-Petruck, Christoph; Diebold, Gerald J.

    2004-11-08

We show that the radiation pressure exerted by a beam of ultrasound can be used for contrast enhancement in high-resolution x-ray imaging of tissue and soft materials. Interfacial features of objects are highlighted as a result of both the displacement introduced by the ultrasound and the inherent sensitivity of x-ray phase contrast imaging to density variations. The potential of the method is demonstrated by imaging microscopic tumor phantoms embedded in tissue of a thickness typical of mammography. The ability to detect micrometer-size masses exceeds the resolution of currently available mammography imaging systems. The directionality of the acoustic radiation force and its localization in space permit the imaging of ultrasound-selected tissue volumes. The results presented here suggest that the method may permit the detection of tumors in soft tissue at an early stage of development.

  20. Acoustic property reconstruction of a pygmy sperm whale (Kogia breviceps) forehead based on computed tomography imaging.

    PubMed

    Song, Zhongchang; Xu, Xiao; Dong, Jianchen; Xing, Luru; Zhang, Meng; Liu, Xuecheng; Zhang, Yu; Li, Songhai; Berggren, Per

    2015-11-01

    Computed tomography (CT) imaging and sound experimental measurements were used to reconstruct the acoustic properties (density, velocity, and impedance) of the forehead tissues of a deceased pygmy sperm whale (Kogia breviceps). The forehead was segmented along the body axis and sectioned into cross section slices, which were further cut into sample pieces for measurements. Hounsfield units (HUs) of the corresponding measured pieces were obtained from CT scans, and regression analyses were conducted to investigate the linear relationships between the tissues' HUs and velocity, and HUs and density. The distributions of the acoustic properties of the head at axial, coronal, and sagittal cross sections were reconstructed, revealing that the nasal passage system was asymmetric and the cornucopia-shaped spermaceti organ was in the right nasal passage, surrounded by tissues and airsacs. A distinct dense theca was discovered in the posterior-dorsal area of the melon, which was characterized by low velocity in the inner core and high velocity in the outer region. Statistical analyses revealed significant differences in density, velocity, and acoustic impedance between all four structures, melon, spermaceti organ, muscle, and connective tissue (p < 0.001). The obtained acoustic properties of the forehead tissues provide important information for understanding the species' bioacoustic characteristics. PMID:26627786
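The regression step described above can be sketched directly: fit linear relations HU to density and HU to velocity from the measured sample pieces, then apply them voxel-wise to the CT volume. The calibration numbers below are made-up placeholders, not the paper's measurements:

```python
import numpy as np

# Ordinary least squares fit of a linear HU -> property relation, applied
# voxel-wise to a CT slice. Sample HU/velocity pairs are hypothetical.

def fit_linear(hu: np.ndarray, prop: np.ndarray):
    """Return (slope, intercept) of prop = slope * hu + intercept."""
    A = np.column_stack([hu, np.ones_like(hu)])
    (slope, intercept), *_ = np.linalg.lstsq(A, prop, rcond=None)
    return slope, intercept

# hypothetical calibration pieces: HU vs measured sound velocity (m/s)
hu = np.array([-60.0, -20.0, 10.0, 45.0, 80.0])
velocity = 1450.0 + 0.9 * hu + np.array([2.0, -1.0, 0.5, -2.0, 1.0])
slope, intercept = fit_linear(hu, velocity)
print(round(slope, 2), round(intercept, 1))

# voxel-wise application; density uses the same pattern, and acoustic
# impedance follows as density * velocity
ct_slice = np.array([[0.0, 50.0], [-30.0, 70.0]])
velocity_map = slope * ct_slice + intercept
print(velocity_map.round(1))
```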

  1. Military jet noise source imaging using multisource statistically optimized near-field acoustical holography.

    PubMed

    Wall, Alan T; Gee, Kent L; Neilsen, Tracianne B; McKinley, Richard L; James, Michael M

    2016-04-01

The identification of acoustic sources is critical to targeted noise reduction efforts for jets on high-performance tactical aircraft. This paper describes the imaging of acoustic sources from a tactical jet using near-field acoustical holography techniques. The measurement consists of a series of scans over the hologram with a dense microphone array. Partial field decomposition methods are performed to generate coherent holograms. Numerical extrapolation of data beyond the measurement aperture mitigates artifacts near the aperture edges. A multisource equivalent wave model is used that includes the effects of the ground reflection on the measurement. Multisource statistically optimized near-field acoustical holography (M-SONAH) is used to reconstruct apparent source distributions between 20 and 1250 Hz at four engine powers. It is shown that M-SONAH produces accurate field reconstructions for both inward and outward propagation in the region spanned by the physical hologram measurement. Reconstructions across the set of engine powers and frequencies suggest that directivity depends mainly on estimated source location; sources farther downstream radiate at a higher angle relative to the inlet axis. At some frequencies and engine powers, reconstructed fields exhibit multiple radiation lobes originating from overlapped source regions, which is a phenomenon relatively recently reported for full-scale jets. PMID:27106340

  2. Propagation of large-wavevector acoustic phonons new perspectives from phonon imaging

    NASA Astrophysics Data System (ADS)

    Wolfe, James P.

    Within the last decade a number of attempts have been made to observe the ballistic propagation of large wavevector acoustic phonons in crystals at low temperatures. Time-of-flight heat-pulse methods have difficulty in distinguishing between scattered phonons and ballistic phonons which travel dispersively at subsonic velocities. Fortunately, ballistic phonons can be identified by their highly anisotropic flux, which is observed by phonon imaging techniques. In this paper, several types of phonon imaging experiments are described which reveal the dispersive propagation of large-wavevector phonons and expose interesting details of the phonon scattering processes.

  3. Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time

    SciTech Connect

    Jarvis, Lesley A.; Zhang, Rongxiao; Gladstone, David J.; Jiang, Shudong; Hitchcock, Whitney; Friedman, Oscar D.; Glaser, Adam K.; Jermyn, Michael; Pogue, Brian W.

    2014-07-01

    Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator with the intensifier set × 100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the image, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected to result in improved safety and quality of radiation therapy.

  4. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    NASA Astrophysics Data System (ADS)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation, and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large-aperture optics, which drives the size and weight of such systems beyond what many applications can support. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to map the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results are presented herein.

  5. Toward high-sensitivity and high-resolution submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Thorwirth, Günter; Anders, Solveig; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Meyer, Hans-Georg; Schubert, Marco; Starkloff, Michael

    2011-11-01

Against a background of newly emerged security threats, the well-established idea of utilizing submillimeter-wave radiation for personal security screening applications has recently evolved into a promising technology. Possible application scenarios demand sensitive, fast, flexible and high-quality imaging techniques. At present, best results are obtained by passive imaging using cryogenic microbolometers as radiation detectors. Building upon the concept of a passive submillimeter-wave stand-off video camera introduced previously, we present the evolution of this concept into a practical application-ready imaging device. This has been achieved using a variety of measures such as optimizing the detector parameters, improving the scanning mechanism, increasing the sampling speed, and enhancing the image generation software. The camera concept is based on Cassegrain-type mirror optics, an optomechanical scanner, an array of 20 superconducting transition-edge sensors operated at a temperature of 450 to 650 mK, and a closed-cycle cryogen-free cooling system. The main figures of merit of the system include: a frequency band of 350 ± 40 GHz, an object distance of 7 to 10 m, a circular field of view of 1.05 m diameter, a spatial resolution in the image center of 2 cm at 8.5 m distance, a noise-equivalent temperature difference of 0.1 to 0.4 K, and a maximum frame rate of 10 Hz.

  6. A comparison of traffic estimates of nocturnal flying animals using radar, thermal imaging, and acoustic recording.

    PubMed

    Horton, Kyle G; Shriver, W Gregory; Buler, Jeffrey J

    2015-03-01

    There are several remote-sensing tools readily available for the study of nocturnally flying animals (e.g., migrating birds), each possessing unique measurement biases. We used three tools (weather surveillance radar, thermal infrared camera, and acoustic recorder) to measure temporal and spatial patterns of nocturnal traffic estimates of flying animals during the spring and fall of 2011 and 2012 in Lewes, Delaware, USA. Our objective was to compare measures among different technologies to better understand their animal detection biases. For radar and thermal imaging, the greatest observed traffic rate tended to occur at, or shortly after, evening twilight, whereas for the acoustic recorder, peak bird flight-calling activity was observed just prior to morning twilight. Comparing traffic rates during the night for all seasons, we found that mean nightly correlations between acoustics and the other two tools were weakly correlated (thermal infrared camera and acoustics, r = 0.004 ± 0.04 SE, n = 100 nights; radar and acoustics, r = 0.14 ± 0.04 SE, n = 101 nights), but highly variable on an individual nightly basis (range = -0.84 to 0.92, range = -0.73 to 0.94). The mean nightly correlations between traffic rates estimated by radar and by thermal infrared camera during the night were more strongly positively correlated (r = 0.39 ± 0.04 SE, n = 125 nights), but also were highly variable for individual nights (range = -0.76 to 0.98). Through comparison with radar data among numerous height intervals, we determined that flying animal height above the ground influenced thermal imaging positively and flight call detections negatively. Moreover, thermal imaging detections decreased with the presence of cloud cover and increased with mean ground flight speed of animals, whereas acoustic detections showed no relationship with cloud cover presence but did decrease with increased flight speed. We found sampling methods to be positively correlated when comparing mean nightly
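The comparison statistic used above, a within-night Pearson correlation between two instruments' traffic rates, averaged across nights with a standard error, can be sketched with made-up sample data:

```python
import numpy as np

# Per-night Pearson correlation between two instruments' traffic-rate series,
# then the across-night mean and standard error. Data below are synthetic
# stand-ins, not the study's measurements.

def nightly_correlations(rates_a, rates_b) -> np.ndarray:
    """rates_a, rates_b: lists of per-night arrays of traffic rates."""
    return np.array([np.corrcoef(a, b)[0, 1] for a, b in zip(rates_a, rates_b)])

rng = np.random.default_rng(2)
nights_radar = [rng.random(10) for _ in range(5)]
# thermal-camera rates loosely track radar on some nights, not others
nights_thermal = [r + 0.5 * rng.standard_normal(10) for r in nights_radar]

r = nightly_correlations(nights_radar, nights_thermal)
mean_r, se_r = r.mean(), r.std(ddof=1) / np.sqrt(len(r))
print(f"mean nightly r = {mean_r:.2f} +/- {se_r:.2f} SE over {len(r)} nights")
```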

  7. The development and potential of acoustic radiation force impulse (ARFI) imaging for carotid artery plaque characterization.

    PubMed

    Allen, Jason D; Ham, Katherine L; Dumont, Douglas M; Sileshi, Bantayehu; Trahey, Gregg E; Dahl, Jeremy J

    2011-08-01

Stroke is the third leading cause of death and long-term disability in the USA. Currently, surgical intervention decisions in asymptomatic patients are based upon the degree of carotid artery stenosis. While there is a clear benefit of endarterectomy for patients with severe (> 70%) stenosis, in those with high/moderate (50-69%) stenosis the evidence is less clear. Evidence suggests ischemic stroke is associated less with calcified and fibrous plaques than with those containing softer tissue, especially when accompanied by a thin fibrous cap. A reliable mechanism for the identification of individuals with atherosclerotic plaques which confer the highest risk for stroke is fundamental to the selection of patients for vascular interventions. Acoustic radiation force impulse (ARFI) imaging is a new ultrasonic-based imaging method that characterizes the mechanical properties of tissue by measuring displacement resulting from the application of acoustic radiation force. These displacements provide information about the local stiffness of tissue and can differentiate between soft and hard areas. Because arterial walls, soft tissue, atheromas, and calcifications have a wide range in their stiffness properties, they represent excellent candidates for ARFI imaging. We present results ranging from early phantom experiments and excised human limb studies to in vivo carotid artery scans, and provide evidence of the ability of ARFI to produce high-quality images that highlight mechanical differences in tissue stiffness not readily apparent in matched B-mode images. This allows ARFI to distinguish soft from hard plaques and differentiate characteristics associated with plaque vulnerability or stability. PMID:21447606

  8. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
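The "standard discrete cosine transform (DCT) coding approach" mentioned above reduces, per block, to a forward DCT, coefficient quantization, and an inverse DCT. A minimal sketch with an orthonormal 8x8 DCT-II and uniform quantization (the quantization step is an illustrative value, not the report's):

```python
import numpy as np

# Orthonormal DCT-II matrix, forward/inverse 2-D block transform, and lossy
# uniform quantization of the transform coefficients.

def dct_matrix(n: int = 8) -> np.ndarray:
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    T = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    T[0] /= np.sqrt(2.0)
    return T                                  # orthonormal: T @ T.T = I

def code_block(block: np.ndarray, q: float = 10.0) -> np.ndarray:
    T = dct_matrix(block.shape[0])
    coeff = T @ block @ T.T                   # forward 2-D DCT
    quantized = np.round(coeff / q) * q       # lossy uniform quantization
    return T.T @ quantized @ T                # inverse 2-D DCT

rng = np.random.default_rng(4)
block = rng.integers(0, 256, (8, 8)).astype(float)
rec = code_block(block)
print(float(np.abs(rec - block).max()))       # bounded reconstruction error
```

A lossy + lossless scheme of the kind described would then losslessly code the residual `block - rec` on top of this lossy layer.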

  9. Microstructure Imaging Using Frequency Spectrum Spatially Resolved Acoustic Spectroscopy F-Sras

    NASA Astrophysics Data System (ADS)

    Sharples, S. D.; Li, W.; Clark, M.; Somekh, M. G.

    2010-02-01

Material microstructure can have a profound effect on the mechanical properties of a component, such as strength and resistance to creep and fatigue. SRAS, spatially resolved acoustic spectroscopy, is a laser ultrasonic technique which can image microstructure using highly localized surface acoustic wave (SAW) velocity as a contrast mechanism, as this is sensitive to crystallographic orientation. The technique is noncontact, nondestructive, rapid, can be used on large components, and is highly tolerant of acoustic aberrations. Previously, the SRAS technique was demonstrated using a fixed-frequency excitation laser and a variable grating period (k-vector) to determine the most efficiently generated SAWs, and hence the velocity. Here, we demonstrate an implementation that uses a fixed grating period with a broadband laser excitation source. The velocity is determined by analyzing the measured frequency spectrum. Experimental results using this "frequency spectrum SRAS" (f-SRAS) method are presented. Images of microstructure on an industrially relevant material are compared to those obtained using the previous SRAS method ("k-SRAS"), and excellent agreement is observed. Moreover, f-SRAS is much simpler and potentially much more rapid than k-SRAS, as the velocity can be determined at each sample point in a single laser shot, rather than by scanning the grating period.
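The f-SRAS measurement reduces to a simple relation: with a fixed excitation fringe spacing Λ, the SAW velocity follows from the peak of the measured frequency spectrum as v = f_peak Λ. A minimal sketch on a synthetic signal (sampling rate, fringe spacing, and velocity are made-up illustrative values):

```python
import numpy as np

# Estimate SAW velocity from the dominant frequency of the detected signal,
# given a fixed grating (fringe) spacing: v = f_peak * spacing.

def saw_velocity(signal: np.ndarray, dt: float, fringe_spacing: float) -> float:
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    spec[0] = 0.0                            # ignore the DC component
    f_peak = freqs[np.argmax(spec)]
    return f_peak * fringe_spacing

# synthetic: a 3000 m/s SAW under a 24 um grating gives a 125 MHz tone
fs, spacing, v_true = 2e9, 24e-6, 3000.0
t = np.arange(2048) / fs
sig = np.sin(2 * np.pi * (v_true / spacing) * t) * np.hanning(len(t))
print(saw_velocity(sig, 1 / fs, spacing))    # ~3000.0 m/s
```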

  10. Eigenfunction analysis of stochastic backscatter for characterization of acoustic aberration in medical ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Varslot, Trond; Krogstad, Harald; Mo, Eirik; Angelsen, Bjørn A.

    2004-06-01

    Presented here is a characterization of aberration in medical ultrasound imaging. The characterization is optimal in the sense of maximizing the expected energy in a modified beamformer output of the received acoustic backscatter. Aberration correction based on this characterization takes the form of an aberration correction filter. The situation considered is frequently found in applications when imaging organs through a body wall: aberration is introduced in a layer close to the transducer, and acoustic backscatter from a scattering region behind the body wall is measured at the transducer surface. The scattering region consists of scatterers randomly distributed with very short correlation length compared to the acoustic wavelength of the transmit pulse. The scatterer distribution is therefore assumed to be δ correlated. This paper shows how maximizing the expected energy in a modified beamformer output signal naturally leads to eigenfunctions of a Fredholm integral operator, where the associated kernel function is a spatial correlation function of the received stochastic signal. Aberration characterization and aberration correction are presented for simulated data constructed to mimic aberration introduced by the abdominal wall. The results compare well with what is obtainable using data from a simulated point source.
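In discrete form, the eigenfunction idea becomes an eigendecomposition of the inter-element spatial covariance matrix. The sketch below uses a deliberately simplified rank-one model (a common signal seen through an unknown phase screen plus noise) rather than the paper's full Fredholm formulation; all quantities are synthetic:

```python
import numpy as np

# With delta-correlated scatterers insonified through a phase-screen
# aberrator, the dominant eigenvector of the sample spatial covariance of the
# received channel signals recovers the conjugate of the screen, which is
# exactly the per-channel correction filter one would apply.

rng = np.random.default_rng(3)
n_elem, n_real = 16, 400
phase_screen = np.exp(1j * rng.uniform(-1.0, 1.0, n_elem))  # unknown aberrator

# realizations: common complex signal per shot, distorted per channel, + noise
s = rng.standard_normal(n_real) + 1j * rng.standard_normal(n_real)
noise = 0.3 * (rng.standard_normal((n_real, n_elem))
               + 1j * rng.standard_normal((n_real, n_elem)))
received = np.outer(s, phase_screen) + noise

R = received.conj().T @ received / n_real          # sample spatial covariance
w, v = np.linalg.eigh(R)
filt = v[:, -1]                                    # dominant eigenvector

# the filter matches the conjugate of the phase screen (up to a global phase)
corr = (np.abs(np.vdot(filt, np.conj(phase_screen)))
        / (np.linalg.norm(filt) * np.linalg.norm(phase_screen)))
print(round(corr, 2))                              # close to 1.0
```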

  11. Study of a prototype high quantum efficiency thick scintillation crystal video-electronic portal imaging device

    SciTech Connect

    Samant, Sanjiv S.; Gopal, Arun

    2006-08-15

    Image quality in portal imaging suffers significantly from the loss in contrast and spatial resolution that results from the excessive Compton scatter associated with megavoltage x rays. In addition, portal image quality is further reduced due to the poor quantum efficiency (QE) of current electronic portal imaging devices (EPIDs). Commercial video-camera-based EPIDs or VEPIDs that utilize a thin phosphor screen in conjunction with a metal buildup plate to convert the incident x rays to light suffer from reduced light production due to low QE (<2% for Eastman Kodak Lanex Fast-B). Flat-panel EPIDs that utilize the same luminescent screen along with an a-Si:H photodiode array provide improved image quality compared to VEPIDs, but they are expensive and can be susceptible to radiation damage to the peripheral electronics. In this article, we present a prototype VEPID system for high quality portal imaging at sub-monitor-unit (subMU) exposures based on a thick scintillation crystal (TSC) that acts as a high QE luminescent screen. The prototype TSC system utilizes a 12 mm thick transparent CsI(Tl) (thallium-activated cesium iodide) scintillator for QE=0.24, resulting in significantly higher light production compared to commercial phosphor screens. The 25 x 25 cm^2 CsI(Tl) screen is coupled to a high spatial and contrast resolution Video-Optics plumbicon-tube camera system (1240 x 1024 pixels, 250 µm pixel width at isocenter, 12-bit ADC). As a proof-of-principle prototype, the TSC system with user-controlled camera target integration was adapted for use in an existing clinical gantry (Siemens BEAMVIEW PLUS) with the capability for online intratreatment fluoroscopy. Measurements of modulation transfer function (MTF) were conducted to characterize the TSC spatial resolution. The measured MTF along with measurements of the TSC noise power spectrum (NPS) were used to determine the system detective quantum efficiency (DQE). A theoretical expression of DQE(0) was developed.

  12. Study of a prototype high quantum efficiency thick scintillation crystal video-electronic portal imaging device.

    PubMed

    Samant, Sanjiv S; Gopal, Arun

    2006-08-01

    Image quality in portal imaging suffers significantly from the loss in contrast and spatial resolution that results from the excessive Compton scatter associated with megavoltage x rays. In addition, portal image quality is further reduced due to the poor quantum efficiency (QE) of current electronic portal imaging devices (EPIDs). Commercial video-camera-based EPIDs or VEPIDs that utilize a thin phosphor screen in conjunction with a metal buildup plate to convert the incident x rays to light suffer from reduced light production due to low QE (<2% for Eastman Kodak Lanex Fast-B). Flat-panel EPIDs that utilize the same luminescent screen along with an a-Si:H photodiode array provide improved image quality compared to VEPIDs, but they are expensive and can be susceptible to radiation damage to the peripheral electronics. In this article, we present a prototype VEPID system for high quality portal imaging at sub-monitor-unit (subMU) exposures based on a thick scintillation crystal (TSC) that acts as a high QE luminescent screen. The prototype TSC system utilizes a 12 mm thick transparent CsI(Tl) (thallium-activated cesium iodide) scintillator for QE=0.24, resulting in significantly higher light production compared to commercial phosphor screens. The 25 x 25 cm^2 CsI(Tl) screen is coupled to a high spatial and contrast resolution Video-Optics plumbicon-tube camera system (1240 x 1024 pixels, 250 µm pixel width at isocenter, 12-bit ADC). As a proof-of-principle prototype, the TSC system with user-controlled camera target integration was adapted for use in an existing clinical gantry (Siemens BEAMVIEW PLUS) with the capability for online intratreatment fluoroscopy. Measurements of modulation transfer function (MTF) were conducted to characterize the TSC spatial resolution. The measured MTF along with measurements of the TSC noise power spectrum (NPS) were used to determine the system detective quantum efficiency (DQE). A theoretical expression of DQE(0) was developed.
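    The quoted QE of 0.24 for a 12 mm crystal invites a quick back-of-envelope check. Treating quantum efficiency as the attenuated fraction QE = 1 - exp(-mu * t) (a standard simplification, not the paper's theoretical DQE(0) model), the implied effective attenuation coefficient is:

```python
import math

# Back-of-envelope: effective attenuation coefficient implied by QE = 0.24
# for a 12 mm (1.2 cm) CsI(Tl) slab, assuming QE = 1 - exp(-mu * t).
t_cm = 1.2
qe = 0.24
mu = -math.log(1 - qe) / t_cm   # 1/cm
print(round(mu, 3))
```

    The same relation shows why a thin phosphor screen with QE below 2% produces so much less light: at fixed mu, QE scales nearly linearly with thickness in the thin-screen limit.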

  13. High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques.

    PubMed

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras [1]. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in [1-5]). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame [6,7]. Binary defocusing DFP methods can achieve even greater speeds [8]. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis [9], facial animation [10], cardiac mechanics studies [11], and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674

  14. High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques

    PubMed Central

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras [1]. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in [1-5]). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame [6,7]. Binary defocusing DFP methods can achieve even greater speeds [8]. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis [9], facial animation [10], cardiac mechanics studies [11], and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674
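    The "three distorted patterns to depth" step both abstracts describe has a standard closed form. This sketch shows generic three-step phase-shifting fringe analysis (the textbook DFP recovery, not the authors' code): three fringe images with 2π/3 phase shifts yield the wrapped phase at every pixel, and after unwrapping, triangulation converts phase to depth. The toy "surface" phase map is assumed.

```python
import numpy as np

# Three-step phase-shifting: recover a phase map from three fringe images.
h, w = 4, 6
phi_true = np.tile(np.linspace(0, 4 * np.pi, w), (h, 1))   # toy phase map

shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
I1, I2, I3 = (0.5 + 0.5 * np.cos(phi_true + d) for d in shifts)

# Wrapped phase from the three distorted fringe images
phi = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)
phi_unwrapped = np.unwrap(phi, axis=1)   # remove the 2*pi jumps
print(np.allclose(phi_unwrapped, phi_true))
```

    Because only three images are needed per 3D frame, a camera/projector pair running at 90 fps yields the 30 Hz 3D video rate quoted above.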

  15. Bond-selective photoacoustic imaging by converting molecular vibration into acoustic waves

    PubMed Central

    Hui, Jie; Li, Rui; Phillips, Evan H.; Goergen, Craig J.; Sturek, Michael; Cheng, Ji-Xin

    2016-01-01

    The quantized vibration of chemical bonds provides a way of detecting specific molecules in a complex tissue environment. Unlike pure optical methods, for which imaging depth is limited to a few hundred micrometers by significant optical scattering, photoacoustic detection of vibrational absorption breaks through the optical diffusion limit by taking advantage of diffused photons and weak acoustic scattering. Key features of this method include both high scalability of imaging depth from a few millimeters to a few centimeters and chemical bond selectivity as a novel contrast mechanism for photoacoustic imaging. Its biomedical applications span detection of white matter loss and regeneration, assessment of breast tumor margins, and diagnosis of vulnerable atherosclerotic plaques. This review provides an overview of the recent advances made in vibration-based photoacoustic imaging and various biomedical applications enabled by this new technology. PMID:27069873

  16. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized, and the technical papers are included in the appendices.
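    For readers unfamiliar with the convolutional codes whose UEP properties the project examined, here is a minimal sketch of a rate-1/2, constraint-length-3 encoder (the common generators 7 and 5 octal). The function name and example input are illustrative, not from the report.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5) octal.
# Each input bit produces two output bits from the current 3-bit shift register.
def conv_encode(bits, g=(0b111, 0b101)):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111            # shift the new bit in
        out.extend(bin(state & gi).count("1") % 2 for gi in g)
    return out

print(conv_encode([1, 0, 1, 1]))
```

    Unequal error protection can then be obtained, for example, by puncturing fewer output bits for the perceptually important coefficients than for the rest.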

  17. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the sixth quarter of this research project the research team developed a method and the experimental procedures for acquiring the data needed for ultrasonic tomography of rock core samples under triaxial stress conditions as outlined in Task 10. Traditional triaxial compression experiments, where compressional and shear wave velocities are measured, provide little or no information about the internal spatial distribution of mechanical damage within the sample. The velocities measured platen-to-platen or sensor-to-sensor reflect an averaging of all the velocities occurring along that particular raypath across the boundaries of the rock. The research team is attempting to develop and refine a laboratory equivalent of seismic tomography for use on rock samples deformed under triaxial stress conditions. Seismic tomography, utilized for example in crosswell tomography, allows an imaging of the velocities within a discrete zone within the rock. Ultrasonic or acoustic tomography is essentially the extension of that field technology applied to rock samples deforming in the laboratory at high pressures. This report outlines the technical steps and procedures for developing this technology for use on weak, soft chalk samples. Laboratory tests indicate that the chalk samples exhibit major changes in compressional and shear wave velocities during compaction. Since chalk is the rock type responsible for the severe subsidence and compaction in the North Sea it was selected for the first efforts at tomographic imaging of soft rocks. Field evidence from the North Sea suggests that compaction, which has resulted in over 30 feet of subsidence to date, is heterogeneously distributed within the reservoir. The research team will attempt to image this very process in chalk samples. The initial tomographic studies (Scott et al., 1994a,b; 1998) were accomplished on well cemented, competent rocks such as Berea sandstone. The extension of the technology to weaker samples is

  18. Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging.

    PubMed

    Newman, Justin A; Zhang, Shijie; Sullivan, Shane Z; Dow, Ximeng Y; Becker, Michael; Sheedlo, Michael J; Stepanov, Sergey; Carlsen, Mark S; Everly, R Michael; Das, Chittaranjan; Fischetti, Robert F; Simpson, Garth J

    2016-07-01

    Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence and bright field by laser transmittance were all acquired simultaneously with perfect image registry at up to video-rate (15 frames s^-1). A simple change in the incident wavelength enabled simultaneous imaging by two-photon-excited ultraviolet fluorescence, one-photon-excited visible fluorescence and laser transmittance. Development of an analytical model for the signal-to-noise enhancement afforded by synchronous digitization suggests a 15.6-fold improvement over previous photon-counting techniques. This improvement in turn allowed acquisition on nearly an order of magnitude more pixels than the preceding generation of instrumentation and reductions of well over an order of magnitude in image acquisition times. These improvements have allowed detection of protein crystals on the order of 1 µm in thickness under cryogenic conditions in the beamline. These capabilities are well suited to support serial crystallography of crystals approaching 1 µm or less in dimension. PMID:27359145

  19. Acoustic Image Models for Obstacle Avoidance with Forward-Looking Sonar

    NASA Astrophysics Data System (ADS)

    Masek, T.; Kölsch, M.

    Long-range forward-looking sonars (FLS) have recently been deployed in autonomous unmanned vehicles (AUVs). We present models for various features in acoustic images, with the goal of using this sensor for altitude maintenance, obstacle detection and obstacle avoidance. First, we model the backscatter and FLS noise as pixel-based, spatially-varying intensity distributions. Experiments show that these models predict noise with an accuracy of over 98%. Next, the presence of acoustic noise from two other sources including a modem is reliably detected with a template-based filter and a threshold learned from training data. Lastly, the ocean floor location and orientation are estimated with a gradient-descent method using a site-independent template, yielding sufficiently accurate results in 95% of the frames. Temporal information is expected to further improve the performance.
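    A pixel-based, spatially varying noise model of the kind described above can be sketched as follows. This toy assumes a Gaussian distribution per pixel (the abstract does not specify the distribution family): fit a mean and standard deviation per pixel from training frames, then flag returns that deviate strongly as candidate obstacles. All data and thresholds are made up for illustration.

```python
import numpy as np

# Per-pixel background model for sonar frames: learn (mu, sigma) per pixel,
# then threshold new frames against the learned statistics.
rng = np.random.default_rng(2)
frames = rng.normal(100.0, 10.0, (50, 8, 8))   # training sonar frames (toy)
mu = frames.mean(axis=0)
sigma = frames.std(axis=0)

new = rng.normal(100.0, 10.0, (8, 8))
new[3, 4] += 80.0                              # inject a strong "obstacle" return
detections = np.abs(new - mu) > 4 * sigma      # per-pixel threshold
print(np.argwhere(detections))
```

    Learning the statistics per pixel, rather than one global threshold, is what makes the model spatially varying: near-field pixels with strong backscatter get proportionally wider acceptance bands.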

  20. Quantitative Analysis Of Sperm Motion Kinematics From Real-Time Video-Edge Images

    NASA Astrophysics Data System (ADS)

    Davis, Russell O.; Katz, David F.

    1988-02-01

    A new model of sperm swimming kinematics, which uses signal processing methods and multivariate statistical techniques to identify individual cell-motion parameters and unique cell populations, is presented. Swimming paths of individual cells are obtained using real-time, video-edge digitization. Raw paths are adaptively filtered to identify average paths, and measurements of space-time oscillations about average paths are made. Time-dependent frequency information is extracted from spatial variations about average paths using harmonic analysis. Raw-path and average-path measures such as curvature, curve length, and straight-line length, and measures of oscillations about average paths such as time-dependent amplitude and frequency variations, are used in a multivariate cluster analysis to identify unique cell populations. The entire process, including digitization of sperm video images, is computer-automated. Preliminary results indicate that this method of tracking, digitization, and kinematic analysis accurately identifies unique cell subpopulations, including the relative numbers of cells in each subpopulation, how subpopulations differ, and the extent and significance of such differences. With appropriate work, this approach may be useful for clinical discrimination between normal and abnormal semen specimens.
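    The curve-length and straight-line measures mentioned above are straightforward to compute from a digitized track. This sketch uses assumed names and a toy zig-zag path, not the paper's data; the linearity ratio (straight-line length over curvilinear length) is a standard derived measure of how directed the motion is.

```python
import numpy as np

# Path measures from a digitized (x, y) track: curvilinear length,
# straight-line (endpoint-to-endpoint) length, and their ratio.
def path_measures(xy):
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)
    curve_len = np.linalg.norm(steps, axis=1).sum()   # total path length
    straight_len = np.linalg.norm(xy[-1] - xy[0])     # net displacement
    linearity = straight_len / curve_len if curve_len else 0.0
    return curve_len, straight_len, linearity

# Toy zig-zag track: four diagonal unit steps alternating up/down
track = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
print(path_measures(track))
```

    On real tracks these measures would be computed on both the raw path and the adaptively filtered average path, as the abstract describes.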

  1. Acoustic imaging of the Mediterranean water outflowing through the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Biescas Gorriz, Berta; Carniel, Sandro; Sallarès, Valentí; Rodriguez Ranero, Cesar

    2016-04-01

    Acoustic reflectivity acquired with multichannel seismic reflection (MCS) systems allows one to detect and explore the thermohaline structure of the ocean with vertical and lateral resolutions on the order of 10 m, covering hundreds of kilometers in the lateral dimension and the full-depth water column. In this work we present a 2D MCS profile that crosses the Strait of Gibraltar, from the Alboran Sea to the inner Gulf of Cadiz (NE Atlantic Ocean). The MCS data were acquired during the Topomed-Gassis Cruise (European Science Foundation TopoEurope), which was carried out on board the Spanish R/V Sarmiento de Gamboa in October 2011. The strong thermohaline contrast between the Mediterranean water and the Atlantic water characterizes this area and allows us to visualize, with unprecedented resolution, the acoustic reflectivity associated with the dense flow of the Mediterranean water outflowing over the prominent slope of the Strait of Gibraltar. Over the first kilometers, the dense flow descends attached to the continental slope until it reaches its buoyancy depth at 700 m. It then detaches from the sea floor and continues flowing towards the Atlantic Ocean, occupying the layer at 700-1500 m depth and developing clear staircase layers. The reflectivity images display near-seabed reflections that could well correspond to turbidity layers. The XBT data acquired coincident in time and space with the MCS data will help us in the interpretation and analysis of the acoustic data.

  2. Imaging living cells with a combined high-resolution multi-photon-acoustic microscope

    NASA Astrophysics Data System (ADS)

    Schenkl, Selma; Weiss, Eike; Stark, Martin; Stracke, Frank; Riemann, Iris; Lemor, Robert; König, Karsten

    2007-02-01

    With increasing demand for in-vivo observation of living cells, microscopy techniques that do not require staining are becoming increasingly important. In this talk we present a combined multi-photon-acoustic microscope that can synchronously measure properties probed by ultrasound and by two-photon fluorescence. Ultrasound probes the local mechanical properties of a cell, while the high resolution image of the two-photon fluorescence delivers insight into cell morphology and activity. In the acoustic part of the microscope an ultrasound wave, with a frequency in the GHz range, is focused by an acoustic sapphire lens and detected by a piezoelectric transducer assembled to the lens. The achieved lateral resolution is in the range of 1 µm. Contrast in the images arises mainly from the local absorption of sound in the cells, related to properties such as mass density, stiffness, and viscous damping. Additionally, acoustic microscopy can access the cell shape and the state of the cell membrane, as it is an intrinsic volume-scanning technique. The optical part is based on the emission of fluorescent biomolecules naturally present in cells (e.g. NAD(P)H, protoporphyrin IX, lipofuscin, melanin). The nonlinear effect of two-photon absorption provides a high lateral and axial resolution without the need for confocal detection. In addition, in the near-IR, cell damage is drastically reduced in comparison to direct excitation in the visible or UV. Both methods can be considered minimally invasive, as they rely on intrinsic contrast mechanisms and dispense with the need for staining. First results on living cells are presented and discussed.
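    The quoted ~1 µm lateral resolution follows from the acoustic wavelength: at GHz frequencies, the wavelength of sound in water sets the diffraction-limited spot size. The speed of sound and frequency below are assumed round numbers, not values from the abstract.

```python
# Diffraction-limit estimate for GHz acoustic microscopy in water.
c_water = 1500.0            # speed of sound in water, m/s (assumed)
f = 1.0e9                   # acoustic frequency, Hz (assumed)
wavelength = c_water / f    # meters; on the order of 1 micrometer
print(wavelength)
```

    A high-aperture sapphire lens focuses to roughly this wavelength, consistent with the stated micrometer-scale resolution.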

  3. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the seventh quarter of the project the research team analyzed some of the acoustic velocity data and rock deformation data. The goal is to create a series of ''deformation-velocity maps'' which can outline the types of rock deformational mechanisms which can occur at high pressures and then associate those with specific compressional or shear wave velocity signatures. During this quarter, we began to analyze both the acoustical and deformational properties of the various rock types. Some of the preliminary velocity data from the Danian chalk will be presented in this report. This rock type was selected for the initial efforts as it will be used in the tomographic imaging study outlined in Task 10. This is one of the more important rock types in the study as the Danian chalk is thought to represent an excellent analog to the Ekofisk chalk that has caused so many problems in the North Sea. Some of the preliminary acoustic velocity data obtained during this phase of the project indicates that during pore collapse and compaction of this chalk, the acoustic velocities can change by as much as 200 m/s. Theoretically, this significant velocity change should be detectable in repeated successive 3-D seismic images. In addition, research continues with an analysis of the unconsolidated sand samples at high confining pressures obtained in Task 9. The analysis of the results indicates that sands with a 10% volume of fines can undergo liquefaction at lower stress conditions than sand samples without added fines. This liquefaction and/or sand flow is similar to ''shallow water'' flows observed during drilling in the offshore Gulf of Mexico.

  4. Three-dimensional imaging applications in Earth Sciences using video data acquired from an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    McLeod, Tara

    For three-dimensional (3D) aerial images, unmanned aerial vehicles (UAVs) are cheaper to operate and easier to fly than the typical manned craft mounted with a laser scanner. This project explores the feasibility of using 2D video images acquired with a UAV and transforming them into 3D point clouds. The Aeryon Scout -- a quad-copter micro UAV -- flew two missions: the first at York University Keele campus and the second at the Canadian Wollastonite Mine Property. Neptec's ViDAR software was used to extract 3D information from the 2D video using structure from motion. The resulting point clouds were sparsely populated, yet captured vegetation well. They were used successfully to measure fracture orientation in rock walls. Any improvement in the video resolution would cascade through the processing and improve the overall results.

  5. Assembly of a Multi-channel Video System to Simultaneously Record Cerebral Emboli with Cerebral Imaging

    PubMed Central

    Stoner-Duncan, Benjamin; Kim, Sae Jin; Mergeche, Joanna L.; Anastasian, Zirka H.; Heyer, Eric J.

    2011-01-01

    Stroke remains a significant risk of carotid revascularization for atherosclerotic disease. Emboli generated at the time of treatment either using endarterectomy or stent-angioplasty may progress with blood flow and lodge in brain arteries. Recently, the use of protection devices to trap emboli created at the time of revascularization has helped to establish a role for stent-supported angioplasty compared with endarterectomy. Several devices have been developed to reduce or detect emboli that may be dislodged during carotid artery stenting (CAS) to treat carotid artery stenosis. A significant challenge in assessing the efficacy of these devices is precisely determining when emboli are dislodged in real-time. To address this challenge, we devised a method of simultaneously recording fluoroscopic images, transcranial Doppler (TCD) data, vital signs, and digital video of the patient/physician. This method permits accurate causative analysis and allows procedural events to be precisely correlated to embolic events in real-time. PMID:21441834

  6. Video imaging system for automated shaping and analysis of complex locomotory behavior.

    PubMed

    Publicover, Nelson G; Hayes, Linda J; Fernando Guerrero, L; Hunter, Kenneth W

    2009-08-30

    Although many observational technologies have been developed for the study of behavior, most of these technologies have suffered from the inability to engender highly reproducible behaviors that can be observed and modified. We have developed ACROBAT (Automated Control in Real-Time of Operant Behavior and Training), a video imaging system and associated computer algorithms that allow the fully automated shaping and analysis of complex locomotory behaviors. While this operant conditioning system is particularly useful for measuring the acquisition and maintenance of complex topographies, it also provides a more general and user-friendly platform on which to develop novel paradigms for the study of learning and memory in animals. In this paper we describe the instrumentation and software developed, demonstrate the use of ACROBAT to shape a specific topography, and show how the system can be used to facilitate the study of arthritic pain in mice. PMID:19501618

  7. Image and video based remote target localization and tracking on smartphones

    NASA Astrophysics Data System (ADS)

    Wang, Qia; Lobzhanidze, Alex; Jang, Hyun; Zeng, Wenjun; Shang, Yi; Yang, Jingyu

    2012-06-01

    Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, their powerful sensing and computing capabilities. In this paper, we describe a novel and accurate image and video based remote target localization and tracking system using Android smartphones, leveraging their built-in sensors such as the camera, digital compass, GPS, etc. Even though many other distance estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, our system effectively takes advantage of the smartphone's user-friendly interface to achieve low complexity and high accuracy. Our experimental results show that our system works accurately and efficiently.

  8. A video imaging technique for assessing dermal exposure. II. Fluorescent tracer testing.

    PubMed

    Fenske, R A; Wong, S M; Leffingwell, J T; Spear, R C

    1986-12-01

    Laboratory and field evaluations were conducted to determine the suitability of employing a fluorescent tracer in conjunction with video imaging analysis to measure dermal exposure during pesticide applications. The Fluorescent Whitening Agent 4-methyl-7-diethylaminocoumarin and the organophosphate malathion were highly correlated (r = .985) when sprayed under controlled conditions. Deposition levels during field studies were correlated similarly (r = .942); however, variability in deposition ratios requires that field sampling be conducted to determine the ratio for a particular application. Penetration of the two compounds through cotton/polyester workshirt material demonstrated a high correlation (r = .979), whereas penetration of cotton/polyester coverall material was more variable (r = .834). The slopes of the regression lines for the two materials were not significantly different. The ratio of pesticide and tracer recovered from targets was consistently higher than the initial tank ratio due to differences in solubility and mixing. PMID:3799477
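    The reported r values and regression-slope comparisons are standard Pearson correlation and least-squares fits. This sketch reproduces that analysis on made-up deposition numbers (the study's actual measurements are not given in the abstract).

```python
import numpy as np

# Correlation and regression between tracer and pesticide deposition levels.
tracer = np.array([1.0, 2.1, 3.0, 3.9, 5.2])       # tracer deposition (toy data)
pesticide = np.array([2.0, 4.3, 5.9, 8.1, 10.4])   # malathion deposition (toy data)

r = np.corrcoef(tracer, pesticide)[0, 1]           # Pearson correlation coefficient
slope, intercept = np.polyfit(tracer, pesticide, 1)  # least-squares line
print(round(r, 3), round(slope, 2))
```

    Comparing regression slopes between the two fabric types, as the abstract does, amounts to a test of whether the fitted slopes differ significantly.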

  9. Image and video compression/decompression based on human visual perception system and transform coding

    SciTech Connect

    Fu, Chi Yung., Petrich, L.I., Lee, M.

    1997-02-01

    The quantity of information has been growing exponentially, and the form and mix of information have been shifting into the image and video areas. However, neither the storage media nor the available bandwidth can accommodate the vastly expanding requirements for image information. A vital, enabling technology here is compression/decompression. Our compression work is based on a combination of feature-based algorithms inspired by the human visual-perception system (HVS), and some transform-based algorithms (such as our enhanced discrete cosine transform, wavelet transforms), vector quantization and neural networks. All our work was done on desktop workstations using the C++ programming language and commercially available software. During FY 1996, we explored and implemented enhanced feature-based algorithms, vector quantization, and neural-network-based compression technologies. For example, we improved the feature compression for our feature-based algorithms by a factor of two to ten, a substantial improvement. We also found some promising results when using neural networks and applying them to some video sequences. In addition, we also investigated objective measures to characterize compression results, because traditional means such as the peak signal-to-noise ratio (PSNR) are not adequate to fully characterize the results, since such measures do not take into account the details of human visual perception. We have successfully used our one-year LDRD funding as seed money to explore new research ideas and concepts; the results of this work have led us to obtain external funding from the DoD. At this point, we are seeking matching funds from DOE to match the DoD funding so that we can bring such technologies to fruition. 9 figs., 2 tabs.
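    The PSNR measure the authors criticize as perceptually inadequate has a simple standard definition, shown below for 8-bit images (toy data, not the report's experiments). Its weakness is exactly what the abstract notes: two distortions with equal mean squared error, and hence equal PSNR, can look very different to a human observer.

```python
import numpy as np

# Peak signal-to-noise ratio between a reference and a test image (8-bit).
def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
noisy = np.clip(img + rng.normal(0.0, 5.0, img.shape), 0, 255).astype(np.uint8)
print(round(psnr(img, noisy), 1))
```

    HVS-inspired metrics weight errors by local contrast and spatial frequency instead of treating every pixel difference equally, which is the motivation behind the feature-based work described above.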

  10. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  11. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1981-06-10

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

  12. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  13. Video Image Analysis of Turbulent Buoyant Jets Using a Novel Laboratory Apparatus

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Colgan, R. E.; Ferencevych, P. G.

    2012-12-01

    Turbulent buoyant jets play an important role in the transport of heat and mass in a variety of environmental settings on Earth. Naturally occurring examples include the discharges from high-temperature seafloor hydrothermal vents and from some types of subaerial volcanic eruptions. Anthropogenic examples include flows from industrial smokestacks and the flow from the damaged well after the Deepwater Horizon oil leak of 2010. Motivated by a desire to find non-invasive methods for measuring the volumetric flow rates of turbulent buoyant jets, we have constructed a laboratory apparatus that can generate these types of flows with easily adjustable nozzle velocities and fluid densities. The jet fluid comprises a variable mixture of nitrogen and carbon dioxide gas, which can be injected at any angle with respect to the vertical into the quiescent surrounding air. To make the flow visible we seed the jet fluid with a water fog generated by an array of piezoelectric diaphragms oscillating at ultrasonic frequencies. The system can generate jets that have initial densities ranging from approximately 2-48% greater than the ambient air. We obtain independent estimates of the volumetric flow rates using well-calibrated rotameters, and collect video image sequences for analysis at frame rates up to 120 frames per second using a machine vision camera. We are using this apparatus to investigate several outstanding problems related to the physics of these flows and their analysis using video imagery. First, we are working to better constrain several theoretical parameters that describe the trajectory of these flows when their initial velocities are not parallel to the buoyancy force. The ultimate goal of this effort is to develop well-calibrated methods for establishing volumetric flow rates using trajectory analysis. Second, we are working to refine optical plume velocimetry (OPV), a non-invasive technique for estimating flow rates using temporal cross-correlation of image
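
    The core idea behind optical plume velocimetry is temporal cross-correlation: the brightness time series at two points separated along the flow are correlated, and the lag of the correlation peak gives a transit time, hence a speed. A minimal sketch with a synthetic brightness pulse (the frame rate and point separation are assumed round values, not the apparatus's):

    ```python
    def best_lag(a, b, max_lag):
        """Lag (in samples) at which series b best matches a delayed copy of a."""
        def corr(lag):
            return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
        return max(range(max_lag + 1), key=corr)

    fps, dx = 120.0, 0.05       # frame rate (Hz) and point separation (m), assumed
    upstream = [0.0] * 40 + [1.0, 2.0, 1.0] + [0.0] * 40
    downstream = [0.0] * 6 + upstream[:-6]  # same brightness pattern, 6 frames later
    lag = best_lag(upstream, downstream, 20)
    speed = dx * fps / lag
    print(lag, round(speed, 2))  # -> 6 1.0  (i.e. ~1 m/s transit speed)
    ```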

  14. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    PubMed Central

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954
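
    The generic geometry behind recovering X, Y, Z from a pair of oriented images is forward intersection of the two projection rays. The sketch below is a generic two-ray intersection helper, not the paper's simplified collinearity equations; the station positions and ray directions are synthetic, with one aerial-like (near-nadir) and one terrestrial-like (near-horizontal) ray:

    ```python
    def intersect_rays(p1, d1, p2, d2):
        """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
        dot = lambda u, v: sum(a * b for a, b in zip(u, v))
        w0 = [a - b for a, b in zip(p1, p2)]
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, w0), dot(d2, w0)
        denom = a * c - b * b          # nonzero unless the rays are parallel
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
        q1 = [p + t * u for p, u in zip(p1, d1)]
        q2 = [p + s * v for p, v in zip(p2, d2)]
        return [(x + y) / 2 for x, y in zip(q1, q2)]

    # Aerial-like station looking down and terrestrial-like station looking
    # sideways at the same synthetic ground point (10, 20, 5).
    ground = intersect_rays([0, 0, 100], [10, 20, -95], [100, 0, 5], [-90, 20, 0])
    print([round(v, 6) for v in ground])  # -> [10.0, 20.0, 5.0]
    ```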

  15. Multiwavelength Fluorescence Otoscope for Video-Rate Chemical Imaging of Middle Ear Pathology

    PubMed Central

    2015-01-01

    A common motif in otolaryngology is the lack of certainty regarding diagnosis for middle ear conditions, resulting in many patients being overtreated under the worst-case assumption. Although pneumatic otoscopy and adjunctive tests offer additional information, white light otoscopy has been the main tool for diagnosis of external auditory canal and middle ear pathologies for over a century. In middle ear pathologies, the unavailability of high-resolution structural and/or molecular imaging is particularly glaring, leading to a complicated and erratic decision analysis. Here, we propose a multiwavelength fluorescence-based video-rate imaging strategy that combines readily available optical elements and software components to create a novel otoscopic device. This modified otoscope enables low-cost, detailed and objective diagnosis of common middle ear pathological conditions. Using the detection of congenital cholesteatoma as a specific example, we demonstrate the feasibility of fluorescence imaging to differentiate this proliferative lesion from uninvolved middle ear tissue based on the characteristic autofluorescence signals. Availability of real-time, wide-field chemical information should enable more complete removal of cholesteatoma, allowing for better hearing preservation and substantially reducing the well-documented risks, costs and psychological effects of repeated surgical procedures. PMID:25226556

  16. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° ( φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954

  17. Using numerical models and volume rendering to interpret acoustic imaging of hydrothermal flow

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Bennett, K.; Takle, J.; Rona, P. A.; Silver, D.

    2009-12-01

    Our acoustic imaging system will be installed onto the Neptune Canada observatory at the Main Endeavour Field, Juan de Fuca Ridge, which is a Ridge 2000 Integrated Study Site. Thereafter, 16-30 Gb of acoustic imaging data will be collected daily. We are developing a numerical model of merging plumes that will be used to guide expectations and volume rendering software that transforms volumetric acoustic data into photo-like images. Hydrothermal flow is modeled as a combination of merged point sources which can be configured in any geometry. The model stipulates the dissipation or dilution of the flow and uses potential fields and complex analysis to combine the entrainment fields produced by each source. The strengths of this model are (a) the ability to handle a variety of scales, especially the small scale, as the potential fields can be specified with an effectively infinite boundary condition, (b) the ability to handle line, circle and areal source configurations, and (c) the ability to handle both high temperature focused flow and low temperature diffuse flow. This model predicts the vertical and horizontal velocities and the spatial distribution of effluent from combined sources of variable strength in a steady ambient velocity field. To verify the accuracy of the model’s results, we compare the model predictions of plume centerlines for the merging of two relatively strong point sources with the acoustic imaging data collected at Clam Acres, Southwest Vent Field, EPR 21°N in 1990. The two chimneys are 3.5 m apart and the plumes emanating from their tops merge approximately 18 mab. The model is able to predict the height of merging and the bending of the centerlines. Merging is implicitly observed at Grotto Vent, Main Endeavour Field, in our VIP 2000 data from July 2000: although there are at least 5 vigorous black smokers only a single plume is discernible in the acoustic imaging data. Furthermore, the observed Doppler velocity data increases with height
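
    The superposition of entrainment fields via complex analysis can be sketched in a minimal form: each plume entrains ambient fluid like a line sink with complex potential w(z) = -(m/2π) ln(z - z0), and the induced velocity is the conjugate of dw/dz summed over sources. The source positions and strengths below are illustrative, not the model's calibrated values:

    ```python
    import math

    def entrainment_velocity(z, sinks):
        """Velocity (complex u + i*v) at point z induced by superposed line
        sinks, each given as (position z0, strength m)."""
        dwdz = sum(-m / (2 * math.pi * (z - z0)) for z0, m in sinks)
        return dwdz.conjugate()

    # Two vents 3.5 m apart with equal entrainment strength: by symmetry,
    # their superposed entrainment fields cancel midway between them.
    sinks = [(0 + 0j, 1.0), (3.5 + 0j, 1.0)]
    v_mid = entrainment_velocity(1.75 + 0j, sinks)
    print(abs(v_mid) < 1e-9)  # -> True
    ```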

  18. Sensing the delivery and endocytosis of nanoparticles using magneto-photo-acoustic imaging

    PubMed Central

    Qu, M.; Mehrmohammadi, M.; Emelianov, S.Y.

    2015-01-01

    Many biomedical applications necessitate a targeted intracellular delivery of the nanomaterial to specific cells. Therefore, a non-invasive and reliable imaging tool is required to detect both the delivery and cellular endocytosis of the nanoparticles. Herein, we demonstrate that magneto-photo-acoustic (MPA) imaging can be used to monitor the delivery and to identify endocytosis of magnetic and optically absorbing nanoparticles. The relationship between photoacoustic (PA) and magneto-motive ultrasound (MMUS) signals from the in vitro samples was analyzed to identify the delivery and endocytosis of nanoparticles. The results indicated that during the delivery of nanoparticles to the vicinity of the cells, the PA and MMUS signals are almost linearly proportional. However, accumulation of nanoparticles within the cells leads to a nonlinear MMUS-PA relationship, due to non-linear MMUS signal amplification. Therefore, through longitudinal MPA imaging, it is possible to monitor the delivery of nanoparticles and identify the endocytosis of the nanoparticles by living cells. PMID:26640773

  19. Ultrasound-Stimulated Acoustic Emission in Thermal Image-Guided HIFU Therapy: A Phantom Study

    SciTech Connect

    Jiang, C. P.; Lin, W. T.; Chen, W. S.

    2006-05-08

    Magnetic resonance imaging (MRI) is a promising monitoring tool for non-invasive real-time thermal guidance of high intensity focused ultrasound (HIFU) during thermal ablation surgery. However, this approach has two main drawbacks: 1) most components need to be redesigned to be MR-compatible in order to avoid affecting the MR images, and 2) the cost of operating MRI facilities is high. Alternatively, the ultrasound-stimulated acoustic emission (USAE) method has been applied for detecting thermal variations in tissues. An optically transparent phantom, made from polyacrylamide and containing a thermally sensitive indicator protein (bovine serum albumin), was prepared for observing the HIFU-induced denaturation. A thermocouple was set up for validation of the temperature distribution. Experimental results show that the thermal image can be captured clearly under stationary conditions.

  20. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    PubMed

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images used in medical diagnostics and forensic science, video images for educational purposes were made at a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition video (1920 × 1080 pixels). The unprecedentedly high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) were recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make them suitable for self-study. PMID:26250075

  1. OPTIMISATION OF OCCUPATIONAL RADIATION PROTECTION IN IMAGE-GUIDED INTERVENTIONS: EXPLORING VIDEO RECORDINGS AS A TOOL IN THE PROCESS.

    PubMed

    Almén, Anja; Sandblom, Viktor; Rystedt, Hans; von Wrangel, Alexa; Ivarsson, Jonas; Båth, Magnus; Lundh, Charlotta

    2016-06-01

    The overall purpose of this work was to explore how video recordings can contribute to the process of optimising occupational radiation protection in image-guided interventions. Video-recorded material from two image-guided interventions was produced and used to investigate to what extent it is possible to observe and assess dose-affecting actions in video recordings. Using the recorded material, it was to some extent possible to connect the choice of imaging techniques to the medical events during the procedure and, to a lesser extent, to connect these technical and medical issues to the occupational exposure. It was possible to identify a relationship between the occupational exposure level of staff and their positioning and use of shielding. However, detailed values of the dose rates were not possible to observe in the recordings, and the change in occupational exposure level resulting from adjustments of exposure settings was not possible to identify. In conclusion, the use of video recordings is a promising tool for identifying dose-affecting instances, allowing for a deeper knowledge of the interdependency between the management of the medical procedure, the applied imaging technology and the occupational exposure level. However, for full information about the dose-affecting actions, the equipment used and the recording settings have to be thoroughly planned. PMID:27056142

  2. Transactions and Answer Judging in Multimedia Instruction: A Way to Transact with Features Appearing in Video and Graphic Images.

    ERIC Educational Resources Information Center

    Casey, Carl

    1992-01-01

    Discussion of transactions in computer-based instruction for ill-structured and visual domains focuses on two transactions developed for meteorology training that provide the capability to interact with video and graphic images at a very detailed level. Potential applications for the transactions are suggested, and early evaluation reports are…

  3. INVESTIGATION OF TRANSIENT ASPECTS OF ATMOSPHERIC DISPERSION PROCESSES IN THE WAKE OF A BUILDING THROUGH VIDEO IMAGE ANALYSIS

    EPA Science Inventory

    The processing of continuous video images is now very feasible and applicable to the study of the transient nature of atmospheric transport and the highly variable pollutant concentrations near buildings. Research is now ongoing to best develop and refine appropriate methods of a...

  4. Breaking the acoustic diffraction limit via nonlinear effect and thermal confinement for potential deep-tissue high-resolution imaging

    PubMed Central

    Yuan, Baohong; Pei, Yanbo; Kandukuri, Jayanth

    2013-01-01

    Our recently developed ultrasound-switchable fluorescence (USF) imaging technique showed that it was feasible to conduct high-resolution fluorescence imaging in a centimeter-deep turbid medium. Because the spatial resolution of this technique highly depends on the ultrasound-induced temperature focal size (UTFS), minimization of UTFS becomes important for further improving the spatial resolution of the USF technique. In this study, we found that UTFS can be significantly reduced below the diffraction-limited acoustic intensity focal size via nonlinear acoustic effects and thermal confinement by appropriately controlling ultrasound power and exposure time, which can potentially be used for deep-tissue high-resolution imaging. PMID:23479498

  5. Contribution of the supraglottic larynx to the vocal product: imaging and acoustic analysis

    NASA Astrophysics Data System (ADS)

    Gracco, L. Carol

    1996-04-01

    Horizontal supraglottic laryngectomy is a surgical procedure to remove a mass lesion located in the region of the pharynx superior to the true vocal folds. In contrast to full or partial laryngectomy, patients who undergo horizontal supraglottic laryngectomy often present with little or no involvement of the true vocal folds. This population provides an opportunity to examine the acoustic consequences of altering the pharynx while sparing the laryngeal sound source. Acoustic and magnetic resonance imaging (MRI) data were acquired in a group of four patients before and after supraglottic laryngectomy. Acoustic measures included the identification of vocal tract resonances and the fundamental frequency of vocal fold vibration. 3D reconstructions of the pharyngeal portion of each subject's vocal tract were made from MRIs taken during phonation, and volume measures were obtained. These measures reveal a variable, but often dramatic, difference in the surgically altered area of the pharynx and changes in the formant frequencies of the vowel /i/ post-surgically. In some cases the presence of the tumor created a deviation from the expected formant values pre-operatively, with post-operative values approaching normal. Patients who also underwent radiation treatment post-surgically tended to have greater constriction in the pharyngeal area of the vocal tract.

  6. Acoustic Property Reconstruction of a Neonate Yangtze Finless Porpoise's (Neophocaena asiaeorientalis) Head Based on CT Imaging

    PubMed Central

    Wei, Chong; Wang, Zhitao; Song, Zhongchang; Wang, Kexiong; Wang, Ding; Au, Whitlow W. L.; Zhang, Yu

    2015-01-01

    The reconstruction of the acoustic properties of a neonate finless porpoise’s head was performed using X-ray computed tomography (CT). The head of the deceased neonate porpoise was also segmented across the body axis and cut into slices. The averaged sound velocity and density were measured, and the Hounsfield units (HU) of the corresponding slices were obtained from computed tomography scanning. A regression analysis was employed to show the linear relationships between the Hounsfield unit and both sound velocity and density of samples. Furthermore, the CT imaging data were used to compare the HU value, sound velocity, density and acoustic characteristic impedance of the main tissues in the porpoise’s head. The results showed that the linear relationships between HU and both sound velocity and density were qualitatively consistent with previous studies on Indo-pacific humpback dolphins and Cuvier’s beaked whales. However, there was no significant increase of the sound velocity and acoustic impedance from the inner core to the outer layer in this neonate finless porpoise’s melon. PMID:25856588

  7. Acoustic property reconstruction of a neonate Yangtze finless porpoise's (Neophocaena asiaeorientalis) head based on CT imaging.

    PubMed

    Wei, Chong; Wang, Zhitao; Song, Zhongchang; Wang, Kexiong; Wang, Ding; Au, Whitlow W L; Zhang, Yu

    2015-01-01

    The reconstruction of the acoustic properties of a neonate finless porpoise's head was performed using X-ray computed tomography (CT). The head of the deceased neonate porpoise was also segmented across the body axis and cut into slices. The averaged sound velocity and density were measured, and the Hounsfield units (HU) of the corresponding slices were obtained from computed tomography scanning. A regression analysis was employed to show the linear relationships between the Hounsfield unit and both sound velocity and density of samples. Furthermore, the CT imaging data were used to compare the HU value, sound velocity, density and acoustic characteristic impedance of the main tissues in the porpoise's head. The results showed that the linear relationships between HU and both sound velocity and density were qualitatively consistent with previous studies on Indo-pacific humpback dolphins and Cuvier's beaked whales. However, there was no significant increase of the sound velocity and acoustic impedance from the inner core to the outer layer in this neonate finless porpoise's melon. PMID:25856588
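
    The regression analysis described above fits a straight line relating Hounsfield units to sound velocity (and, analogously, density). A minimal ordinary-least-squares sketch on synthetic (HU, velocity) pairs — the coefficients below are illustrative, not the paper's fitted values:

    ```python
    def linfit(xs, ys):
        """Ordinary least-squares slope and intercept for y ~ slope*x + intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    hu = [0, 20, 40, 60, 80]                     # Hounsfield units, synthetic
    velocity = [1480 + 1.1 * h for h in hu]      # sound velocity (m/s), synthetic
    slope, intercept = linfit(hu, velocity)
    print(round(slope, 3), round(intercept, 1))  # -> 1.1 1480.0
    ```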

  8. The concept of cyclic sound intensity and its application to acoustical imaging

    NASA Astrophysics Data System (ADS)

    Lafon, B.; Antoni, J.; Sidahmed, M.; Polac, L.

    2011-04-01

    This paper demonstrates how to take advantage of the cyclostationarity property of engine signals to define a new acoustical quantity, the cyclic sound intensity, which displays the instantaneous flux of acoustical energy in the angle-frequency domain during an average engine cycle. This quantity is attractive in that it possesses the ability to be instantaneous and averaged at the same time, thus reconciling two conflicting properties into a rigorous and unambiguous framework. Cyclic sound intensity is a rich concept with several original ramifications. Among other things, it returns a unique decomposition into instantaneous active and reactive parts. Associated with acoustical imaging techniques, it allows the construction of sound radiation movies that evolve within the engine cycle, each frame of which is a sound intensity map calculated at a specific time - or crankshaft angle - in the engine cycle. This enables the accurate localisation of sources in space, in frequency and in time (crankshaft angle). Furthermore, associated with cyclic Wiener filtering, this methodology makes it possible to decompose the overall radiated sound into several noise source contributions whose cyclic sound intensities can then be analysed independently.
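
    The "instantaneous and averaged at the same time" property comes from folding the signal over the engine cycle and averaging angle by angle. A minimal sketch of such angle-synchronous (cyclic) averaging on a synthetic signal — real cyclic sound intensity would combine pressure and particle-velocity signals, which is omitted here:

    ```python
    def cycle_average(signal, samples_per_cycle):
        """Average a cyclostationary signal over its cycles, one crank-angle
        sample at a time, yielding the average-cycle waveform."""
        n_cycles = len(signal) // samples_per_cycle
        return [
            sum(signal[c * samples_per_cycle + k] for c in range(n_cycles)) / n_cycles
            for k in range(samples_per_cycle)
        ]

    # Deterministic per-cycle pattern plus alternating "noise" that cancels out
    # in the cyclic average.
    pattern = [0.0, 1.0, 0.0, -1.0]
    signal = []
    for c in range(4):
        noise = 0.5 if c % 2 == 0 else -0.5
        signal.extend(v + noise for v in pattern)
    print(cycle_average(signal, 4))  # -> [0.0, 1.0, 0.0, -1.0]
    ```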

  9. Imaging of transient surface acoustic waves by full-field photorefractive interferometry.

    PubMed

    Xiong, Jichuan; Xu, Xiaodong; Glorieux, Christ; Matsuda, Osamu; Cheng, Liping

    2015-05-01

    A stroboscopic full-field imaging technique based on photorefractive interferometry for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera is presented. The photorefractive buildup of the space charge field during and after probe laser pulses is simulated numerically. The resulting anisotropic diffraction upon the refractive index grating and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam are modeled theoretically. The method is experimentally demonstrated by full-field imaging of the propagation of photoacoustically generated surface acoustic waves with a temporal resolution of nanoseconds. The surface acoustic wave propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser yields the capability of imaging SAWs with frequencies up to 60 MHz. PMID:26026514

  10. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma.

    PubMed

    Sano, Ryuichi; Peterson, Byron J; Teranishi, Masaru; Iwama, Naofumi; Kobayashi, Masahiro; Mukai, Kiyofumi; Pandya, Shwetang N

    2016-05-01

    A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using the Tikhonov regularization with the criterion of minimum generalized cross validation, which is the standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is available for the 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved. PMID:27250418

  11. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma

    NASA Astrophysics Data System (ADS)

    Sano, Ryuichi; Peterson, Byron J.; Teranishi, Masaru; Iwama, Naofumi; Kobayashi, Masahiro; Mukai, Kiyofumi; Pandya, Shwetang N.

    2016-05-01

    A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using the Tikhonov regularization with the criterion of minimum generalized cross validation, which is the standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is available for the 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.
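
    Tikhonov regularization with a minimum-GCV criterion, as used for the spatial inversion above, can be sketched on a tiny dense system. This is a two-unknown toy (real IRVB inversions involve large sparse arrays); the matrix, data, and candidate regularization parameters are synthetic, and with noise-free data GCV favors the weakest regularization:

    ```python
    def tikhonov_gcv(A, b, lambdas):
        """Solve min ||Ax-b||^2 + lam^2 ||x||^2 for a 2-column matrix A,
        choosing lam by minimum generalized cross validation (GCV)."""
        m = len(A)
        ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
               for i in range(2)]
        atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]

        def score(lam):
            M = [[ata[0][0] + lam ** 2, ata[0][1]],
                 [ata[1][0], ata[1][1] + lam ** 2]]
            det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
            Minv = [[M[1][1] / det, -M[0][1] / det],
                    [-M[1][0] / det, M[0][0] / det]]
            x = [Minv[i][0] * atb[0] + Minv[i][1] * atb[1] for i in range(2)]
            resid = sum((sum(A[k][i] * x[i] for i in range(2)) - b[k]) ** 2
                        for k in range(m))
            # trace of the influence matrix H = A (A^T A + lam^2 I)^-1 A^T
            tr = sum(Minv[i][j] * ata[j][i] for i in range(2) for j in range(2))
            return m * resid / (m - tr) ** 2, x

        best = min(lambdas, key=lambda lam: score(lam)[0])
        return best, score(best)[1]

    A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    b = [2.0, 1.0, 3.0]                      # consistent with x = [2, 1], no noise
    lam, x = tikhonov_gcv(A, b, [0.01, 0.1, 1.0])
    print(lam, [round(v, 2) for v in x])     # -> 0.01 [2.0, 1.0]
    ```

    With noisy data the GCV minimum moves to an intermediate lambda, trading residual fit against the effective degrees of freedom in the trace term.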

  12. Imaging of Acoustically Coupled Oscillations Due to Flow Past a Shallow Cavity: Effect of Cavity Length Scale

    SciTech Connect

    P. Oshkai; M. Geveci; D. Rockwell; M. Pollack

    2002-12-12

    Flow-acoustic interactions due to fully turbulent inflow past a shallow axisymmetric cavity mounted in a pipe are investigated using a technique of high-image-density particle image velocimetry in conjunction with unsteady pressure measurements. This imaging leads to patterns of velocity, vorticity, streamline topology, and hydrodynamic contributions to the acoustic power integral. Global instantaneous images, as well as time-averaged images, are evaluated to provide insight into the flow physics during tone generation. Emphasis is on the manner in which the streamwise length scale of the cavity alters the major features of the flow structure. These image-based approaches allow identification of regions of the unsteady shear layer that contribute to the instantaneous hydrodynamic component of the acoustic power, which is necessary to maintain a flow tone. In addition, combined image analysis and pressure measurements allow categorization of the instantaneous flow patterns that are associated with types of time traces and spectra of the fluctuating pressure. In contrast to consideration based solely on pressure spectra, it is demonstrated that locked-on tones may actually exhibit intermittent, non-phase-locked images, apparently due to low damping of the acoustic resonator. Locked-on flow tones (without modulation or intermittency), locked-on flow tones with modulation, and non-locked-on oscillations with short-term, highly coherent fluctuations are defined and represented by selected cases. Depending on which of these regimes occurs, the time-averaged Q (quality)-factor and the dimensionless peak pressure are substantially altered.

  13. Investigating the emotional response to room acoustics: A functional magnetic resonance imaging study.

    PubMed

    Lawless, M S; Vigeant, M C

    2015-10-01

    While previous research has demonstrated the powerful influence of pleasant and unpleasant music on emotions, the present study utilizes functional magnetic resonance imaging (fMRI) to assess the positive and negative emotional responses as demonstrated in the brain when listening to music convolved with varying room acoustic conditions. During fMRI scans, subjects rated auralizations created in a simulated concert hall with varying reverberation times. The analysis detected activations in the dorsal striatum, a region associated with anticipation of reward, for two individuals for the highest rated stimulus, though no activations were found for regions associated with negative emotions in any subject. PMID:26520354

  14. Acoustic radiation force impulse (ARFI) imaging of zebrafish embryo by high-frequency coded excitation sequence.

    PubMed

    Park, Jinhyoung; Lee, Jungwoo; Lau, Sien Ting; Lee, Changyang; Huang, Ying; Lien, Ching-Ling; Kirk Shung, K

    2012-04-01

    Acoustic radiation force impulse (ARFI) imaging has been developed as a non-invasive method for quantitative illustration of tissue stiffness or displacement. Conventional ARFI imaging (2-10 MHz) has been implemented in commercial scanners for illustrating elastic properties of several organs. The image resolution, however, is too coarse to study mechanical properties of micro-sized objects such as cells. This article thus presents a high-frequency coded excitation ARFI technique, with the ultimate goal of displaying elastic characteristics of cellular structures. Tissue-mimicking phantoms and zebrafish embryos are imaged with a 100-MHz lithium niobate (LiNbO₃) transducer, by cross-correlating tracked RF echoes with the reference. The phantom results show that the contrast of the ARFI image with coded excitation (14 dB) is better than that of the conventional ARFI image (9 dB). The depths of penetration are 2.6 and 2.2 mm, respectively. The stiffness data of the zebrafish demonstrate that the envelope is harder than the embryo region. The temporal displacement changes at the embryo and the chorion are as large as 36 and 3.6 μm, respectively. Consequently, this high-frequency ARFI approach may serve as a remote palpation imaging tool that reveals viscoelastic properties of small biological samples. PMID:22101757
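
    ARFI displacement estimation rests on cross-correlating tracked RF echoes with a reference echo: the lag of the correlation peak, scaled by c/(2·fs), gives the axial displacement. A minimal integer-lag sketch on a synthetic echo — the sampling rate and sound speed below are assumed round values, not the article's system parameters:

    ```python
    def echo_shift(ref, tracked, max_lag):
        """Integer lag (in samples) maximizing the cross-correlation between
        a reference RF echo and a tracked echo."""
        def score(lag):
            return sum(ref[i] * tracked[i + lag] for i in range(len(ref) - max_lag))
        return max(range(max_lag + 1), key=score)

    fs = 500e6          # RF sampling rate (Hz), assumed
    c = 1540.0          # speed of sound in tissue (m/s)
    ref = [0.0] * 30 + [0.2, 1.0, -0.8, 0.1] + [0.0] * 30
    tracked = [0.0] * 4 + ref[:-4]          # echo delayed by 4 samples
    lag = echo_shift(ref, tracked, 10)
    displacement_um = lag * c / (2 * fs) * 1e6
    print(lag, round(displacement_um, 3))   # -> 4 6.16
    ```

    Practical implementations interpolate around the correlation peak for sub-sample (sub-micrometer) precision; the integer-lag version above only shows the scaling.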

  15. A Review on Video/Image Authentication and Tamper Detection Techniques

    NASA Astrophysics Data System (ADS)

    Parmar, Zarna; Upadhyay, Saurabh

    2013-02-01

    With the innovations and development in sophisticated video editing technology and the widespread availability of video information and services in our society, it is becoming increasingly important to assure the trustworthiness of video information. Therefore, in surveillance, medical and various other fields, video contents must be protected against attempts to manipulate them, since such malicious alterations could affect decisions based on these videos. Many techniques have been proposed in the literature that assure the authenticity of video information, each in its own way. In this paper we present a brief survey of video authentication techniques along with their classification. These techniques are generally classified into the following categories: digital signature based techniques, watermark based techniques, and other authentication techniques.

  16. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  17. Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.; Lichter, Michael J.

    1994-01-01

    Video event trigger (VET) processes video image data to generate trigger signal when image shows significant change such as motion or appearance, disappearance, change in color, change in brightness, or dilation of object. System aids in efficient utilization of image-data-storage and image-data-processing equipment in applications in which many video frames show no changes, making it wasteful to record and analyze all frames when only relatively few show changes of interest. Applications include video recording of automobile crash tests and automated video monitoring of entrances, exits, parking lots, and secure areas.
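
The trigger logic in this brief amounts to a frame-difference test. A minimal sketch; the pixel and count thresholds are hypothetical and would be tuned per application:

```python
import numpy as np

def event_trigger(prev_frame, frame, pixel_thresh=25, count_thresh=50):
    """Fire a trigger when enough pixels change between consecutive frames.

    A sketch of a frame-difference event trigger (thresholds arbitrary).
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed > count_thresh

# Static scene: no trigger, so the frame need not be recorded.
still = np.full((64, 64), 128, dtype=np.uint8)
triggered_static = event_trigger(still, still)

# An object appears in one corner: the trigger fires.
moved = still.copy()
moved[:16, :16] = 255
triggered_motion = event_trigger(still, moved)
```

Only frames for which the trigger fires need to be stored or analyzed, which is the storage saving the brief describes.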

  18. Towards high-sensitivity and high-resolution submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Anders, Solveig; Zakosarenko, Viatcheslav; Schubert, Marco; Krause, Torsten; Krüger, André; Schulz, Marco; Meyer, Hans-Georg

    2011-05-01

    Against a background of newly emerged security threats, the well-established idea of utilizing submillimeter-wave radiation for personal security screening applications has recently evolved into a promising technology. Possible application scenarios demand sensitive, fast, flexible and high-quality imaging techniques. At present, the best results are obtained by passive imaging using cryogenic microbolometers as radiation detectors. Building upon the concept of a passive submillimeter-wave stand-off video camera introduced previously, we present the evolution of this concept into a practical, application-ready imaging device. This has been achieved using a variety of measures such as optimizing the detector parameters, improving the scanning mechanism, increasing the sampling speed, and enhancing the camera software. The image generation algorithm has been improved and an automatic sensor calibration technique has been implemented that takes advantage of redundancy in the sensor data. The concept is based on Cassegrain-type mirror optics, an opto-mechanical scanner providing spiraliform scanning traces, and an array of 20 superconducting transition-edge sensors (TES) operated at a temperature of 450-650 mK. The TES are cooled by a closed-cycle cooling system and read out by superconducting quantum interference devices (SQUIDs). The frequency band of operation centers around 350 GHz. The camera can operate at an object distance of 7-10 m. At 9 m distance it covers a field of view of 110 cm diameter, achieves a spatial resolution of 2 cm and a pixel NETD (noise equivalent temperature difference) of 0.1-0.4 K. The maximum frame rate is 10 frames per second.

  19. A Marker-less Monitoring System for Movement Analysis of Infants Using Video Images

    NASA Astrophysics Data System (ADS)

    Shima, Keisuke; Osawa, Yuko; Bu, Nan; Tsuji, Tokuo; Tsuji, Toshio; Ishii, Idaku; Matsuda, Hiroshi; Orito, Kensuke; Ikeda, Tomoaki; Noda, Shunichi

    This paper proposes a marker-less motion measurement and analysis system for infants. The system calculates eight types of evaluation indices related to the movement of an infant, such as “amount of body motion” and “activity of body”, from binary images that are extracted from video images using the background difference and frame difference. Thus, medical doctors can intuitively understand the movements of infants without long-term observations, which may be helpful in supporting their diagnoses and detecting disabilities and diseases at an early stage. The distinctive feature of this system is that the movements of infants can be measured without any markers for motion capture; it is thus expected that the natural and inherent tendencies of infants can be analyzed and evaluated. In this paper, the evaluation indices and features of movements of full-term infants (FTIs) and low birth weight infants (LBWIs) are compared using the developed prototype. We found that the amount of body motion and the symmetry of upper and lower body movements of LBWIs were lower than those of FTIs. The difference between the movements of FTIs and LBWIs can thus be evaluated using the proposed system.
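
One plausible reading of an index such as “amount of body motion” is the mean fraction of pixels that change between consecutive binary silhouettes extracted by background/frame differencing. A sketch with random binary frames standing in for real silhouettes; the paper's exact definition may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in binary silhouettes (True = infant pixels) as produced by
# background subtraction; here random masks for illustration only.
frames = rng.random((10, 48, 64)) < 0.2   # 10 binary frames, 48x64 pixels

# Frame differences between consecutive binary images: pixels that
# switched state indicate movement.
diffs = np.logical_xor(frames[1:], frames[:-1])

# "Amount of body motion" (one plausible definition): mean fraction of
# changed pixels over all consecutive frame pairs.
body_motion = diffs.mean()
```

A lower value of such an index for LBWIs than for FTIs would be consistent with the comparison reported in the abstract.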

  20. Acoustic structure quantification by using ultrasound Nakagami imaging for assessing liver fibrosis

    PubMed Central

    Tsui, Po-Hsiang; Ho, Ming-Chih; Tai, Dar-In; Lin, Ying-Hsiu; Wang, Chiao-Yin; Ma, Hsiang-Yang

    2016-01-01

    Acoustic structure quantification (ASQ) is a recently developed technique widely used for detecting liver fibrosis. Ultrasound Nakagami parametric imaging based on the Nakagami distribution has been widely used to model echo amplitude distribution for tissue characterization. We explored the feasibility of using ultrasound Nakagami imaging as a model-based ASQ technique for assessing liver fibrosis. Standard ultrasound examinations were performed on 19 healthy volunteers and 91 patients with chronic hepatitis B and C (n = 110). Liver biopsy and ultrasound Nakagami imaging analysis were conducted to compare the METAVIR score and Nakagami parameter. The diagnostic value of ultrasound Nakagami imaging was evaluated using receiver operating characteristic (ROC) curves. The Nakagami parameter obtained through ultrasound Nakagami imaging decreased with an increase in the METAVIR score (p < 0.0001), representing an increase in the extent of pre-Rayleigh statistics for echo amplitude distribution. The area under the ROC curve (AUROC) was 0.88 for the diagnosis of any degree of fibrosis (≥F1), whereas it was 0.84, 0.69, and 0.67 for ≥F2, ≥F3, and ≥F4, respectively. Ultrasound Nakagami imaging is a model-based ASQ technique that can be beneficial for the clinical diagnosis of early liver fibrosis. PMID:27605260
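
The Nakagami shape parameter used above is commonly estimated from moments of the echo envelope, m = (E[R^2])^2 / Var(R^2); fully developed (Rayleigh) speckle gives m ≈ 1, while the pre-Rayleigh statistics associated with higher fibrosis scores give m < 1. A sketch with simulated Rayleigh speckle (the moment estimator is standard, but details of the paper's imaging pipeline are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def nakagami_m(envelope):
    """Moment-based Nakagami shape estimator: m = E[R^2]^2 / Var(R^2)."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()

# Rayleigh-distributed envelope (fully developed speckle) -> m close to 1.
rayleigh_env = rng.rayleigh(scale=1.0, size=200_000)
m_rayleigh = nakagami_m(rayleigh_env)
```

In a parametric image, this estimator is applied in a sliding window over the envelope data, and the decrease of m below 1 tracks the shift toward pre-Rayleigh statistics described in the abstract.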

  1. Acoustic structure quantification by using ultrasound Nakagami imaging for assessing liver fibrosis.

    PubMed

    Tsui, Po-Hsiang; Ho, Ming-Chih; Tai, Dar-In; Lin, Ying-Hsiu; Wang, Chiao-Yin; Ma, Hsiang-Yang

    2016-01-01

    Acoustic structure quantification (ASQ) is a recently developed technique widely used for detecting liver fibrosis. Ultrasound Nakagami parametric imaging based on the Nakagami distribution has been widely used to model echo amplitude distribution for tissue characterization. We explored the feasibility of using ultrasound Nakagami imaging as a model-based ASQ technique for assessing liver fibrosis. Standard ultrasound examinations were performed on 19 healthy volunteers and 91 patients with chronic hepatitis B and C (n = 110). Liver biopsy and ultrasound Nakagami imaging analysis were conducted to compare the METAVIR score and Nakagami parameter. The diagnostic value of ultrasound Nakagami imaging was evaluated using receiver operating characteristic (ROC) curves. The Nakagami parameter obtained through ultrasound Nakagami imaging decreased with an increase in the METAVIR score (p < 0.0001), representing an increase in the extent of pre-Rayleigh statistics for echo amplitude distribution. The area under the ROC curve (AUROC) was 0.88 for the diagnosis of any degree of fibrosis (≥F1), whereas it was 0.84, 0.69, and 0.67 for ≥F2, ≥F3, and ≥F4, respectively. Ultrasound Nakagami imaging is a model-based ASQ technique that can be beneficial for the clinical diagnosis of early liver fibrosis. PMID:27605260

  2. Acoustic quasi-holographic images of scattering by vertical cylinders from one-dimensional bistatic scans.

    PubMed

    Baik, Kyungmin; Dudley, Christopher; Marston, Philip L

    2011-12-01

    When synthetic aperture sonar (SAS) is used to image elastic targets in water, subtle features can be present in the images associated with the dynamical response of the target being viewed. In an effort to improve the understanding of such responses, as well as to explore alternative image processing methods, a laboratory-based system was developed in which targets were illuminated by a transient acoustic source, and bistatic responses were recorded by scanning a hydrophone along a rail system. Images were constructed using a relatively conventional bistatic SAS algorithm and were compared with images based on supersonic holography. The holographic method is a simplification of one previously used to view the time evolution of a target's response [Hefner and Marston, ARLO 2, 55-60 (2001)]. In the holographic method, the space-time evolution of the scattering was used to construct a two-dimensional image with cross range and time as coordinates. Various features for vertically hung cylindrical targets were interpreted using high frequency ray theory. This includes contributions from guided surface elastic waves, as well as transmitted-wave features and specular reflection. PMID:22225041

  3. Stress-Induced Fracturing of Reservoir Rocks: Acoustic Monitoring and μCT Image Analysis

    NASA Astrophysics Data System (ADS)

    Pradhan, Srutarshi; Stroisz, Anna M.; Fjær, Erling; Stenebråten, Jørn F.; Lund, Hans K.; Sønstebø, Eyvind F.

    2015-11-01

    Stress-induced fracturing in reservoir rocks is an important issue for the petroleum industry. While productivity can be enhanced by a controlled fracturing operation, it can trigger borehole instability problems by reactivating existing fractures/faults in a reservoir. However, safe fracturing can improve the quality of operations during CO2 storage, geothermal installation and gas production at and from the reservoir rocks. Therefore, understanding the fracturing behavior of different types of reservoir rocks is essential for planning such field operations. In our study, stress-induced fracturing of rock samples has been monitored by acoustic emission (AE) and post-experiment computer tomography (CT) scans. We have used hollow cylinder cores of sandstones and chalks, which are representative of reservoir rocks. The fracture-triggering stress has been measured for different rocks and compared with theoretical estimates. The population of AE events shows the location of the main fracture arms, which is in good agreement with post-test CT image analysis, and the fracture patterns inside the samples are visualized through 3D image reconstructions. The amplitudes and energies of acoustic events clearly indicate initiation and propagation of the main fractures. Time evolution of the radial strain measured in the fracturing tests will later be compared to model predictions of fracture size.

  4. Negative refraction and imaging of acoustic waves in a two-dimensional square chiral lattice structure

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng-Dong; Wang, Yue-Sheng

    2016-05-01

    The negative refraction behavior and imaging effect for acoustic waves in a kind of two-dimensional square chiral lattice structure are studied in this paper. The unit cell of the proposed structure consists of four zigzag arms connected through a thin circular ring at the central part. The relation of the symmetry of the unit cell and the negative refraction phenomenon is investigated. Using the finite element method, we calculate the band structures and the equi-frequency surfaces of the system, and confirm the frequency range where the negative refraction is present. Due to the rotational symmetry of the unit cell, a phase difference is induced to the waves propagating from a point source through the structure to the other side. The phase difference is related to the width of the structure and the frequency of the source, so we can get a tunable deviated imaging. This kind of phenomenon is also demonstrated by the numerical simulation of two Gaussian beams that are symmetrical about the interface normal with the same incident angle, and the different negative refractive indexes are presented. Based on this special performance, a double-functional mirror-symmetrical slab is proposed for realizing acoustic focusing and beam separation.

  5. Image formation and system analysis of a scanning tomographic acoustic microscope

    NASA Astrophysics Data System (ADS)

    Kent, Samuel Davis, III

    This dissertation focuses on research that has been conducted to implement an automated Scanning Tomographic Acoustic Microscope (STAM), and research that has been performed to increase the understanding of the performance characteristics of the STAM. STAM technology permits high resolution microscopy which yields important information on the internal structure and acoustic properties of thick specimens, provided that the technology is utilized in a cohesive manner. Prior to the research conducted for this dissertation, only a proof-of-concept STAM had been developed; actual STAM imaging was difficult and impractical. This dissertation describes the hardware and software development that has led to the first automated STAM. It focuses on significant problems that were encountered and their solutions. Specifically, accurate data acquisition necessitated the development of special-purpose data acquisition hardware, rotational controls, frequency controls, and automation controls. Inaccuracies in the laser scanning hardware were identified as a significant source of reconstruction error. This error was removed by estimation and correction algorithms. Rotation of the specimen for multiple-angle tomography required the development of a noise-tolerant projection-pose estimation algorithm. An iterative technique for image enhancement is also presented. The resulting STAM system is evaluated to determine its performance characteristics. A component-wise resolution analysis is presented that specifies the resolution limit in both range and cross-range. The dependency of reconstruction quality on accurate representation of the magnitude and phase of the detected wave fields is also provided.

  6. A novel imaging technique based on the spatial coherence of backscattered waves: demonstration in the presence of acoustical clutter

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Pinton, Gianmarco F.; Lediju, Muyinatu; Trahey, Gregg E.

    2011-03-01

    In the last 20 years, the number of suboptimal and inadequate ultrasound exams has increased. This trend has been linked to the increasing population of overweight and obese individuals. The primary causes of image degradation in these individuals are often attributed to phase aberration and clutter. Phase aberration degrades image quality by distorting the transmitted and received pressure waves, while clutter degrades image quality by introducing incoherent acoustical interference into the received pressure wavefront. Although significant research efforts have pursued the correction of image degradation due to phase aberration, few efforts have characterized or corrected image degradation due to clutter. We have developed a novel imaging technique that is capable of differentiating ultrasonic signals corrupted by acoustical interference. The technique, named short-lag spatial coherence (SLSC) imaging, is based on the spatial coherence of the received ultrasonic wavefront at small spatial distances across the transducer aperture. We demonstrate comparative B-mode and SLSC images using full-wave simulations that include the effects of clutter and show that SLSC imaging generates contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) that are significantly better than B-mode imaging under noise-free conditions. In the presence of noise, SLSC imaging significantly outperforms conventional B-mode imaging in all image quality metrics. We demonstrate the use of SLSC imaging in vivo and compare B-mode and SLSC images of human thyroid and liver.
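
The SLSC metric described above is the normalized correlation between element signals at small lags across the aperture, summed over the first few lags. A minimal single-pixel sketch; real implementations operate on focused channel data over a short axial kernel, which is simplified here:

```python
import numpy as np

def slsc_value(channel_data, max_lag):
    """Short-lag spatial coherence for one image pixel.

    channel_data: (n_elements, n_samples) array of delayed RF signals.
    Returns the sum over lags 1..max_lag of the average normalized
    correlation between element signals separated by that lag.
    """
    n_elem = channel_data.shape[0]
    total = 0.0
    for m in range(1, max_lag + 1):
        corrs = []
        for i in range(n_elem - m):
            a, b = channel_data[i], channel_data[i + m]
            denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
            corrs.append(np.sum(a * b) / denom)
        total += float(np.mean(corrs))
    return total

# Perfectly coherent wavefront: identical signal on every element, so the
# coherence is 1 at every lag and the SLSC value equals max_lag.
rng = np.random.default_rng(2)
sig = rng.standard_normal(128)
coherent = np.tile(sig, (32, 1))
slsc_coherent = slsc_value(coherent, max_lag=5)
```

Incoherent clutter decorrelates rapidly with lag, so its SLSC value stays low, which is why the metric suppresses acoustical interference that corrupts conventional B-mode images.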

  7. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7 ± 0.3) pixels and a mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
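
The re-projection accuracy quoted above is a mean pixel distance between reprojected calibration points and their observed image locations. A sketch of the metric with hypothetical point sets (the system's calibration pipeline itself is not reproduced here):

```python
import numpy as np

def mean_reprojection_error(projected_px, observed_px):
    """Mean Euclidean distance (pixels) between reprojected and observed
    2D points -- the metric quoted for camera calibration accuracy."""
    d = np.asarray(projected_px, float) - np.asarray(observed_px, float)
    return float(np.mean(np.linalg.norm(d, axis=1)))

# Hypothetical points: every reprojection off by exactly 1 px in x.
obs = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
proj = obs + np.array([1.0, 0.0])
err_px = mean_reprojection_error(proj, obs)
```

Target registration error is computed the same way in millimeters, comparing registered 3D target positions against their measured ground-truth locations.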

  8. Imaging the position-dependent 3D force on microbeads subjected to acoustic radiation forces and streaming.

    PubMed

    Lamprecht, Andreas; Lakämper, Stefan; Baasch, Thierry; Schaap, Iwan A T; Dual, Jurg

    2016-07-01

    Acoustic particle manipulation in microfluidic channels is becoming a powerful tool in microfluidics to control micrometer sized objects in medical, chemical and biological applications. By creating a standing acoustic wave in the channel, the resulting pressure field can be employed to trap or sort particles. To design efficient and reproducible devices, it is important to characterize the pressure field throughout the volume of the microfluidic device. Here, we used an optically trapped particle as a probe to measure the forces in all three dimensions. By moving the probe through the volume of the channel, we imaged spatial variations in the pressure field. In the direction of the standing wave this revealed a periodic energy landscape for 2 μm beads, resulting in an effective stiffness of 2.6 nN m⁻¹ for the acoustic trap. We found that multiple fabricated devices showed consistent pressure fields. Surprisingly, forces perpendicular to the direction of the standing wave reached values of up to 20% of the main-axis values. To separate the direct acoustic force from secondary effects, we performed experiments with different bead sizes, which attributed some of the perpendicular forces to acoustic streaming. This method to image acoustically generated forces in 3D can be used to either minimize perpendicular forces or to employ them for specific applications in novel acoustofluidic designs. PMID:27302661
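
The effective stiffness reported above is the local slope of the force-position curve at a stable trap node. A sketch with a hypothetical sinusoidal standing-wave force profile; the wavelength and peak force below are illustrative assumptions, not the measured values:

```python
import numpy as np

# Illustrative standing-wave radiation force profile (not the measured data):
# F(x) = -F0 * sin(4*pi*x/lam), periodic with half the acoustic wavelength.
lam = 500e-6        # acoustic wavelength [m], hypothetical
F0 = 2e-13          # peak radiation force [N], hypothetical

x = np.linspace(-lam / 16, lam / 16, 2001)   # small region around a node
F = -F0 * np.sin(4 * np.pi * x / lam)

# Effective trap stiffness: negative slope of F(x) at the stable node x = 0,
# estimated by finite-difference gradient of the sampled force curve.
stiffness = -np.gradient(F, x)[x.size // 2]

# Closed form for this profile, for comparison: k = F0 * 4*pi / lam.
analytic = F0 * 4 * np.pi / lam
```

With these illustrative numbers the stiffness comes out near 5 nN m⁻¹, the same order of magnitude as the 2.6 nN m⁻¹ reported for the 2 μm beads.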

  9. Use Of Video In Microscopic And Ultrasonic Inspection

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.

    1994-01-01

    Two combinations of video and image-data-processing techniques, tone-pulse encoding and precision acoustic imaging, yield grain- and pore-size distributions. Knowledge of such and of fiber orientation important because these characteristics directly related to tensile strength, hardness, fracture toughness, fracture stress, and resistance to impact. One of these combinations of techniques used in nondestructive evaluation of composite parts; both play important roles in development of lightweight composites for use at high temperatures in advanced engines and aircraft. Video system provides easy access to information on diffraction and refraction like that described in article, "Ultrasonic Inspection With Angular-Power-Spectrum Scanning" (LEW-15386).

  10. Second generation video imaging technique for assessing dermal exposure (VITAE System).

    PubMed

    Fenske, R A; Birnbaum, S G

    1997-09-01

    Development of a second-generation video imaging technique for assessing occupational skin exposure (VITAE) is described, its performance is evaluated, and new procedures for exposure quantification are presented. The current VITAE system has higher resolution in regard to both its picture element array and gray scale when compared with the prototype system. System performance was evaluated during extended field deployment: variability was 3-4% during data acquisition for an individual worker evaluation session, and 10% over a 22-day study period. Variabilities attributable to subject positioning and image outlining procedures were 2.7 and 1.2%, respectively. Visual observations of fluorescent tracer deposition on skin were used to classify specific body regions as either exposed or unexposed, and two computer-based classification criteria were tested against the visual classification. These criteria were generally effective at minimizing false negative and false positive classifications; sensitivity and predictive value reached 95 and 99%, respectively, when analysis was preceded by presampling of a subset of images. Variability in skin pigmentation was found to have a substantial effect on fluorescent tracer quantification, leading to development of new calibration procedures. Standard curves were generated by spotting a range of tracer concentrations on volunteer subjects and quantifying fluorescence with the VITAE system. These data were then grouped either by subject or by the magnitude of the background signal of the unexposed skin. The ability to control for the effects of skin pigmentation was found to be comparable for these grouping methods, indicating that calibration curves can be developed without the creation of a unique curve for each subject. PMID:9291561
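
The sensitivity and predictive value quoted above follow from the standard confusion-matrix definitions. A sketch with hypothetical classification counts chosen only to land near the reported 95% and 99% figures:

```python
def sensitivity_and_ppv(tp, fp, fn):
    """Sensitivity = TP/(TP+FN); positive predictive value = TP/(TP+FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts for an exposed/unexposed skin-region classifier:
# 95 true positives, 1 false positive, 5 false negatives.
sens, ppv = sensitivity_and_ppv(tp=95, fp=1, fn=5)
```

High sensitivity bounds the false-negative rate (exposed regions missed), while high predictive value bounds the false-positive rate (unexposed regions flagged), matching the two error types the abstract discusses.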

  11. Green's Function Retrieval and Marchenko Imaging in a Dissipative Acoustic Medium.

    PubMed

    Slob, Evert

    2016-04-22

    Single-sided Marchenko equations for Green's function construction and imaging relate the measured reflection response of a lossless heterogeneous medium to an acoustic wave field inside this medium. I derive two sets of single-sided Marchenko equations for the same purpose, each in a heterogeneous medium, with one medium being dissipative and the other a corresponding medium with negative dissipation. Double-sided scattering data of the dissipative medium are required as input to compute the surface reflection response in the corresponding medium with negative dissipation. I show that each set of single-sided Marchenko equations leads to Green's functions with a virtual receiver inside the medium: one exists inside the dissipative medium and one in the medium with negative dissipation. This forms the basis of imaging inside a dissipative heterogeneous medium. I relate the Green's functions to the reflection response inside each medium, from which the image can be constructed. I illustrate the method with a one-dimensional example that shows the image quality. The method has a potentially wide range of imaging applications where the material under test is accessible from two sides. PMID:27152808

  12. Green's Function Retrieval and Marchenko Imaging in a Dissipative Acoustic Medium

    NASA Astrophysics Data System (ADS)

    Slob, Evert

    2016-04-01

    Single-sided Marchenko equations for Green's function construction and imaging relate the measured reflection response of a lossless heterogeneous medium to an acoustic wave field inside this medium. I derive two sets of single-sided Marchenko equations for the same purpose, each in a heterogeneous medium, with one medium being dissipative and the other a corresponding medium with negative dissipation. Double-sided scattering data of the dissipative medium are required as input to compute the surface reflection response in the corresponding medium with negative dissipation. I show that each set of single-sided Marchenko equations leads to Green's functions with a virtual receiver inside the medium: one exists inside the dissipative medium and one in the medium with negative dissipation. This forms the basis of imaging inside a dissipative heterogeneous medium. I relate the Green's functions to the reflection response inside each medium, from which the image can be constructed. I illustrate the method with a one-dimensional example that shows the image quality. The method has a potentially wide range of imaging applications where the material under test is accessible from two sides.

  13. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography

    NASA Astrophysics Data System (ADS)

    Ma, Jianguo; Martin, K. Heath; Li, Yang; Dayton, Paul A.; Shung, K. Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-05-01

    Imaging of the coronary vasa vasorum may enable assessment of vulnerable plaque development in the diagnosis of atherosclerotic disease. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for the design of intravascular acoustic angiography transducers.
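
The contrast-to-tissue ratio compared above is conventionally expressed in decibels from echo amplitudes. A sketch; the 12.6× amplitude ratio is hypothetical, chosen only to reproduce a ~22 dB figure like the one quoted for the 5 MHz transmitter:

```python
import math

def ctr_db(contrast_amplitude, tissue_amplitude):
    """Contrast-to-tissue ratio in dB from mean echo amplitudes."""
    return 20.0 * math.log10(contrast_amplitude / tissue_amplitude)

# Hypothetical amplitudes: microbubble super-harmonic echo ~12.6x the
# tissue echo corresponds to roughly 22 dB of CTR.
ctr = ctr_db(12.6, 1.0)
```

On this scale the reported improvement from 15 dB to 22 dB corresponds to the microbubble echo gaining roughly a factor of 2.2 in amplitude relative to tissue.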

  14. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography.

    PubMed

    Ma, Jianguo; Martin, K Heath; Li, Yang; Dayton, Paul A; Shung, K Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-05-01

    Imaging of the coronary vasa vasorum may enable assessment of vulnerable plaque development in the diagnosis of atherosclerotic disease. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for the design of intravascular acoustic angiography transducers. PMID:25856384

  15. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography

    PubMed Central

    Ma, Jianguo; Martin, K. Heath; Li, Yang; Dayton, Paul A.; Shung, K. Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-01-01

    Imaging of the coronary vasa vasorum may enable assessment of vulnerable plaque development in the diagnosis of atherosclerotic disease. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with the low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for design of intravascular acoustic angiography transducers. PMID:25856384

  16. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Musharraf Zaman, Ph.D.; Younane Abousleiman, Ph.D.

    2001-04-01

    The oil and gas industry has encountered significant problems in the production of oil and gas from weak rocks (such as chalks and limestones) and from unconsolidated sand formations. Problems include subsidence, compaction, sand production, and catastrophic shallow water sand flows during deep water drilling. Together these cost the petroleum industry hundreds of millions of dollars annually. The goal of this first quarterly report is to document the progress on the project to provide data on the acoustic imaging and mechanical properties of soft rock and marine sediments. The project is intended to determine the geophysical (acoustic velocities) rock properties of weak, poorly cemented rocks and unconsolidated sands. In some cases these weak formations can create problems for reservoir engineers. For example, it cost Phillips Petroleum 1 billion dollars to repair offshore production facilities damaged during the unexpected subsidence and compaction of the Ekofisk Field in the North Sea (Sulak 1991). Another example is the problem of shallow water flows (SWF) occurring in sands just below the seafloor encountered during deep water drilling operations. In these cases the unconsolidated sands uncontrollably flow up around the annulus of the borehole, resulting in loss of the drill casing. The $150 million loss of the Ursa development project in the U.S. Gulf Coast resulted from an uncontrolled SWF (Furlow 1998a,b; 1999a,b). The first three tasks outlined in the work plan are: (1) obtain rock samples, (2) construct new acoustic platens, and (3) calibrate and test the equipment. These have been completed as scheduled. Rock Mechanics Institute researchers at the University of Oklahoma have obtained eight different types of samples for the experimental program. These include: (a) Danian Chalk, (b) Cordoba Cream Limestone, (c) Indiana Limestone, (d) Ekofisk Chalk, (e) Oil Creek Sandstone, (f) unconsolidated Oil Creek sand, and (g) unconsolidated Brazos River sand.

  17. Variable ultrasound trigger delay for improved magnetic resonance acoustic radiation force imaging

    NASA Astrophysics Data System (ADS)

    Mougenot, Charles; Waspe, Adam; Looi, Thomas; Drake, James M.

    2016-01-01

    Magnetic resonance acoustic radiation force imaging (MR-ARFI) allows the quantification of microscopic displacements induced by ultrasound pulses, which are proportional to the local acoustic intensity. This study describes a new method to acquire MR-ARFI maps which reduces the measurement noise in the quantification of displacement and improves its robustness in the presence of motion. Two MR-ARFI sequences were compared in this study. The first sequence, ‘variable MSG’, involves switching the polarity of the motion sensitive gradient (MSG) between odd and even image frames. The second sequence, named ‘static MSG’, involves a variable ultrasound trigger delay to sonicate during the first or second MSG for odd and even image frames, respectively. As previously published, the data acquired with a variable MSG required the use of reference data acquired prior to any sonication to process displacement maps. In contrast, data acquired with a static MSG were converted to displacement maps without using reference data acquired prior to the sonication. Displacement maps acquired with both sequences were compared by performing sonications under three different conditions: in a polyacrylamide phantom, in the leg muscle of a freely breathing pig, and in the leg muscle of a pig under apnea. The comparison of images acquired at even image frames and odd image frames indicates that the sequence with a static MSG provides a significantly better steady state (p < 0.001 based on a Student’s t-test) than the images acquired with a variable MSG. In addition, no reference data prior to sonication were required to process displacement maps for data acquired with a static MSG. The absence of reference data prior to sonication provided a 41% reduction in the spatial distribution of noise (p < 0.001 based on a Student’s t-test) and reduced the sensitivity to motion for displacements acquired with a static MSG. No significant differences were expected and
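
    As background for how an MSG encodes micrometer-scale motion into image phase, a minimal sketch of the standard phase-to-displacement relation (the gradient strength and duration below are assumed example values, not the sequence parameters used in the study):

```python
import math

GAMMA = 2 * math.pi * 42.58e6  # proton gyromagnetic ratio, rad/s/T

def displacement_um(phase_rad, grad_T_per_m, grad_dur_s):
    """Displacement encoded by one motion-sensitizing gradient lobe,
    using the simplified relation phase = gamma * G * tau * d, so
    d = phase / (gamma * G * tau)."""
    d_m = phase_rad / (GAMMA * grad_T_per_m * grad_dur_s)
    return d_m * 1e6

# e.g. 0.1 rad of accrued phase with a 40 mT/m, 3 ms MSG lobe
print(round(displacement_um(0.1, 0.04, 0.003), 1))  # 3.1 (micrometers)
```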

  18. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  19. Chlamydomonas Xanthophyll Cycle Mutants Identified by Video Imaging of Chlorophyll Fluorescence Quenching.

    PubMed Central

    Niyogi, K. K.; Bjorkman, O.; Grossman, A. R.

    1997-01-01

    The photosynthetic apparatus in plants is protected against oxidative damage by processes that dissipate excess absorbed light energy as heat within the light-harvesting complexes. This dissipation of excitation energy is measured as nonphotochemical quenching of chlorophyll fluorescence. Nonphotochemical quenching depends primarily on the [delta]pH that is generated by photosynthetic electron transport, and it is also correlated with the amounts of zeaxanthin and antheraxanthin that are formed from violaxanthin by the operation of the xanthophyll cycle. To perform a genetic dissection of nonphotochemical quenching, we have isolated npq mutants of Chlamydomonas by using a digital video-imaging system. In excessive light, the npq1 mutant is unable to convert violaxanthin to antheraxanthin and zeaxanthin; this reaction is catalyzed by violaxanthin de-epoxidase. The npq2 mutant appears to be defective in zeaxanthin epoxidase activity, because it accumulates zeaxanthin and completely lacks antheraxanthin and violaxanthin under all light conditions. Characterization of these mutants demonstrates that a component of nonphotochemical quenching that develops in vivo in Chlamydomonas depends on the accumulation of zeaxanthin and antheraxanthin via the xanthophyll cycle. However, observation of substantial, rapid, [delta]pH-dependent nonphotochemical quenching in the npq1 mutant demonstrates that the formation of zeaxanthin and antheraxanthin via violaxanthin de-epoxidase activity is not required for all [delta]pH-dependent nonphotochemical quenching in this alga. Furthermore, the xanthophyll cycle is not required for survival of Chlamydomonas in excessive light. PMID:12237386
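
    The nonphotochemical quenching that the video-imaging screen measures is conventionally computed from the dark- and light-adapted maximal fluorescence; a minimal sketch with hypothetical Fm values:

```python
def npq(fm_dark, fm_light):
    """Stern-Volmer nonphotochemical quenching, computed from maximal
    fluorescence in the dark-adapted (Fm) and light-adapted (Fm')
    states: NPQ = (Fm - Fm') / Fm'."""
    return (fm_dark - fm_light) / fm_light

# A quenching wild type versus an npq1-like mutant that barely quenches
# (fluorescence values are hypothetical, for illustration only).
print(round(npq(1.0, 0.55), 2))  # 0.82
print(round(npq(1.0, 0.95), 2))  # 0.05
```

A per-pixel version of this computation over fluorescence video frames is what allows mutant colonies to stand out in the imaging screen.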

  20. A new engineering approach to reveal correlation of physiological change and spontaneous expression from video images

    NASA Astrophysics Data System (ADS)

    Yang, Fenglei; Hu, Sijung; Ma, Xiaoyun; Hassan, Harnani; Wei, Dongqing

    2015-03-01

    Spontaneous expression is associated with physiological states, e.g., heart rate, respiration, oxygen saturation (SpO2%), and heart rate variability (HRV). There have not yet been sufficient efforts to explore the correlation between physiological change and spontaneous expression. This study aims to examine how spontaneous expression is associated with physiological changes, under an approved protocol or through the videos provided by the Denver Intensity of Spontaneous Facial Action Database. Unlike a posed expression, motion artefact in spontaneous expression is one of the inevitable challenges to be overcome in the study. To obtain physiological signs from a region of interest (ROI), a new engineering approach is being developed with an artefact-reduction method that consolidates 3D active appearance model (AAM) based tracking and affine-transformation-based alignment with opto-physiological-model-based imaging photoplethysmography. Also, a statistical association space is used to interpret the correlation of spontaneous expressions and physiological states, including their probability densities, by means of a Gaussian mixture model. The present work reveals a new avenue for studying associations of spontaneous expressions and physiological states, with prospective applications in physiological and psychological assessment.
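
    As a minimal illustration of extracting one such physiological sign (pulse rate) from a video-derived photoplethysmographic trace, a sketch using autocorrelation over a physiological lag range (the signal is synthetic; this is not the study's pipeline):

```python
import math

def heart_rate_bpm(ppg, fs_hz):
    """Estimate pulse rate from a mean-ROI-intensity iPPG trace by
    locating the autocorrelation peak within a 40-180 bpm lag range."""
    n = len(ppg)
    mean = sum(ppg) / n
    x = [v - mean for v in ppg]
    def acf(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag))
    lo, hi = int(fs_hz * 60 / 180), int(fs_hz * 60 / 40)
    lag = max(range(lo, hi + 1), key=acf)  # dominant period in samples
    return 60 * fs_hz / lag

fs = 30.0  # a typical video frame rate
# Synthetic 1.2 Hz (72 bpm) pulse signal sampled at the frame rate.
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(300)]
print(round(heart_rate_bpm(sig, fs)))  # 72
```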

  1. Application of Video Image Correlation Techniques to the Space Shuttle External Tank Foam Materials

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Nemeth, Michael P.

    2005-01-01

    Results that illustrate the use of a video-image-correlation-based displacement and strain measurement system to assess the effects of material nonuniformities on the behavior of the sprayed-on foam insulation (SOFI) used for the thermal protection system on the Space Shuttle External Tank are presented. Standard structural verification specimens for the SOFI material with and without cracks and subjected to mechanical or thermal loading conditions were tested. Measured full-field displacements and strains are presented for selected loading conditions to illustrate the behavior of the foam and the viability of the measurement technology. The results indicate that significant strain localization can occur in the foam because of material nonuniformities. In particular, elongated cells in the foam can interact with other geometric or material discontinuities in the foam and develop large-magnitude localized strain concentrations that likely initiate failures. Furthermore, some of the results suggest that continuum mechanics and linear elastic fracture mechanics might not adequately represent the physical behavior of the foam, and failure predictions based on homogeneous linear material models are likely to be inadequate.
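
    Video image correlation recovers displacement by matching speckle subsets between reference and deformed frames; a minimal 1D sketch using zero-normalized cross-correlation (the subset values are synthetic):

```python
def ncc(a, b):
    """Zero-normalized cross-correlation of two equally sized subsets."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def best_shift(reference, deformed, max_shift):
    """1D subset tracking: the shift maximizing NCC against the
    reference subset approximates the local displacement."""
    scores = {s: ncc(reference, deformed[s:s + len(reference)])
              for s in range(max_shift + 1)}
    return max(scores, key=scores.get)

ref = [1, 2, 8, 3, 1, 0]
img = [0, 0, 1, 2, 8, 3, 1, 0]   # same speckle pattern shifted by 2 px
print(best_shift(ref, img, 2))    # 2
```

Full-field systems repeat this match for a grid of subsets in two dimensions and differentiate the displacement field to obtain strains.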

  3. All-optical video-image encryption with enforced security level using independent component analysis

    NASA Astrophysics Data System (ADS)

    Alfalou, A.; Mansour, A.

    2007-10-01

    In the last two decades, wireless communications have been introduced in various applications. However, the transmitted data can be intercepted at any moment by non-authorized people, which explains why data encryption and secure transmission have gained enormous popularity. To secure data transmission, two aspects must be addressed: transmission rate and encryption security level. In this paper, we address both aspects by proposing a new video-image transmission scheme. This new system combines the high transmission rate of optics with powerful signal processing tools to secure the transmitted data. The main idea of our approach is to secure transmitted information at two levels: at the classical level, by adapting standard optical techniques, and at a second level (spatial diversity), by using independent transmitters. At the second level, a hacker would need to intercept not only one channel but all of them in order to retrieve the information. At the receiver, we can easily apply ICA algorithms to decrypt the received signals and retrieve the information.
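
    The spatial-diversity idea can be illustrated with a toy two-channel mixing model. A real receiver would estimate the unmixing matrix blindly with an ICA algorithm; this sketch assumes the mixing matrix is known, which is enough to show why a single intercepted channel reveals neither source:

```python
# Two independent source streams are mixed onto two channels: x = A s.
def mix(a, s1, s2):
    return ([a[0][0] * u + a[0][1] * v for u, v in zip(s1, s2)],
            [a[1][0] * u + a[1][1] * v for u, v in zip(s1, s2)])

def unmix(a, x1, x2):
    """Invert the 2x2 mixture (ICA would estimate this inverse blindly)."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    inv = [[a[1][1] / det, -a[0][1] / det],
           [-a[1][0] / det, a[0][0] / det]]
    return mix(inv, x1, x2)

A = [[0.8, 0.3], [0.2, 0.7]]          # illustrative channel mixing
s1, s2 = [1.0, 0.0, 1.0, 1.0], [0.0, 1.0, 1.0, 0.0]
x1, x2 = mix(A, s1, s2)               # each xi alone blends both sources
r1, r2 = unmix(A, x1, x2)             # both channels together recover s1, s2
print([round(v, 6) for v in r1])
```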

  4. The utility of acoustic radiation force impulse imaging in diagnosing acute appendicitis and staging its severity

    PubMed Central

    Göya, Cemil; Hamidi, Cihad; Okur, Mehmet Hanifi; İçer, Mustafa; Oğuz, Abdullah; Hattapoğlu, Salih; Çetinçakmak, Mehmet Güli; Teke, Memik

    2014-01-01

    PURPOSE The aim of this study was to investigate the feasibility of using acoustic radiation force impulse (ARFI) imaging to diagnose acute appendicitis. METHODS Abdominal ultrasonography (US) and ARFI imaging were performed in 53 patients that presented with right lower quadrant pain, and the results were compared with those obtained in 52 healthy subjects. Qualitative evaluation of the patients was conducted by Virtual Touch™ tissue imaging (VTI), while quantitative evaluation was performed by Virtual Touch™ tissue quantification (VTQ) measuring the shear wave velocity (SWV). The severity of appendix inflammation was observed and rated using ARFI imaging in patients diagnosed with acute appendicitis. Alvarado scores were determined for all patients presenting with right lower quadrant pain. All patients diagnosed with appendicitis received appendectomies. The sensitivity and specificity of ARFI imaging relative to US was determined upon confirming the diagnosis of acute appendicitis via histopathological analysis. RESULTS The Alvarado score had a sensitivity and specificity of 70.8% and 20%, respectively, in detecting acute appendicitis. Abdominal US had 83.3% sensitivity and 80% specificity, while ARFI imaging had 100% sensitivity and 98% specificity, in diagnosing acute appendicitis. The median SWV value was 1.11 m/s (range, 0.6–1.56 m/s) for healthy appendix and 3.07 m/s (range, 1.37–4.78 m/s) for acute appendicitis. CONCLUSION ARFI imaging may be useful in guiding the clinical management of acute appendicitis, by helping its diagnosis and determining the severity of appendix inflammation. PMID:25323836
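
    Sensitivity and specificity follow directly from confusion-matrix counts; the counts below are hypothetical, chosen only to reproduce rates like those reported for ARFI:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: every appendicitis case detected (no false
# negatives), one healthy subject misclassified.
sens, spec = sens_spec(tp=48, fn=0, tn=49, fp=1)
print(f"{sens:.0%} sensitivity, {spec:.0%} specificity")
# prints "100% sensitivity, 98% specificity"
```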

  5. Evaluation of real-time acoustical holography for breast imaging and biopsy guidance

    NASA Astrophysics Data System (ADS)

    Lehman, Constance D.; Andre, Michael P.; Fecht, Barbara A.; Johansen, Jennifer M.; Shelby, Ronald L.; Shelby, Jerod O.

    1999-05-01

    Ultrasound is an attractive modality for adjunctive characterization of certain breast lesions, but it is not considered specific for cancer and it is not recommended for screening. An imaging technique remarkably different from pulse-echo ultrasound, termed Optical Sonography™ (Advanced Diagnostics, Inc.), uses the through-transmission signal. The method was applied to breast examinations in 41 asymptomatic and symptomatic women ranging in age from 18 to 83 years to evaluate this imaging modality for detection and characterization of breast disease and normal tissue. This approach uses coherent sound and coherent light to produce real-time, large field-of-view images with pronounced edge definition in soft tissues of the body. The system patient interface was modified to improve coupling to the breast and bring the chest wall to within 3 cm of the sound beam. System resolution (full width at half maximum of the line-spread function) was 0.5 mm for a swept-frequency beam centered at 2.7 MHz. Resolution degrades slightly in the periphery of the very large 15.2-cm field of view. Dynamic range of the reconstructed 'raw' images (no post-processing) was 3000:1. Included in the study population were women with dense parenchyma, palpable ductal carcinoma in situ with negative mammography, superficial and deep fibroadenomas, and calcifications. Successful breast imaging was performed in 40 of 41 women. These images were then compared with images generated using conventional X-ray mammography and pulse-echo ultrasound. Margins of lesions and internal textures were particularly well defined and provided substantial contrast to fatty and dense parenchyma. In two malignant lesions, Optical Sonography™ appeared to approximate tumor extent, relative to mammography, more closely than pulse-echo sonography did. 
These preliminary studies indicate the method has unique potential for detecting, differentiating, and guiding the biopsy of breast lesions using real-time acoustical holography.
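
    The resolution figure quoted (full width at half maximum of the line-spread function) can be computed from a sampled profile; a minimal sketch with a synthetic triangular LSF:

```python
def fwhm(xs, ys):
    """Full width at half maximum of a sampled line-spread function,
    with linear interpolation at the half-maximum crossings."""
    half = max(ys) / 2
    def crossing(i):  # interpolate between samples i and i+1
        return xs[i] + (half - ys[i]) * (xs[i + 1] - xs[i]) / (ys[i + 1] - ys[i])
    left = next(crossing(i) for i in range(len(ys) - 1)
                if ys[i] < half <= ys[i + 1])
    right = next(crossing(i) for i in range(len(ys) - 1)
                 if ys[i] >= half > ys[i + 1])
    return right - left

# Synthetic triangular LSF (positions in mm) peaking at x = 1.0.
xs = [0.5, 0.75, 1.0, 1.25, 1.5]
ys = [0.0, 0.5, 1.0, 0.5, 0.0]
print(fwhm(xs, ys))  # 0.5
```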

  6. Preliminary study of copper oxide nanoparticles acoustic and magnetic properties for medical imaging

    NASA Astrophysics Data System (ADS)

    Perlman, Or; Weitz, Iris S.; Azhari, Haim

    2015-03-01

    The implementation of multimodal imaging in medicine is highly beneficial, as different physical properties may provide complementary information, augmented detection ability, and diagnosis verification. Nanoparticles have recently been used as contrast agents for various imaging modalities. Their significant advantage over conventional large-scale contrast agents is the ability of detection at early stages of disease, being less prone to obstacles on their path to the target region, and possible conjugation to therapeutics. Copper ions play an essential role in human health; they are used as a cofactor for multiple key enzymes involved in various fundamental biochemical processes. Extremely small copper oxide nanoparticles (CuO-NPs) are readily soluble in water with high colloidal stability, yielding high bioavailability. The goal of this study was to examine the magnetic and acoustic characteristics of CuO-NPs in order to evaluate their potential to serve as a contrast agent for both MRI and ultrasound. CuO-NPs 7 nm in diameter were synthesized by a hot solution method. The particles were scanned using a 9.4 T MRI scanner and demonstrated concentration-dependent T1 relaxation time shortening. In addition, it was shown that CuO-NPs can be detected using ultrasonic B-scan imaging. Finally, speed-of-sound-based ultrasonic computed tomography was applied and showed that CuO-NPs can be clearly imaged. In conclusion, the preliminary results positively indicate that CuO-NPs may be imaged by both MRI and ultrasound. The results motivate additional in-vivo studies, in which the clinical utility of fused images derived from both modalities for diagnosis improvement will be studied.
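
    The concentration-dependent T1 shortening is conventionally modeled with a linear relaxivity term; a minimal sketch in which the relaxivity r1 and baseline T1 are hypothetical values, not measurements from this study:

```python
def t1_with_agent(t1_base_s, r1, conc_mM):
    """Longitudinal relaxation in the presence of a contrast agent:
    1/T1 = 1/T1_0 + r1 * C, with r1 in 1/(mM*s) and C in mM."""
    return 1 / (1 / t1_base_s + r1 * conc_mM)

# T1 shortens monotonically with agent concentration (values assumed).
for c in (0.0, 0.5, 1.0, 2.0):
    print(round(t1_with_agent(2.0, 0.8, c), 3))
```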

  7. A method for the frequency control in time-resolved two-dimensional gigahertz surface acoustic wave imaging

    SciTech Connect

    Kaneko, Shogo; Tomoda, Motonobu; Matsuda, Osamu

    2014-01-15

    We describe an extension of time-resolved two-dimensional gigahertz surface acoustic wave imaging based on the optical pump-probe technique with a periodic light source at a fixed repetition frequency. Usually such imaging measurements may generate and detect acoustic waves with frequencies only at or near integer multiples of the repetition frequency. Here we propose a method which utilizes amplitude modulation of the excitation pulse train to modify the generation frequency, free from the mentioned limitation, and allows for the first time the discrimination of the resulting upper- and lower-sideband frequency components in the detection. The validity of the method is demonstrated in a simple measurement on an isotropic glass plate covered by a thin metal film to extract the dispersion curves of the surface acoustic waves.
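
    The effect of the amplitude modulation on the generation frequencies can be sketched directly; the repetition and modulation frequencies below are assumed example values:

```python
def generation_frequencies(f_rep_hz, f_mod_hz, n_max):
    """Amplitude-modulating the pump pulse train at f_mod shifts the
    usual frequency comb n * f_rep into upper and lower sidebands
    n * f_rep +/- f_mod."""
    freqs = []
    for n in range(1, n_max + 1):
        freqs += [n * f_rep_hz - f_mod_hz, n * f_rep_hz + f_mod_hz]
    return freqs

# e.g. an 80 MHz repetition rate with 5 MHz modulation: the first two
# comb lines split into sidebands at 75/85 and 155/165 MHz.
print([f / 1e6 for f in generation_frequencies(80e6, 5e6, 2)])
```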

  8. Extraction of Benthic Cover Information from Video Tows and Photographs Using Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Estomata, M. T. L.; Blanco, A. C.; Nadaoka, K.; Tomoling, E. C. M.

    2012-07-01

    Mapping benthic cover in deep waters comprises a very small proportion of studies in the field. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods, such as neural networks and rapid classification via down-sampling. In this study, accurate bathymetric data obtained using a multi-beam echo sounder (MBES) were intended to complement the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rule sets on area (less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble), as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps with higher overall accuracy, 93.78±0.85%, compared to pixel-based methods with an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
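
    The area rules quoted above lend themselves to a simple rule-set classifier; the standard-deviation (texture) threshold here is an assumed illustrative value, since the abstract does not state one:

```python
def classify_object(area_px, brightness_sd, sd_threshold=12.0):
    """OBIA-style rule set: area <= 700 px suggests fish, 700-10,000 px
    suggests rubble, with a texture (standard deviation) check; other
    objects fall back to the dominant covers. The sd_threshold value
    is illustrative, not taken from the study."""
    if area_px <= 700 and brightness_sd > sd_threshold:
        return "fish"
    if 700 < area_px <= 10000 and brightness_sd > sd_threshold:
        return "rubble"
    return "coral/sand"

print(classify_object(500, 20.0))    # fish
print(classify_object(5000, 20.0))   # rubble
print(classify_object(50000, 5.0))   # coral/sand
```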

  9. Full-Wave Iterative Image Reconstruction in Photoacoustic Tomography With Acoustically Inhomogeneous Media

    PubMed Central

    Huang, Chao; Wang, Kun; Nie, Liming; Wang, Lihong V.; Anastasio, Mark A.

    2014-01-01

    Existing approaches to image reconstruction in photoacoustic computed tomography (PACT) with acoustically heterogeneous media are limited to weakly varying media, are computationally burdensome, and/or cannot effectively mitigate the effects of measurement data incompleteness and noise. In this work, we develop and investigate a discrete imaging model for PACT that is based on the exact photoacoustic (PA) wave equation and facilitates the circumvention of these limitations. A key contribution of the work is the establishment of a procedure to implement a matched forward and backprojection operator pair associated with the discrete imaging model, which permits application of a wide-range of modern image reconstruction algorithms that can mitigate the effects of data incompleteness and noise. The forward and backprojection operators are based on the k-space pseudospectral method for computing numerical solutions to the PA wave equation in the time domain. The developed reconstruction methodology is investigated by use of both computer-simulated and experimental PACT measurement data. PMID:23529196

  10. Acoustically active liposome-nanobubble complexes for enhanced ultrasonic imaging and ultrasound-triggered drug delivery.

    PubMed

    Nguyen, An T; Wrenn, Steven P

    2014-01-01

    Ultrasound is well known as a safe, reliable imaging modality. A historical limitation of ultrasound, however, was its inability to resolve structures at length scales less than nominally 20 µm, which meant that classical ultrasound could not be used in applications such as echocardiography and angiogenesis where one requires the ability to image small blood vessels. The advent of ultrasound contrast agents, or microbubbles, removed this limitation and ushered in a new wave of enhanced ultrasound applications. In recent years, microbubbles have been designed to achieve yet another application, namely ultrasound-triggered drug delivery. Ultrasound contrast agents are thus tantamount to 'theranostic' vehicles, meaning they can do both therapy (drug delivery) and imaging (diagnostics). The use of ultrasound contrast agents as drug delivery vehicles, however, is perhaps less than ideal when compared to traditional drug delivery vehicles (e.g., polymeric microcapsules and liposomes), which have greater drug-carrying capacities. The drawback of the traditional drug delivery vehicles is that they are not naturally acoustically active and cannot be used for imaging. The notion of a theranostic vehicle is sufficiently intriguing that many attempts have been made in recent years to achieve a vehicle that combines the echogenicity of microbubbles with the drug-carrying capacity of liposomes. The attempts can be classified into three categories, namely entrapping, tethering, and nesting. Of these, nesting is the newest, and perhaps the most promising. PMID:24459007

  11. Photoacoustic and ultrasound imaging with a gas-coupled laser acoustic line detector

    NASA Astrophysics Data System (ADS)

    Johnson, Jami L.; van Wijk, Kasper; Caron, James N.; Timmerman, Miriam

    2016-03-01

    Conventional contacting transducers are highly sensitive and readily available for ultrasonic and photoacoustic imaging. On the other hand, optical detection can be advantageous when a small sensor footprint, large bandwidth, and no contact are essential. However, most optical methods utilizing interferometry or Doppler vibrometry rely on the reflection of light from the object. We present a non-contact detection method for photoacoustic and ultrasound imaging, termed Gas-Coupled Laser Acoustic Detection (GCLAD), that does not involve surface reflectivity. GCLAD measures the displacement along a line in the air parallel to the object. Information about point displacements along the line is lost with this method, but when used as an integrating line detector, resolution is increased over techniques that utilize finite point-detectors. In this proceeding, we present a formula for quantifying surface displacement remotely with GCLAD. We validate this result by comparison with a commercial vibrometer. Finally, we present two-dimensional imaging results using GCLAD as a line detector for photoacoustic and laser-ultrasound imaging.

  12. Integrated homeland security system with passive thermal imaging and advanced video analytics

    NASA Astrophysics Data System (ADS)

    Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert

    2007-04-01

    A complete detection, management, and control security system is absolutely essential to preempting criminal and terrorist assaults on key assets and critical infrastructure. According to Tom Ridge, former Secretary of the US Department of Homeland Security, "Voluntary efforts alone are not sufficient to provide the level of assurance Americans deserve and they must take steps to improve security." Further, it is expected that Congress will mandate private sector investment of over $20 billion in infrastructure protection between 2007 and 2015, which is incremental to funds currently being allocated to key sites by the Department of Homeland Security. Nearly 500,000 individual sites have been identified by the US Department of Homeland Security as critical infrastructure sites that would suffer severe and extensive damage if a security breach should occur. In fact, one major breach in any of 7,000 critical infrastructure facilities threatens more than 10,000 people. And one major breach in any of 123 facilities, identified as "most critical" among the 500,000, threatens more than 1,000,000 people. Current visible, night-vision, or near-infrared imaging technology alone has limited foul-weather viewing capability, poor nighttime performance, and limited nighttime range. And many systems today yield excessive false alarms, are managed by fatigued operators, are unable to manage the voluminous data captured, or lack the ability to pinpoint where an intrusion occurred. 
In our 2006 paper, "Critical Infrastructure Security Confidence Through Automated Thermal Imaging", we showed how a highly effective security solution can be developed by integrating what are now available "next-generation technologies", which include: thermal imaging for the highly effective detection of intruders in the dark of night and in challenging weather conditions at the sensor imaging level (we refer to this as the passive thermal sensor level detection building block); automated software detection

  13. Imaging of 3D Ocean Turbulence Microstructure Using Low Frequency Acoustic Waves

    NASA Astrophysics Data System (ADS)

    Minakov, Alexander; Kolyukhin, Dmitriy; Keers, Henk

    2015-04-01

    In the past decade, the technique of imaging ocean structure with low-frequency acoustic signals produced by air-guns, typically employed during conventional multichannel seismic data acquisition, has emerged. The method is based on extracting and stacking the acoustic energy back-scattered by the ocean temperature and salinity micro- and meso-structure (1 - 100 meters). However, a good understanding of the link between the scattered wavefield utilized by seismic oceanography and physical processes in the ocean is still lacking. We describe the theory and numerical implementation of a 3D time-dependent stochastic model of ocean turbulence. The velocity and temperature are simulated as homogeneous Gaussian isotropic random fields with the Kolmogorov-Obukhov energy spectrum in the inertial subrange. A numerical modeling technique is employed to sample realizations of random fields with a given spatial-temporal spectral tensor. The model used is shown to be representative over a wide range of scales. Using this model, we provide a framework to solve the forward and inverse acoustic scattering problem using marine seismic data. Our full-waveform inversion method is based on the ray-Born approximation, which is specifically suitable for the modelling of small velocity perturbations in the ocean. This is illustrated by showing a good match between synthetic seismograms computed using ray-Born and synthetic seismograms produced with a more computationally expensive finite-difference method.
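
    The inertial-subrange spectrum named above has the familiar -5/3 power law; a minimal sketch (the Kolmogorov constant and dissipation rate below are illustrative values):

```python
def kolmogorov_spectrum(k, c_k=1.5, epsilon=1e-6):
    """Kolmogorov-Obukhov energy spectrum in the inertial subrange:
    E(k) = C_k * epsilon^(2/3) * k^(-5/3). C_k and the dissipation
    rate epsilon are assumed illustrative values."""
    return c_k * epsilon ** (2 / 3) * k ** (-5 / 3)

# Doubling the wavenumber reduces E(k) by 2^(5/3), roughly 3.17x.
ratio = kolmogorov_spectrum(1.0) / kolmogorov_spectrum(2.0)
print(round(ratio, 2))  # 3.17
```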

  14. Measurement of microbubble-induced acoustic microstreaming using microparticle image velocimetry

    NASA Astrophysics Data System (ADS)

    Tho, Paul; Zhu, Yonggang; Manasseh, Richard; Ooi, Andrew

    2005-02-01

    Micro particle image velocimetry (micro-PIV) measurements of the velocity fields around oscillating gas bubbles in microfluidic geometries were undertaken. Two sets of experiments were performed. The first measured the acoustic microstreaming around a gas bubble with a radius of 195 μm attached to a wall in a chamber of 30 mm × 30 mm × 0.66 mm. Under acoustic excitation, vigorous streaming in the form of a circulation around the bubble was observed. The streaming flow was highest near the surface of the bubble, with velocities around 1 mm/s measured. The velocity magnitude decreased rapidly with increasing distance from the bubble. The velocity field determined by micro-PIV matched the streaklines of the fluorescent particles very well. The second set of experiments measured the streaming at the interface between a trapped air bubble and water inside a microchannel of cross section 100 μm × 90 μm. The streaming flow was limited to within a short distance from the interface and was observed as a looping flow, moving towards the interface from the top and being circulated back from the bottom of the channel. The characteristic streaming velocity was on the order of 100 μm/s.

  15. Evaluating the intensity of the acoustic radiation force impulse (ARFI) in intravascular ultrasound (IVUS) imaging: Preliminary in vitro results.

    PubMed

    Shih, Cho-Chiang; Lai, Ting-Yu; Huang, Chih-Chung

    2016-08-01

    The ability to measure the elastic properties of plaques and vessels is significant in clinical diagnosis, particularly for detecting a vulnerable plaque. A novel concept of combining intravascular ultrasound (IVUS) imaging and acoustic radiation force impulse (ARFI) imaging has recently been proposed. This method has potential in elastography for distinguishing between the stiffness of plaques and arterial vessel walls. However, the intensity of the acoustic radiation force requires calibration as a standard for the further development of an ARFI-IVUS imaging device that could be used in clinical applications. In this study, a dual-frequency transducer operating at 11 MHz and 48 MHz was used to measure the association between biological tissue displacement and the applied acoustic radiation force. The output intensity of the acoustic radiation force generated by the pushing element ranged from 1.8 to 57.9 mW/cm², as measured using a calibrated hydrophone. The results reveal that all of the acoustic intensities produced by the transducer in the experiments were within the limits specified by FDA regulations and could still displace the biological tissues. Furthermore, blood clots with different hematocrits, which have elastic properties similar to the lipid pool of plaques, with stiffness ranging from 0.5 to 1.9 kPa, could be displaced from 1 to 4 μm, whereas porcine arteries with stiffness ranging from 120 to 291 kPa were displaced from 0.4 to 1.3 μm when an acoustic intensity of 57.9 mW/cm² was used. The in vitro ARFI images of the artery with a blood clot and artificial arteriosclerosis showed a clear distinction between the stiffness distributions of the vessel wall. All the results reveal that ARFI-IVUS imaging has the potential to distinguish the elastic properties of plaques and vessels. Moreover, the acoustic intensity used in ARFI imaging has been experimentally quantified. Although the size of this two-element transducer is unsuitable for IVUS imaging, the
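The link between the calibrated intensities above and the tissue-pushing force is usually taken, under a plane-wave absorption assumption, to be the radiation body force F = 2αI/c. A small helper illustrating the unit conversions (the parameter values in the test are illustrative, not taken from the cited study):

```python
def radiation_body_force(intensity_w_m2, alpha_np_m, c_m_s=1540.0):
    """Acoustic radiation body force per unit volume in an absorbing
    medium, F = 2*alpha*I/c (plane-wave assumption).
    alpha_np_m : attenuation coefficient in Np/m
    intensity_w_m2 : time-averaged intensity in W/m^2
    c_m_s : sound speed in m/s (default: soft tissue)
    Returns force density in N/m^3."""
    return 2.0 * alpha_np_m * intensity_w_m2 / c_m_s
```

For example, the abstract's 57.9 mW/cm² corresponds to 579 W/m² before applying this relation; the attenuation coefficient depends on the tissue and the 48 MHz pushing frequency.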

  16. Comparison of ultrasound B-mode, strain imaging, acoustic radiation force impulse displacement and shear wave velocity imaging using real time clinical breast images

    NASA Astrophysics Data System (ADS)

    Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam

    2016-04-01

    It has been observed that many pathological processes increase the elastic modulus of soft tissue relative to normal tissue. To image tissue stiffness using ultrasound, a mechanical compression is applied to the tissue of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression (strain) imaging, which relies on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which relies on the force generated by focused ultrasound. When ultrasound is focused in tissue, a shear wave is generated in the lateral direction, and the shear wave velocity increases with the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging for clinical cancer diagnostics using real-time patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement imaging and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging and strain imaging, the observed image contrast and contrast-to-noise ratio were calculated for benign and malignant cancers, and the observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, an unpaired Student's t-test was conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to both strain imaging and B-mode for representing benign lesions.
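The observed image contrast and contrast-to-noise ratio used in the comparison above are typically computed from a lesion region of interest and a background region. A sketch using standard definitions (the paper's exact normalization may differ):

```python
import numpy as np

def contrast_and_cnr(lesion, background):
    """Image contrast and contrast-to-noise ratio between a lesion ROI
    and a background ROI.
    Contrast = |mu_l - mu_b| / mu_b
    CNR      = |mu_l - mu_b| / sqrt(var_l + var_b)"""
    mu_l, mu_b = lesion.mean(), background.mean()
    contrast = abs(mu_l - mu_b) / mu_b
    cnr = abs(mu_l - mu_b) / np.sqrt(lesion.var() + background.var())
    return contrast, cnr
```

The "true modulus contrast" from shear wave imaging follows from μ = ρc², so a velocity ratio of r between lesion and background corresponds to a modulus ratio of r².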

  17. On the efficiency of image completion methods for intra prediction in video coding with large block structures

    NASA Astrophysics Data System (ADS)

    Doshkov, Dimitar; Jottrand, Oscar; Wiegand, Thomas; Ndjiki-Nya, Patrick

    2013-02-01

    Intra prediction is a fundamental tool in video coding with hybrid block-based architecture. Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution videos is the incorporation of larger block structures. Thus, in this work we investigate the performance of novel intra prediction modes based on different image completion techniques in a new video coding scheme with large block structures. Image completion methods exploit the fact that high-frequency image regions incur high coding costs when classical H.264/AVC prediction modes are used. This problem is tackled by investigating the incorporation of several intra predictors based on the Laplace partial differential equation (PDE), Least-Squares (LS) based linear prediction, and the autoregressive model. A major aspect of this article is a quantitative evaluation of the coding performance (i.e., coding efficiency). Experimental results show significant improvements in compression (up to 7.41%) by integrating the LS-based linear intra prediction.
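The LS-based linear intra prediction mentioned above fits predictor weights on previously decoded (causal) pixels and then extrapolates into the block being coded. A simplified 3-tap sketch of the weight-fitting step (tap choice and names are illustrative, not the codec's actual configuration):

```python
import numpy as np

def fit_ls_predictor(train):
    """Fit weights w so that x[i,j] ~ w0*x[i,j-1] + w1*x[i-1,j]
    + w2*x[i-1,j-1] over a causal training region, in the spirit of
    LS-based linear intra prediction (simplified 3-tap version)."""
    X = np.column_stack([
        train[1:, :-1].ravel(),   # left neighbor
        train[:-1, 1:].ravel(),   # above neighbor
        train[:-1, :-1].ravel(),  # above-left neighbor
    ])
    y = train[1:, 1:].ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

In a codec, the training region would be the already-reconstructed pixels bordering the block, so the decoder can derive the same weights without side information.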

  18. Development of acoustic observation method for seafloor hydrothermal flows

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Tamura, H.; Asada, A.; Kinoshita, M.; Tamaki, K.

    2012-12-01

    In October 2009, we conducted seafloor reconnaissance with the manned deep-sea submersible Shinkai 6500 in the Central Indian Ridge at 18-20°S, where hydrothermal plume signatures had previously been detected. The acoustic video camera DIDSON was mounted on top of Shinkai 6500 in order to capture acoustic video images of hydrothermal plumes. Acoustic video images of the hydrothermal plumes were captured on three of the seven dives. We could identify shadings inside the acoustic video images of the plumes. The silhouettes of the hydrothermal plumes varied from second to second, as did the shadings inside them; these variations corresponded to the internal structures and flows of the plumes. DIDSON (Dual-Frequency IDentification SONar) is an acoustic lens-based sonar. Its resolution and refresh rate are sufficiently high that it can substitute for an optical system in turbid or dark water where optical systems fail. Recognizing this superior performance, the Institute of Industrial Science, University of Tokyo, has been developing a new DIDSON-based method for observing hydrothermal discharge from seafloor vents. We expected DIDSON to reveal the whole image of a hydrothermal plume as well as the detail inside it. The proposed method for observing and measuring hydrothermal flow uses a sheet-like acoustic beam: scanning with a concentrated acoustic beam gives the distances to the edges of the hydrothermal flows, so the shapes of the flows can be identified even in low- or zero-visibility conditions. A tank experiment was conducted. The purposes of this experiment were to test the proposed method for delineating underwater hydrothermal flows and to understand the relationships among the acoustic video image, flow rate and water temperature. Water was heated in a hot tub and pumped into the water tank through a silicone tube, and we observed the water flows discharging from the tip of the tube with DIDSON. The flow rate was controlled and the temperatures of the
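The sheet-beam idea of recovering flow edges from distances along each beam can be toy-modeled as a first-echo detector applied per beam to a sonar intensity frame (purely illustrative; not the DIDSON processing chain, and all names are assumptions):

```python
import numpy as np

def flow_edges(frame, threshold):
    """For each sonar beam (row) in an acoustic intensity frame,
    return the range index of the first echo exceeding `threshold`,
    i.e. the near edge of the flow; -1 where no sample exceeds it."""
    above = frame >= threshold
    first = above.argmax(axis=1)      # index of first True in each row
    first[~above.any(axis=1)] = -1    # beams with no echo at all
    return first
```

Collecting these per-beam edge ranges over a fan of beams outlines the plume silhouette even when optical visibility is zero.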

  19. Failure prediction in ceramic composites using acoustic emission and digital image correlation

    NASA Astrophysics Data System (ADS)

    Whitlow, Travis; Jones, Eric; Przybyla, Craig

    2016-02-01

    The objective of the work performed here was to develop a methodology for linking in-situ detection of localized matrix cracking to the final failure location in continuous fiber-reinforced CMCs. First, the initiation and growth of matrix cracking were measured and triangulated via acoustic emission (AE) detection. High-amplitude events at relatively low static loads can be associated with the initiation of large matrix cracks. When high-amplitude events localize, a measurable effect on the strain field can be observed. Full-field surface strain measurements were obtained using digital image correlation (DIC). An analysis combining the AE and DIC data was able to predict the final failure location.
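The AE triangulation step described above locates an event from differences in arrival times at several sensors. One standard approach (not necessarily the authors' algorithm; names and values are illustrative) is a grid search over candidate source positions:

```python
import numpy as np

def locate_ae_source(sensors, arrivals, wave_speed, grid):
    """Locate an acoustic-emission source on a 2-D surface by grid
    search: pick the candidate point whose predicted arrival-time
    differences (relative to sensor 0) best match the measured ones.
    sensors : (m, 2) sensor coordinates
    arrivals : (m,) measured arrival times (any common time origin)
    wave_speed : assumed wave propagation speed
    grid : (n, 2) candidate source positions"""
    tdoa = arrivals - arrivals[0]          # measured time differences
    best, best_err = None, np.inf
    for p in grid:
        t = np.linalg.norm(sensors - p, axis=1) / wave_speed
        err = np.sum((t - t[0] - tdoa) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best
```

Using time differences rather than absolute times removes the unknown emission instant; real AE systems refine this with least-squares solvers and anisotropic wave-speed corrections.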

  20. Digital image processing of sectorial oscillations for acoustically levitated drops and surface tension measurement

    NASA Astrophysics Data System (ADS)

    Shen, Changle; Xie, Wenjun; Wei, Bingbo

    2010-12-01

    A type of non-axisymmetric oscillation of acoustically levitated drops is excited by modulating the ultrasound field at suitable frequencies. These oscillations are recorded by a high-speed camera and analyzed with a digital image processing method. They are demonstrated to be third-mode sectorial oscillations, and their frequencies are found to decrease as the equatorial radius of the drops increases, which can be described by a modified Rayleigh equation. These oscillations decay exponentially after the cessation of ultrasound field modulation, and the decay rates agree reasonably with Lamb's prediction. The rotation rate of the drops accompanying the shape oscillations is found to be less than 1.5 revolutions per second. The surface tension of aqueous ethanol has been measured according to the modified Rayleigh equation. The results agree well with previous reports, which demonstrates the possible application of this kind of sectorial oscillation to the noncontact measurement of liquid surface tension.