Science.gov

Sample records for acoustic video images

  1. Acoustic Neuroma Educational Video

    MedlinePlus

    ... Watch and Wait Radiation Microsurgery Acoustic Neuroma Decision Tree Questions for Your Physician Questions to Ask Yourself ...

  2. Inferences of Particle Size and Composition From Video-like Images Based on Acoustic Data: Grotto Plume, Main Endeavor Field

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Rona, P. A.; Santilli, K.; Dastur, J.; Silver, D.

    2004-12-01

    Optical and acoustic scattering from particles in a seafloor hydrothermal plume can be related if the particle properties and scattering mechanisms are known. We assume Rayleigh backscattering of sound and Mie forward scattering of light. We then use the particle concentrations implicit in the observed acoustic backscatter intensity to recreate the optical image a camera would see given a particular lighting level. The motivation for this study is to discover what information on particle size and composition in the buoyant plume can be inferred from a comparison of the calculated optical images (based on acoustic data) with actual video images from the acoustic acquisition cruise and the IMAX film "Volcanoes of the Deep Sea" (Stephen Low Productions, Inc.). Because the geologists, biologists and oceanographers involved in the study of seafloor hydrothermal plumes all "see" plumes in different ways, an additional motivation is to create more realistic plume images from the acoustic data. By using visualization techniques, with realistic lighting models, we can convert the plume image from mechanical waves (sound) to electromagnetic waves (light). The resulting image depends on assumptions about the particle size distribution and composition. Conversion of the volume scattering coefficients from Rayleigh to Mie scattering is accomplished by an extinction scale factor that depends on the wavelengths of light and sound and on the average particle size. We also make an adjustment to the scattered light based on the particles' reflectivity (albedo) and color. We present a series of images of acoustic data for Grotto Plume, Main Endeavour Field (within the Endeavour ISS Site) using both realistic lighting models and traditional visualization techniques to investigate the dependence of the images on assumptions about particle composition and size. Sensitivity analysis suggests that the visibility of the buoyant plume increases as the intensity of supplied light increases.
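    The Rayleigh-to-Mie conversion described above (an extinction scale factor plus an albedo adjustment, with visibility driven by the supplied-light intensity) might be sketched as follows. This is a minimal Python/NumPy illustration, not the authors' actual model; the function name, the soft-saturation step, and all parameter values are assumptions made for the example.

    ```python
    import numpy as np

    def acoustic_to_optical(s_v, scale, albedo, supplied_light):
        """Convert acoustic volume scattering coefficients s_v into a crude
        simulated optical image.

        s_v:            2-D array of acoustic volume scattering coefficients
        scale:          extinction scale factor (depends on the light/sound
                        wavelengths and the assumed mean particle size)
        albedo:         particle reflectivity, 0..1
        supplied_light: intensity of the simulated lighting
        """
        optical = s_v * scale                   # Rayleigh -> Mie conversion
        brightness = supplied_light * albedo * optical
        return brightness / (1.0 + brightness)  # soft saturation into [0, 1)

    # Brighter lighting makes the simulated plume more visible, consistent
    # with the sensitivity analysis mentioned in the abstract:
    s_v = np.array([[0.0, 0.1], [0.2, 0.4]])
    dim = acoustic_to_optical(s_v, scale=2.0, albedo=0.5, supplied_light=1.0)
    bright = acoustic_to_optical(s_v, scale=2.0, albedo=0.5, supplied_light=10.0)
    ```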

  3. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  4. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
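    The centroid step described above can be sketched as an intensity-weighted centroid computed from a single video frame. The function name and NumPy implementation are illustrative assumptions, not the patent's actual processing-board algorithm.

    ```python
    import numpy as np

    def beam_centroid(frame):
        """Intensity-weighted centroid (x, y) of a beam image.

        frame: 2-D array of pixel intensities from one video frame.
        """
        frame = np.asarray(frame, dtype=float)
        total = frame.sum()
        if total == 0:
            raise ValueError("empty frame: no beam energy detected")
        ys, xs = np.indices(frame.shape)
        cx = (xs * frame).sum() / total   # column (horizontal) coordinate
        cy = (ys * frame).sum() / total   # row (vertical) coordinate
        return cx, cy

    # Synthetic 5x5 frame with all beam energy at pixel (row=2, col=3):
    frame = np.zeros((5, 5))
    frame[2, 3] = 10.0
    cx, cy = beam_centroid(frame)
    ```

    The centroid coordinates would then feed the motor controllers; averaging centroids over many noisy frames is one way Bernoulli-trial-style repetition can yield sub-pixel resolution.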

  5. Ultrasound Imaging System Video

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimensional image enlargement of the heart and other organs, muscles, and blood vessels. It is capable of high resolution imaging in a wide range of applications, both research and diagnostic, such as echocardiography (ultrasound of the heart), and abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.

  6. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Ridge 18-20deg.S, where hydrothermal plume signatures had previously been detected. DIDSON was mounted on top of the Shinkai6500 in order to obtain acoustic video images of hydrothermal plumes. Seven dives of the Shinkai6500 were conducted during this cruise, and acoustic video images of hydrothermal plumes were captured on three of them. These are among the very few acoustic video images of hydrothermal plumes obtained to date. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

  7. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains the image to be stabilized in the video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
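    The per-block translation estimate described above can be sketched as a block-matching search. The sum-of-absolute-differences criterion, names, and search-window size below are assumed stand-ins for the patent's unspecified matching method.

    ```python
    import numpy as np

    def block_translation(key_block, new_field, top, left, search=4):
        """Estimate the (dy, dx) translation of a pixel block between video
        fields by minimizing the sum of absolute differences (SAD) over a
        small search window around the block's original position."""
        h, w = key_block.shape
        best, best_off = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + h > new_field.shape[0] or x + w > new_field.shape[1]:
                    continue  # candidate block would fall outside the field
                sad = np.abs(new_field[y:y+h, x:x+w] - key_block).sum()
                if best is None or sad < best:
                    best, best_off = sad, (dy, dx)
        return best_off

    # Shift a random field by (2, 3) pixels and recover the offset:
    rng = np.random.default_rng(0)
    field = rng.random((32, 32))
    new_field = np.roll(field, (2, 3), axis=(0, 1))
    key_block = field[8:16, 8:16]
    dy, dx = block_translation(key_block, new_field, top=8, left=8)
    ```

    Repeating this over the nested pixel blocks gives a set of local translations, from which the overall magnification, rotation, and translation can then be fitted.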

  8. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains the image to be stabilized in the video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  9. Acoustic Imaging in Helioseismology

    NASA Astrophysics Data System (ADS)

    Chou, Dean-Yi; Chang, Hsiang-Kuang; Sun, Ming-Tsung; LaBonte, Barry; Chen, Huei-Ru; Yeh, Sheng-Jen; Team, The TON

    1999-04-01

    The time-variant acoustic signal at a point in the solar interior can be constructed from observations at the surface, based on the knowledge of how acoustic waves travel in the Sun: the time-distance relation of the p-modes. The basic principle and properties of this imaging technique are discussed in detail. The helioseismic data used in this study were taken with the Taiwan Oscillation Network (TON). The time series of observed acoustic signals on the solar surface is treated as a phased array. The time-distance relation provides the phase information among the phased array elements. The signal at any location at any time can be reconstructed by summing the observed signal at array elements in phase and with a proper normalization. The time series of the constructed acoustic signal contains information on frequency, phase, and intensity. We use the constructed intensity to obtain three-dimensional acoustic absorption images. The features in the absorption images correlate with the magnetic field in the active region. The vertical extension of absorption features in the active region is smaller in images constructed with shorter wavelengths. This indicates that the vertical resolution of the three-dimensional images depends on the range of modes used in constructing the signal. The actual depths of the absorption features in the active region may be smaller than those shown in the three-dimensional images.
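    The phased-array reconstruction described above, summing the observed surface signals in phase using the time-distance delays and normalizing by the number of array elements, might be sketched as follows. The function names, sampling scheme, and synthetic data are assumptions made for illustration.

    ```python
    import numpy as np

    def reconstruct_signal(surface_signals, delays, t, dt):
        """Reconstruct the acoustic signal at a target point and time t by
        summing surface observations shifted by their time-distance delays.

        surface_signals: (n_points, n_samples) array of observed signals
        delays:          per-element travel time (s) from target to surface
        dt:              sampling interval (s)
        """
        n = surface_signals.shape[0]
        total = 0.0
        for i in range(n):
            idx = int(round((t + delays[i]) / dt))
            if 0 <= idx < surface_signals.shape[1]:
                total += surface_signals[i, idx]
        return total / n   # normalize by the number of array elements

    # Synthetic check: a pulse leaving the target at t=0 arrives at element i
    # after delays[i]; in-phase summation recovers a peak at t=0.
    dt = 0.1
    times = np.arange(0, 10, dt)
    delays = np.array([1.0, 2.0, 3.0])
    signals = np.stack([np.where(np.isclose(times, d), 1.0, 0.0) for d in delays])
    peak = reconstruct_signal(signals, delays, t=0.0, dt=dt)
    off = reconstruct_signal(signals, delays, t=1.0, dt=dt)
    ```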

  10. Acoustic imaging system

    DOEpatents

    Smith, Richard W.

    1979-01-01

    An acoustic imaging system for displaying an object viewed by a moving array of transducers as the array is pivoted about a fixed point within a given plane. A plurality of transducers are fixedly positioned and equally spaced within a laterally extending array and operatively directed to transmit and receive acoustic signals along substantially parallel transmission paths. The transducers are sequentially activated along the array to transmit and receive acoustic signals according to a preestablished sequence. Means are provided for generating output voltages for each reception of an acoustic signal, corresponding to the coordinate position of the object viewed as the array is pivoted. Receptions from each of the transducers are presented on the same display at coordinates corresponding to the actual position of the object viewed to form a plane view of the object scanned.

  11. Non-intrusive telemetry applications in the oilsands: from visible light and x-ray video to acoustic imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Shaw, John M.

    2013-06-01

    While the production, transport, and refining of oils from the oilsands of Alberta and comparable resources elsewhere are performed at industrial scales, numerous technical and technological challenges and opportunities persist due to the ill-defined nature of the resource. For example, bitumen and heavy oil comprise multiple bulk phases and self-organizing constituents at the microscale (liquid crystals) and the nanoscale, and no quantitative measures are available at the molecular level. Non-intrusive telemetry is providing promising paths toward solutions, be they enabling technologies targeting process design, development or optimization, or more prosaic process-control or process-monitoring applications. Operational examples include automated detection of large objects and poor-quality ore during mining, and monitoring the thickness and location of oil-water interfacial zones within separation vessels. These applications involve real-time video image processing. X-ray transmission video imaging is used to enumerate the organic phases present within a vessel, and to determine individual phase volumes, densities, and elemental compositions. This is an enabling technology that provides phase equilibrium and phase composition data for production and refining process development, and for debunking fluid-property myths. A high-resolution two-dimensional acoustic mapping technique, now at the proof-of-concept stage, is expected to provide simultaneous fluid flow and fluid composition data within porous inorganic media. Again, this is an enabling technology targeting visualization of diverse oil production process fundamentals at the pore scale. Far-infrared spectroscopy, coupled with detailed quantum mechanical calculations, may provide the characteristic molecular motifs and intermolecular association data required for fluid characterization and process modeling. X-ray scattering (SAXS/WAXS/USAXS) provides characteristic supramolecular structure information that impacts fluid rheology and process

  12. Synergy of seismic, acoustic, and video signals in blast analysis

    SciTech Connect

    Anderson, D.P.; Stump, B.W.; Weigand, J.

    1997-09-01

    The range of mining applications from hard rock quarrying to coal exposure to mineral recovery leads to a great variety of blasting practices. A common characteristic of many of the sources is that they are detonated at or near the earth's surface and thus can be recorded by camera or video. Although the primary interest is in the seismic waveforms that these blasts generate, the visual observations of the blasts provide important constraints that can be applied to the physical interpretation of the seismic source function. In particular, high speed images can provide information on detonation times of individual charges, the timing and amount of mass movement during the blasting process and, in some instances, evidence of wave propagation away from the source. All of these characteristics can be valuable in interpreting the equivalent seismic source function for a set of mine explosions and quantifying the relative importance of the different processes. This paper documents work done at the Los Alamos National Laboratory and Southern Methodist University to take standard Hi-8 video of mine blasts, recover digital images from them, and combine them with ground motion records for interpretation. The steps in the data acquisition, processing, display, and interpretation are outlined. The authors conclude that the combination of video with seismic and acoustic signals can be a powerful diagnostic tool for the study of blasting techniques and seismology. A low cost system for generating similar diagnostics using a consumer-grade video camera and direct-to-disk video hardware is proposed. One application is verification of the Comprehensive Test Ban Treaty.

  13. Video image cliff notes

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles

    2012-06-01

    Can a compressive sampling expert system help build a summary of a video as a composited picture? The digital Internet age has given everyone an unprecedented degree of informational freedom, but with it comes an accumulation of content too vast for analysts to sort through, making automatic video summarization a necessary digital-library capability. While we wish to preserve the democratic spirit of the smartphone Internet for all, we provide an automated, unbiased tool, the compressive sampling expert system (CSpES), to summarize video content at the user's own discretion.

  14. Normalization method for video images

    SciTech Connect

    Donohoe, G.W.; Hush, D.R.

    1992-12-31

    The present invention relates to a method and apparatus for automatically and adaptively normalizing analog signals representative of video images in object detection systems. Such normalization maximizes the average information content of the video images and, thereby, provides optimal digitized images for object detection and identification. The present invention manipulates two system control signals -- gain control signal and offset control signal -- to convert an analog image signal into a transformed analog image signal, such that the corresponding digitized image contains the maximum amount of information achievable with a conventional object detection system. In some embodiments of the present invention, information content is measured using parameters selected from image entropy, image mean, and image variance.
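    A minimal sketch of the idea above: choose a gain and offset that spread the analog signal across the digitizer's full range, and measure information content with histogram entropy (one of the parameters the abstract names). The linear min-max stretch here is a simple proxy, not the patent's actual adaptive control loop; all names are illustrative.

    ```python
    import numpy as np

    def histogram_entropy(img, bins=256):
        """Shannon entropy (bits) of the image's gray-level histogram."""
        hist, _ = np.histogram(img, bins=bins, range=(0, 255))
        p = hist[hist > 0] / hist.sum()
        return -(p * np.log2(p)).sum()

    def normalize(img):
        """Apply gain and offset so the signal spans the full 0-255
        digitizer range, a simple proxy for maximizing information."""
        img = np.asarray(img, dtype=float)
        lo, hi = img.min(), img.max()
        gain = 255.0 / (hi - lo) if hi > lo else 1.0
        offset = -lo * gain
        return np.clip(img * gain + offset, 0, 255)

    # A low-contrast image occupying only levels 100-120 stretches to 0-255,
    # increasing the entropy of the digitized result:
    raw = np.linspace(100, 120, 1000).reshape(25, 40)
    stretched = normalize(raw)
    ```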

  15. Imaging of Acoustic Waves in Sand

    SciTech Connect

    Deason, Vance Albert; Telschow, Kenneth Louis; Watson, Scott Marshall

    2003-08-01

    There is considerable interest in detecting objects such as landmines shallowly buried in loose earth or sand. Various techniques involving microwave, acoustic, thermal and magnetic sensors have been used to detect such objects. Acoustic and microwave sensors have shown promise, especially if used together. In most cases, the sensor package is scanned over an area to eventually build up an image or map of anomalies. We are proposing an alternate, acoustic method that directly provides an image of acoustic waves in sand or soil, and their interaction with buried objects. The INEEL Laser Ultrasonic Camera utilizes dynamic holography within photorefractive recording materials. This permits one to image and demodulate acoustic waves on surfaces in real time, without scanning. A video image is produced where intensity is directly and linearly proportional to surface motion. Both specular and diffusely reflecting surfaces can be accommodated, and surface motion as small as 0.1 nm can be quantitatively detected. This system was used to directly image acoustic surface waves in sand as well as in solid objects. Waves at frequencies of 16 kHz were generated using modified acoustic speakers. These waves were directed through sand toward partially buried objects. The sand container was not on a vibration isolation table, but sat on the lab floor. Interaction of wavefronts with buried objects showed reflection, diffraction and interference effects that could provide clues to the location and characteristics of buried objects. Although results are preliminary, success in this effort suggests that this method could be applied to detection of buried landmines or other near-surface items such as pipes and tanks.

  16. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  17. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. The camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  18. Acoustic subwavelength imaging of subsurface objects with acoustic resonant metalens

    SciTech Connect

    Cheng, Ying; Liu, XiaoJun; Zhou, Chen; Wei, Qi; Wu, DaJian

    2013-11-25

    Early research into acoustic metamaterials has shown the possibility of achieving subwavelength near-field acoustic imaging. However, a major restriction of acoustic metamaterials is that the imaging objects must be placed in close vicinity of the devices. Here, we present an approach for acoustic imaging of subsurface objects far below the diffraction limit. An acoustic metalens made of holey-structured metamaterials is used to magnify evanescent waves, which can rebuild an image at the central plane. Without changing the physical structure of the metalens, our proposed approach can image objects located at certain distances from the input surface, which provides subsurface signatures of the objects with subwavelength spatial resolution.

  19. Radiation effects on video imagers

    SciTech Connect

    Yates, G.J.; Bujnosek, J.J.; Jaramillo, S.A.; Walton, R.B.; Martinez, T.M.; Black, J.P.

    1985-01-01

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analyzing stored photocharge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented.

  20. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.

  1. Acoustical Imaging with Negative Refraction

    NASA Astrophysics Data System (ADS)

    Gan, W. S.

    It is well known that the resolution of acoustical images is limited by diffraction to λ/2, where λ is the sound wavelength. Negative refraction, proposed by Veselago in 1968, shows the possibility of defeating the diffraction limit. His work was for electromagnetic waves. Recently it has been shown experimentally that negative refraction can be achieved for both electromagnetic waves and sound waves by using photonic crystals and phononic crystals, respectively. John Pendry proposed the concept of a `perfect lens' using negative refraction for electromagnetic waves. In this paper, we propose a `perfect lens' for sound waves, and an acoustical imaging system incorporating the `perfect lens' is also outlined.

  2. Video imaging systems: A survey

    NASA Astrophysics Data System (ADS)

    Kefauver, H. Lee

    1989-07-01

    Recent technological advances in the field of electronics have made video imaging a viable substitute for the traditional Polaroid(trademark) picture used to create photo ID credentials. New families of hardware and software products, when integrated into a system, provide an exciting and powerful tool which can be used simply to make badges or to enhance an access control system. The reader is made aware of who is currently in this business and can compare their capabilities.

  3. Acoustic image-processing software

    NASA Astrophysics Data System (ADS)

    Several algorithms that display, enhance, and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. In addition to the “standard” formats, PORTAL has a module “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation, and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on Postscript laser printers and lower-quality images on non-Postscript printers with HP Laserjet emulation. Images suitable only for index sheets are also possible on dot matrix printers.
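    PORTAL's histogram manipulation is not specified in detail; one common enhancement of that kind is histogram equalization, sketched here as an illustrative stand-in. The function name and sample data are assumptions, not PORTAL's actual code.

    ```python
    import numpy as np

    def equalize(img, levels=256):
        """Histogram equalization: remap gray levels so the cumulative
        distribution of the output is approximately uniform."""
        img = np.asarray(img)
        hist = np.bincount(img.ravel(), minlength=levels)
        cdf = hist.cumsum()
        cdf = cdf / cdf[-1]                       # normalize CDF to [0, 1]
        lut = np.round(cdf * (levels - 1)).astype(img.dtype)
        return lut[img]                           # apply lookup table

    # A dark image whose levels cluster near 0 spreads across the full range:
    dark = np.array([[0, 10, 10], [20, 20, 30]], dtype=np.uint8)
    out = equalize(dark)
    ```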

  4. Image-guided acoustic therapy.

    PubMed

    Vaezy, S; Andrew, M; Kaczkowski, P; Crum, L

    2001-01-01

    The potential role of therapeutic ultrasound in medicine is promising. Currently, medical devices are being developed that utilize high-intensity focused ultrasound as a noninvasive method to treat tumors and to stop bleeding (hemostasis). The primary advantage of ultrasound that lends the technique so readily to use in noninvasive therapy is its ability to penetrate deep into the body and deliver to a specific site thermal or mechanical energy with submillimeter accuracy. Realizing the full potential of acoustic therapy, however, requires precise targeting and monitoring. Fortunately, several imaging modalities can be utilized for this purpose, thus leading to the concept of image-guided acoustic therapy. This article presents a review of high-intensity focused ultrasound therapy, including its mechanisms of action, the imaging modalities used for guidance and monitoring, some current applications, and the requirements and technology associated with this exciting and promising field.

  5. Acoustic Waves in Medical Imaging and Diagnostics

    PubMed Central

    Sarvazyan, Armen P.; Urban, Matthew W.; Greenleaf, James F.

    2013-01-01

    Up until about two decades ago, acoustic imaging and ultrasound imaging were synonymous. The term “ultrasonography,” or its abbreviated version “sonography,” meant an imaging modality based on the use of ultrasonic compressional bulk waves. Since the 1990s, numerous acoustic imaging modalities have emerged based on the use of a different mode of acoustic wave: shear waves. It was demonstrated that imaging with these waves can provide very useful and very different information about the biological tissue being examined. We will discuss the physical basis for the differences between these two basic modes of acoustic waves used in medical imaging and analyze the advantages associated with shear acoustic imaging. A comprehensive analysis of the range of acoustic wavelengths, velocities, and frequencies that have been used in different imaging applications will be presented. We will discuss the potential for future shear wave imaging applications. PMID:23643056

  6. Acoustic waves in medical imaging and diagnostics.

    PubMed

    Sarvazyan, Armen P; Urban, Matthew W; Greenleaf, James F

    2013-07-01

    Up until about two decades ago acoustic imaging and ultrasound imaging were synonymous. The term ultrasonography, or its abbreviated version sonography, meant an imaging modality based on the use of ultrasonic compressional bulk waves. Beginning in the 1990s, there started to emerge numerous acoustic imaging modalities based on the use of a different mode of acoustic wave: shear waves. Imaging with these waves was shown to provide very useful and very different information about the biological tissue being examined. We discuss the physical basis for the differences between these two basic modes of acoustic waves used in medical imaging and analyze the advantages associated with shear acoustic imaging. A comprehensive analysis of the range of acoustic wavelengths, velocities and frequencies that have been used in different imaging applications is presented. We discuss the potential for future shear wave imaging applications.

  7. Colour thresholding in video imaging.

    PubMed Central

    Fermin, C D; Degraw, S

    1995-01-01

    The basic aspects of video imaging are reviewed as they relate to measurements of histological and anatomical features, with particular emphasis on the advantages and disadvantages of colour and black-and-white imaging modes. In black-and-white imaging, calculations are based on the manipulation of picture elements (pixels) that contain 0-255 levels of information. Black is represented by the absence of light (0) and white by 255 grades of light. In colour imaging, the pixels contain variation of hues for the primary (red, green and blue) and secondary (magenta, yellow, cyan, pink) colours. Manipulation of pixels with colour information is more computationally intensive than that for black-and-white pixels, because there are over 16 million possible combinations of colour in a system with a 24-bit resolution. The narrow 128 possible grades of separation in black and white often makes distinction between pixels with overlapping intensities difficult. Such difficulty is greatly reduced by colour thresholding of systems that base the representation of colour on a combination of hue-saturation-intensity (HSI) format. PMID:7559121
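    The hue-saturation-intensity thresholding described above can be sketched with Python's standard colorsys module, using HSV as a stand-in for the HSI format the abstract names. The function name, threshold ranges, and pixel values are illustrative assumptions.

    ```python
    import colorsys

    def hsi_threshold(pixels, hue_range, min_sat, min_val):
        """Select pixels whose hue falls within hue_range (fractions of the
        colour wheel, 0..1) and whose saturation and value meet minimums."""
        lo, hi = hue_range
        mask = []
        for r, g, b in pixels:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask.append(lo <= h <= hi and s >= min_sat and v >= min_val)
        return mask

    # Pick out reddish pixels (hue near 0) from a tiny pixel list; the gray
    # pixel is rejected by the saturation floor even though it is bright:
    pixels = [(200, 30, 30), (30, 200, 30), (30, 30, 200), (90, 90, 90)]
    mask = hsi_threshold(pixels, hue_range=(0.0, 0.08), min_sat=0.3, min_val=0.3)
    ```

    Thresholding on hue with separate saturation/intensity floors is what lets overlapping intensities be separated by colour, as the abstract argues.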

  8. Acoustic imaging microscope

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2006-10-17

    An imaging system includes: an object wavefront source and an optical microscope objective all positioned to direct an object wavefront onto an area of a vibrating subject surface encompassed by a field of view of the microscope objective, and to direct a modulated object wavefront reflected from the encompassed surface area through a photorefractive material; and a reference wavefront source and at least one phase modulator all positioned to direct a reference wavefront through the phase modulator and to direct a modulated reference wavefront from the phase modulator through the photorefractive material to interfere with the modulated object wavefront. The photorefractive material has a composition and a position such that interference of the modulated object wavefront and modulated reference wavefront occurs within the photorefractive material, providing a full-field, real-time image signal of the encompassed surface area.

  9. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise ("snow"). VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality, and it would be especially useful for tornado studies, tracking whirling objects and helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  10. Reflective echo tomographic imaging using acoustic beams

    DOEpatents

    Kisner, Roger; Santos-Villalobos, Hector J

    2014-11-25

    An inspection system includes a plurality of acoustic beamformers, each of which includes a plurality of acoustic transmitter elements. The system also includes at least one controller configured to cause each of the plurality of acoustic beamformers to generate an acoustic beam directed to a point in a volume of interest during a first time. Based on the reflected wave intensity detected at a plurality of acoustic receiver elements, an image of the volume of interest can be generated.

  11. Video imaging for Nuclear Safeguards

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Brown, J.E.; Rodriguez, C.A.; Stoltz, L.A.

    1994-04-01

    The field of Nuclear Safeguards has received increasing amounts of public attention since the events of the Iraq-UN conflict over Kuwait, the dismantlement of the former Soviet Union, and more recently, the North Korean resistance to nuclear facility inspections by the International Atomic Energy Agency (IAEA). The role of nuclear safeguards in these and other events relating to the world's nuclear material inventory is to assure safekeeping of these materials and to verify the inventory and usage of these materials as reported by states that have signed the Nuclear Nonproliferation Treaty. Nuclear Safeguards are measures prescribed by domestic and international regulatory bodies and implemented by the nuclear facility or the regulatory body. These measures include destructive and nondestructive analysis of product materials and process by-products for materials control and accountancy purposes, physical protection for domestic safeguards, and containment and surveillance for international safeguards. In this presentation we will introduce digital video image processing and analysis systems that have been developed at Los Alamos National Laboratory for application to the nuclear safeguards problem. Of specific interest to this audience is the detector-activated predictive wavelet transform image coding used to drastically reduce the data storage requirements for these unattended, remote safeguards systems.

  12. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    SciTech Connect

    Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

    2006-09-05

    Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed, and successful applications from previous surveys of fish under ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss the use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.
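The paired-laser sizing mentioned above reduces to a simple proportionality, sketched below. The 10 cm beam spacing, the function name, and the example numbers are illustrative assumptions, not values from the paper: the known physical spacing of the two parallel laser beams gives the image scale at the fish's range, from which the fish's length in pixels can be converted to real units.

```python
def fish_length_cm(laser_px, fish_px, laser_cm=10.0):
    """Estimate fish length from a video frame using paired parallel lasers.

    laser_px -- pixel distance between the two laser dots projected on the fish
    fish_px  -- pixel length of the fish (snout to tail) in the same frame
    laser_cm -- known physical spacing of the parallel laser beams

    Because the beams are parallel, their dot spacing on the fish is laser_cm
    regardless of range, so cm-per-pixel at the fish is laser_cm / laser_px.
    """
    return fish_px * laser_cm / laser_px

# Dots 50 px apart and a 200 px fish imply a 40 cm fish.
length = fish_length_cm(laser_px=50, fish_px=200)
```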

  13. Enhanced Video Surveillance (EVS) with speckle imaging

    SciTech Connect

    Carrano, C J

    2004-01-13

    Enhanced Video Surveillance (EVS) with Speckle Imaging is a high-resolution imaging system that substantially improves resolution and contrast in images acquired over long distances. This technology will increase image resolution up to an order of magnitude or greater for video surveillance systems. The system's hardware components are all commercially available and consist of a telescope or large-aperture lens assembly, a high-performance digital camera, and a personal computer. The system's software, developed at LLNL, extends standard speckle-image-processing methods (used in the astronomical community) to solve the atmospheric blurring problem associated with imaging over medium to long distances (hundreds of meters to tens of kilometers) through horizontal or slant-path turbulence. This novel imaging technology will not only enhance national security but also will benefit law enforcement, security contractors, and any private or public entity that uses video surveillance to protect their assets.

  14. Marking spatial parts within stereoscopic video images

    NASA Astrophysics Data System (ADS)

    Belz, Constance; Boehm, Klaus; Duong, Thanh; Kuehn, Volker; Weber, Martin

    1996-04-01

    The technology of stereoscopic imaging enables reliable online telediagnoses. Applications of telediagnosis include medicine and, more generally, telerobotics. To allow the participants in a telediagnosis to mark spatial parts within the stereoscopic video image, graphic tools and automatisms have to be provided. Marking spatial parts and objects inside a stereoscopic video image is a nontrivial interaction technique. The markings themselves have to be 3D elements instead of 2D markings, which would lead to an alienated effect 'in' the stereoscopic video image. Furthermore, one problem to be tackled here is that the content of the stereoscopic video image is unknown. This is in contrast to 3D Virtual Reality scenes, which enable easy 3D interaction because all the objects and their positions within the 3D scene are known. The goals of our research comprised the development of new interaction paradigms and marking techniques for stereoscopic video images, as well as an investigation of input devices appropriate for this interaction task. We implemented these interaction techniques in a test environment and, to that end, integrated computer graphics into stereoscopic video images. In order to evaluate the new interaction techniques, a user test was carried out. The results of our research are presented here.

  15. First images of thunder: Acoustic imaging of triggered lightning

    NASA Astrophysics Data System (ADS)

    Dayeh, M. A.; Evans, N. D.; Fuselier, S. A.; Trevino, J.; Ramaekers, J.; Dwyer, J. R.; Lucia, R.; Rassoul, H. K.; Kotovsky, D. A.; Jordan, D. M.; Uman, M. A.

    2015-07-01

    An acoustic camera comprising a linear microphone array is used to image the thunder signature of triggered lightning. Measurements were taken at the International Center for Lightning Research and Testing in Camp Blanding, FL, during the summer of 2014. The array was positioned in an end-fire orientation thus enabling the peak acoustic reception pattern to be steered vertically with a frequency-dependent spatial resolution. On 14 July 2014, a lightning event with nine return strokes was successfully triggered. We present the first acoustic images of individual return strokes at high frequencies (>1 kHz) and compare the acoustically inferred profile with optical images. We find (i) a strong correlation between the return stroke peak current and the radiated acoustic pressure and (ii) an acoustic signature from an M component current pulse with an unusual fast rise time. These results show that acoustic imaging enables clear identification and quantification of thunder sources as a function of lightning channel altitude.

  16. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
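The idea of adding information from multiple registered frames can be sketched in a few lines. This is a generic phase-correlation approach under the simplifying assumption of pure integer translation; the actual VISAR algorithm also handles rotation and zoom and is not reproduced here.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Integer (dy, dx) translation of img relative to ref via phase correlation."""
    X = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    X /= np.abs(X) + 1e-12          # keep only phase (whitened cross-power spectrum)
    corr = np.fft.ifft2(X).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped (circular) peak indices to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stabilize_and_stack(frames):
    """Register every frame to the first, then average to suppress noise."""
    ref = frames[0]
    out = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = phase_correlation_shift(ref, f)
        out += np.roll(f, (-dy, -dx), axis=(0, 1))  # undo the measured shift
    return out / len(frames)

# Toy scene: two bright points; a copy shifted 3 px down, 2 px left.
ref = np.zeros((32, 32))
ref[5, 7] = 1.0
ref[10, 3] = 0.5
shifted = np.roll(ref, (3, -2), axis=(0, 1))
```

Averaging N registered frames reduces uncorrelated noise by roughly a factor of sqrt(N), which is why stacking less than a second of video can reveal detail invisible in any single frame.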

  17. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this QuickTime movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  18. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this QuickTime movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  19. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional elastodynamic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity, and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  20. Video Image Communication And Retrieval - Updated

    NASA Technical Reports Server (NTRS)

    Wall, Ray J.; Jepsen, Paul L.; Andersen, Kurt K.; Bartholomew, Paul D.; Deen, Robert G.; Girard, Michael A.; Greer, Thomas C.; Hodges, David R.; Jentoft-Nilsen, Merit; Lewicki, Scott A.; Lorre, Jean J.; Meisl, Chris C.; Moss, Florance F.; O'Shaughnessy, Megan A.; Pohorsky, Steven; Tews, Sheila M.; Runkle, Allan J.; Vasquez, Cesar A.; Vuong, Neil; Yagi, Gary M.

    1991-01-01

    Video Image Communication and Retrieval (VICAR) package of computer programs is general-purpose image-processing software system. Intended for processing data from Jet Propulsion Laboratory's unmanned planetary spacecraft, now used in variety of other applications, including processing of biomedical images, cartography, studies of Earth resources, and geological exploration. Development of newest version of VICAR emphasizes standardized, easily-understood user interface, shield between user and host operating system, and comprehensive array of image-processing capabilities.

  1. Acoustic emissions of digital data video projectors- Investigating noise sources and their change during product aging

    NASA Astrophysics Data System (ADS)

    White, Michael Shane

    2005-09-01

    Acoustic emission testing continues to be a growing part of IT and telecommunication product design, as product noise is increasingly becoming a differentiator in the marketplace. This is especially true for digital/video display companies, such as InFocus Corporation, considering the market shift of these products to the home entertainment consumer as retail prices drop and performance factors increase. Projectors and displays using Digital Light Processing(tm) [DLP(tm)] technology incorporate a device known as a ColorWheel(tm) to generate the colors displayed at each pixel in the image. These ColorWheel(tm) devices spin at very high speeds and can generate high-frequency tones not typically heard in liquid crystal displays and other display technologies. Also, acoustic emission testing typically occurs at the beginning of product life and is a measure of acoustic energy emitted at this point in the lifecycle. Since the product is designed to be used over a long period of time, there is concern as to whether the acoustic emissions change over the lifecycle of the product, whether these changes will result in a level of nuisance to the average customer, and whether this nuisance begins to develop before the end of the product's intended lifetime.

  2. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on previous work of ours and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge. Masking is more consistent on the darker side of the edge.
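The subband-weighting idea can be sketched with a one-level Haar transform. This is a generic illustration with made-up weights; the paper's HVS-optimised weights and JPEG 2000 integration are not reproduced here. Detail subbands to which the HVS is less sensitive are attenuated before quantization so they consume fewer bits:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.
    Input height and width must be even."""
    a = (img[0::2] + img[1::2]) / 2.0   # vertical averages
    d = (img[0::2] - img[1::2]) / 2.0   # vertical details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0   # coarse approximation
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return LL, LH, HL, HH

def weight_subbands(img, weights=(1.0, 0.5, 0.5, 0.25)):
    """Scale each subband by a perceptual weight before quantization, so a
    uniform quantizer spends fewer bits on less visible subbands.
    The weights here are arbitrary placeholders, not the paper's values."""
    return [w * band for w, band in zip(weights, haar2d(img))]
```

A constant image has all its energy in the LL band, so the detail subbands come out exactly zero, which is why smooth HDR regions compress cheaply under this scheme.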

  3. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion, and processing of aerial imagery in order to leverage full-motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight, the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  4. Video surveillance with speckle imaging

    DOEpatents

    Carrano, Carmen J.; Brase, James M.

    2007-07-17

    A surveillance system looks through the atmosphere along a horizontal or slant path. Turbulence along the path causes blurring. The blurring is corrected by speckle processing short exposure images recorded with a camera. The exposures are short enough to effectively freeze the atmospheric turbulence. Speckle processing is used to recover a better quality image of the scene.
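A heavily simplified stand-in for speckle processing is "shift-and-add": re-centre each short exposure on its brightest speckle, then average. The production methods use Fourier-domain statistics (bispectrum or Knox-Thompson), but the sketch shows why exposures short enough to freeze the turbulence can be combined without blurring. The frame size and jitter model below are made up:

```python
import numpy as np

def shift_and_add(frames):
    """Naive shift-and-add speckle combination: re-centre each short exposure
    on its brightest pixel, then average, so the randomly displaced speckle
    patterns reinforce instead of smearing as they would in a long exposure."""
    stack = []
    for f in frames:
        peak = np.unravel_index(np.argmax(f), f.shape)
        centre = (f.shape[0] // 2, f.shape[1] // 2)
        stack.append(np.roll(f, (centre[0] - peak[0], centre[1] - peak[1]),
                             axis=(0, 1)))
    return np.mean(stack, axis=0)

# Synthetic test: a point source jittered by "turbulence" over 50 short frames.
rng = np.random.default_rng(0)
frames = []
for _ in range(50):
    f = np.zeros((33, 33))
    dy, dx = rng.integers(-5, 6, size=2)   # random tilt per exposure
    f[16 + dy, 16 + dx] = 1.0
    frames.append(f)
sharp = shift_and_add(frames)          # point stays concentrated
blurred = np.mean(frames, axis=0)      # plain long-exposure average smears it
```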

  5. Latino Film and Video Images.

    ERIC Educational Resources Information Center

    Vazquez, Blanca, Ed.

    1990-01-01

    This theme issue of the "Centro Bulletin" examines media stereotypes of Latinos and presents examples of alternatives. "From Assimilation to Annihilation: Puerto Rican Images in U.S. Films" (R. Perez) traces the representation of Puerto Ricans from the early days of television to the films of the 1970s. "The Latino 'Boom' in Hollywood" (C. Fusco)…

  6. Video guidance, landing, and imaging systems

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Rice, R. B.; Moog, R. D.

    1975-01-01

    The adaptive potential of video guidance technology for earth orbital and interplanetary missions was explored. The application of video acquisition, pointing, tracking, and navigation technology to three primary missions was considered: planetary landing, earth resources satellite, and spacecraft rendezvous and docking. It was found that an imaging system can be mechanized to provide a spacecraft or satellite with a considerable amount of adaptability with respect to its environment. It also provides a level of autonomy essential to many future missions and enhances their data gathering ability. The feasibility of an autonomous video guidance system capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was successfully demonstrated in the laboratory. The techniques developed for acquisition, pointing, and tracking show promise for recognizing and tracking coastlines, rivers, and other constituents of interest. Routines were written and checked for rendezvous, docking, and station-keeping functions.

  7. Millimeter-wave video rate imagers

    NASA Astrophysics Data System (ADS)

    Huguenin, G. Richard

    1997-06-01

    The author will describe millimeter wave focal plane array (FPA) imagers developed primarily for concealed weapons detection (CWD) and through wall surveillance (TWS) applications. Both passive (radiometric) and active (radar) imagers will be described. The technology employed in these cameras is ideally suited to a wide range of other applications as well. Traditionally, passive millimeter wave images have been generated using scanned sensors of various types ranging from single elements to line arrays. A line scanner using FPA technology is being developed at Millimetrix for CWD and other applications. Scanning imagers, however, cannot meet the frame rate and sensitivity requirements for some applications. Certain CWD applications, in particular, require a passive, video-rate (30 fps) imager, which we are also developing using a patented focal plane array technology we call MillivisionTM. Similarly, TWS applications demand an active, video-rate imager which shares much of the same MillivisionTM FPA technology. Customers always need more resolution, more sensitivity, and a wider field-of-view, all in the smallest possible package and at the lowest cost. To meet these difficult requirements, the MillivisionTM video rate imagers operate near 94 GHz and employ active optics and filled focal plane arrays, both of which will be briefly described. The optimally filled FPA is small-angle scanned (dithered) electronically relative to the scene in a 4 X 4 matrix to achieve a 2 X 2 oversampling of the image. A multiplicative 'super resolution' algorithm is then used to digitally enhance the spatial frequency resolution of the resulting image by a factor of approximately 2.

  8. Underwater imaging with a moving acoustic lens.

    PubMed

    Kamgar-Parsi, B; Rosenblum, L J; Belcher, E O

    1998-01-01

    The acoustic lens is a high-resolution, forward-looking sonar for three-dimensional (3-D) underwater imaging. We discuss processing the lens data for recreating and visualizing the scene. Acoustical imaging, compared to optical imaging, is sparse and low-resolution. To achieve higher resolution, we obtain a denser sample by mounting the lens on a moving platform and passing over the scene. This introduces the problem of data fusion from multiple overlapping views for scene formation, which we discuss. We also discuss the improvements in object reconstruction by combining data from several passes over an object. We present algorithms for pass registration and show that this process can be done with enough accuracy to improve the image and provide greater detail about the object. The results of in-water experiments show the degree to which size and shape can be obtained under (nearly) ideal conditions.

  9. Video imaging of cardiac transmembrane activity

    NASA Astrophysics Data System (ADS)

    Baxter, William T.; Davidenko, Jorge; Cabo, Candido; Jalife, Jose

    1994-05-01

    High resolution movies of transmembrane electrical activity in thin (0.5 mm) slices of sheep epicardial muscle were recorded by optical imaging with voltage-sensitive dyes and a CCD video camera. Activity was monitored at approximately 65,000 picture elements per 2 cm2 tissue for several seconds at a 16 msec sampling rate. Simple image processing operations permitted visualization and analysis of the optical signal, while isochrone maps depicted complex patterns of propagation. Maps of action potential duration and regional intermittent conduction block showed that even these small preparations may exhibit considerable spatial heterogeneity. Self-sustaining reentrant activity in the form of spiral waves was consistently initiated and observed either drifting across the tissue or anchored to small heterogeneities. The current limitations of video optical mapping are a low signal-to-noise ratio and low temporal resolution. The advantages include high spatial resolution and direct correlation of electrical activity with anatomy. Video optical mapping permits the analysis of the electrophysiological properties of any region of the preparation during both regular stimulation and reentrant activation, providing a useful tool for studying cardiac arrhythmias.

  10. Fusion of acoustic measurements with video surveillance for estuarine threat detection

    NASA Astrophysics Data System (ADS)

    Bunin, Barry; Sutin, Alexander; Kamberov, George; Roh, Heui-Seol; Luczynski, Bart; Burlick, Matt

    2008-04-01

    Stevens Institute of Technology has established a research laboratory environment in support of the U.S. Navy in the area of Anti-Terrorism and Force Protection. Called the Maritime Security Laboratory, or MSL, it provides the capabilities of experimental research to enable development of novel methods of threat detection in the realistic environment of the Hudson River Estuary. In MSL, this is done through a multi-modal interdisciplinary approach. In this paper, underwater acoustic measurements and video surveillance are combined. Stevens' researchers have developed a specialized prototype video system to identify, video-capture, and map surface ships in a sector of the estuary. The combination of acoustic noise with video data for different kinds of ships in the Hudson River enabled estimation of sound attenuation over a wide frequency band. It also enabled the collection of a noise library of various ships that can be used for ship classification by passive acoustic methods. Acoustics and video can be used to determine a ship's position. This knowledge can be used for ship noise suppression in hydrophone arrays in underwater threat detection. Preliminary experimental results of position determination are presented in the paper.

  11. Feature-preserving image/video compression

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2005-10-01

    Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high-quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. The fields of medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. It is estimated that medium-sized hospitals in the US generate terabytes of MRI and X-ray images, which are stored in very large databases that are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new, efficient biometric-based person verification/authentication systems. In the future, such systems can provide an additional layer of security for online transactions or for real-time surveillance.

  12. Full-Field Imaging of GHz Film Bulk Acoustic Resonator Motion

    SciTech Connect

    Telschow, Kenneth Louis; Deason, Vance Albert; Cottle, David Lynn; Larson III, J. D.

    2003-10-01

    A full-field view laser ultrasonic imaging method has been developed that measures acoustic motion at a surface without scanning. Images are recorded at normal video frame rates by using dynamic holography with photorefractive interferometric detection. By extending the approach to ultra high frequencies, an acoustic microscope has been developed that is capable of operation at gigahertz frequency and micron length scales. Both acoustic amplitude and phase are recorded, allowing full calibration and determination of phases to within a single arbitrary constant. Results are presented of measurements at frequencies of 800-900 MHz, illustrating a multitude of normal mode behavior in electrically driven thin film acoustic resonators. Coupled with microwave electrical impedance measurements, this imaging mode provides an exceptionally fast method for evaluation of electric-to-acoustic coupling of these devices and their performance. Images of 256 × 240 pixels are recorded at 18 fps, synchronized to obtain both in-phase and quadrature detection of the acoustic motion. Simple averaging provides sensitivity to the subnanometer level at each pixel, calibrated over the image using interferometry. Identification of specific acoustic modes and their relationship to electrical impedance characteristics shows the advantages and overall high speed of the technique.

  13. Full-Field Imaging of Acoustic Motion at Nanosecond Time and Micron Length Scales

    SciTech Connect

    Telschow, Kenneth Louis; Deason, Vance Albert; Cottle, David Lynn; Larson III, John D.

    2002-10-01

    A full-field view laser ultrasonic imaging method has been developed that measures acoustic motion at a surface without scanning. Images are recorded at normal video frame rates by employing dynamic holography using photorefractive interferometric detection. By extending the approach to ultra high frequencies, an acoustic microscope has been developed capable of operation on the nanosecond time and micron length scales. Both acoustic amplitude and phase are recorded, allowing full calibration and determination of phases to within a single arbitrary constant. Results are presented of measurements at frequencies of 800-900 MHz, illustrating a multitude of normal mode behavior in electrically driven thin film acoustic resonators. Coupled with microwave electrical impedance measurements, this imaging mode provides an exceptionally fast method for evaluation of electric-to-acoustic coupling and performance of these devices. Images of 256 × 240 pixels are recorded at 18 Hz, synchronized to obtain both in-phase and quadrature detection of the acoustic motion. Simple averaging provides sensitivity to the subnanometer level, calibrated over the image using interferometry. Identification of specific acoustic modes and their relationship to electrical impedance characteristics shows the advantages and overall high speed of the technique.

  14. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1977-01-01

    The use of video-recorded fluoroscopic images as input data for digital reconstruction of objects from their projections is examined. The fluoroscopic and the scanning apparatus used for the experiments are of a commercial type already in existence in most hospitals. It is shown that for beams with divergence up to about 15 deg, one can use a convolution algorithm designed for the parallel radiation case with negligible degradation both quantitatively and from a visual quality standpoint. This convolution algorithm is computationally more efficient than either the algebraic techniques or the convolution algorithms for radially diverging data. Results from studies on Lucite phantoms and a freshly sacrificed rat are included.
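The parallel-beam convolution algorithm referred to above is, in essence, filtered back-projection: convolve each projection with a ramp (Ram-Lak) kernel, then smear the filtered values back across the image along each projection direction. Below is a minimal numpy sketch with an arbitrary grid size, angle set, and test phantom, not the authors' fluoroscopic geometry:

```python
import numpy as np

def ramlak_kernel(half, tau=1.0):
    """Discrete ramp-filter kernel (Kak & Slaney form): h[0] = 1/(4*tau^2),
    h[k] = 0 for even k, h[k] = -1/(pi*k*tau)^2 for odd k."""
    k = np.arange(-half, half + 1)
    h = np.zeros(2 * half + 1)
    h[half] = 1.0 / (4.0 * tau ** 2)
    odd = (k % 2) != 0
    h[odd] = -1.0 / (np.pi * k[odd] * tau) ** 2
    return h

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam filtered back-projection onto an n_det x n_det grid."""
    n_ang, n_det = sinogram.shape
    h = ramlak_kernel(n_det // 2)
    filtered = np.array([np.convolve(p, h, mode="same") for p in sinogram])
    c = (n_det - 1) / 2.0
    ys, xs = np.mgrid[0:n_det, 0:n_det] - c
    img = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate that each image pixel projects onto at this angle.
        t = xs * np.cos(ang) + ys * np.sin(ang) + c
        img += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
    return img * np.pi / n_ang

# Synthetic sinogram of a centred unit-density disk: its parallel projection
# is 2*sqrt(r^2 - t^2), identical at every angle.
n_det, r = 65, 12.0
t = np.arange(n_det) - (n_det - 1) / 2.0
proj = 2.0 * np.sqrt(np.maximum(r ** 2 - t ** 2, 0.0))
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
recon = fbp(np.tile(proj, (len(angles), 1)), angles)
```

The abstract's point about beam divergence corresponds to reusing exactly this parallel-beam filter on mildly fan-shaped data: for divergence up to about 15 degrees the geometric error stays small enough that the cheaper parallel-case convolution remains acceptable.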

  15. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  16. Ocular torsion quantification with video images.

    PubMed

    Bos, J E; de Graaf, B

    1994-04-01

    The present paper describes a technique to quantify eye rotations about the visual axis (ocular torsion). Two digitized, polar-transformed images of the iris are displayed on a video monitor to facilitate visual comparison and manual interaction. Emphasis is placed on error analysis and on the method's simplicity when applied to static ocular torsion measurement. The implementation, which averages over ocular torsion determined in partitioned iris images, yields a theoretical resolution of 5' of arc. In a control experiment with an artificial eye, the accuracy was shown to be better than 14' of arc. In practice, the measuring device was validated against data from the literature by means of an experiment on ocular torsion in humans during tilt and hypergravity conditions (up to 3 g).

  17. Methane distribution in porewaters of the Eastern Siberian Shelf Sea - chemical, acoustic, and video observations

    NASA Astrophysics Data System (ADS)

    Bruchert, V.; Sawicka, J. E.; Samarkin, V.; Noormets, R.; Stockmann, G. J.; Bröder, L.; Rattray, J.; Steinbach, J.

    2015-12-01

    We present porewater methane and sulfate concentrations, and the isotope composition of carbon dioxide, from 18 sites in areas of reported high water-column methane concentrations on the Siberian shelf. Echosounder imaging and video imagery of the benthic environment were used to detect potential bubble emission from the sea bottom and to locate high methane emission areas. In areas where bubble flares were identified by acoustic echosounder imaging, recovered sediment cores provided evidence for slightly elevated porewater methane concentrations 10 cm below the sediment surface relative to sites without flares. Throughout the recovered sediment depth intervals, porewater concentrations of methane were more than a factor of 300 below the gas saturation limit at sea surface pressure. In addition, surface sediment video recordings provided no evidence for bubble emissions in the investigated methane hotspot areas, although at nearby sites bubbles were detected higher in the water column. The conflicting observations of acoustic indications of rising bubbles and the absence of bubbles and methane oversaturation in any of the sediment cores during the whole SWERUS cruise suggest that advective methane seepage is a spatially limited phenomenon that is difficult to capture with routine ship-based core sampling methods in this field area. Recovery of a sediment core from one high-activity site indicated steep gradients in dissolved sulfate and methane in the first 8 cm of sediment, pointing to the presence of anaerobic methane oxidation at a site with a high upward flux of methane. Based on the decrease of methane towards the sediment surface and the rates of sulfate reduction-coupled methane oxidation, most of the upward-transported methane was oxidized within the sediment. This conclusion is further supported by the stable isotope composition of dissolved carbon dioxide in porewaters and the precipitation of calcium carbonate minerals only found in sediment at this site

  18. Acoustic streaming in lithotripsy fields: preliminary observation using a particle image velocimetry method.

    PubMed

    Choi, Min Joo; Doh, Doeg Hee; Hwang, Tae Gyu; Cho, Chu Hyun; Paeng, Dong Guk; Rim, Gun Hee; Coleman, A J

    2006-02-01

    This study considers the acoustic streaming in water produced by a lithotripsy pulse. A particle image velocimetry (PIV) method was employed to visualize the acoustic streaming produced by an electromagnetic shock wave generator, using video images of light-scattering particles suspended in water. The visualized streaming features, including several local peaks and vortices around or at the beam focus, were clearly visible to the naked eye over all settings of the lithotripter from 10 to 18 kV. The measured peak streaming velocities varied from 10 to 40 mm s(-1) with the charging voltage setting. Since the streaming velocity was estimated from a series of video images of particles averaged over 1/60 s, and this time resolution, limited by the video frame rate, is 1-2 orders of magnitude coarser than the driving acoustic activity, the measured velocities are expected to be underestimates; indeed, they were of a similar order of magnitude lower than velocities calculated from a simple theoretical consideration. Despite this underestimation, it was shown that, as predicted by theory, the magnitude of the streaming velocity measured by the present PIV method was proportional to the acoustic intensity. In particular, it had an almost linear correlation with peak negative pressure (r=0.98683, p=0.0018). PMID:16376400
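
    The PIV estimation step can be sketched as a cross-correlation of successive particle images: the peak of the correlation gives the mean displacement, and dividing by the inter-frame time gives velocity. This single-window FFT correlation is a generic PIV illustration, not the authors' processing chain.

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Mean particle displacement between two frames from the peak of their
    FFT-based circular cross-correlation (one interrogation window)."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map shifts beyond half the window size back to negative displacements
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

# scatter random "particles", then shift them by a known displacement
rng = np.random.default_rng(0)
frame1 = (rng.random((64, 64)) > 0.95).astype(float)
frame2 = np.roll(frame1, (3, 5), axis=(0, 1))   # 3 px down, 5 px right
dy, dx = piv_displacement(frame1, frame2)
# velocity = displacement * pixel_size / dt, with dt = 1/60 s at video rate
```

    The 1/60 s video averaging the abstract discusses enters through dt: any streaming fluctuation faster than the frame interval is smeared out, which is why the measured velocities underestimate the theory.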

  19. Quantitative Ultrasound Imaging Using Acoustic Backscatter Coefficients.

    NASA Astrophysics Data System (ADS)

    Boote, Evan Jeffery

    Current clinical ultrasound scanners render images whose brightness levels are related to the degree of backscattered energy from the tissue being imaged. These images offer the interpreter a qualitative impression of the scattering characteristics of the tissue being examined, but because of the complex factors that affect the amplitude and character of the echoed acoustic energy, it is difficult to make quantitative assessments of the scattering nature of the tissue, and thus difficult to make a precise diagnosis when subtle disease effects are present. In this dissertation, a method of data reduction for determining acoustic backscatter coefficients is adapted for use in forming quantitative ultrasound images of this parameter. In these images, the brightness level of an individual pixel corresponds to the backscatter coefficient determined for the spatial position represented by that pixel. The data reduction method rigorously accounts for extraneous factors that affect the scattered echo waveform and has been demonstrated to accurately determine backscatter coefficients under a wide range of conditions. The algorithms and procedures used to form backscatter coefficient images are described. These were tested using tissue-mimicking phantoms that have regions of varying scattering levels; another phantom has a fat-mimicking layer for testing these techniques under more clinically relevant conditions. Backscatter coefficient images were also formed of in vitro human liver tissue. A clinical ultrasound scanner has been adapted for use as a backscatter coefficient imaging platform. The digital interface between the scanner and the computer used for data reduction is described. Initial tests using phantoms are presented. A study of backscatter coefficient imaging of in vivo liver was performed using several normal, healthy human subjects.

  20. Acoustic Imaging of Snowpack Physical Properties

    NASA Astrophysics Data System (ADS)

    Kinar, N. J.; Pomeroy, J. W.

    2011-12-01

    Measurements of snowpack depth, density, structure, and temperature have often been conducted with snowpits and invasive measurement devices. Previous research has shown that acoustic waves passing through snow are capable of measuring these properties. An experimental observation device (SAS2, System for the Acoustic Sounding of Snow) was used to autonomously send audible sound waves into the top of the snowpack and to receive and process the waves reflected from the interior and bottom of the snowpack. A loudspeaker and a microphone array, separated by an offset distance, were suspended in the air above the surface of the snowpack. Sound waves produced by the loudspeaker as frequency-swept sequences and maximum-length sequences were used as source signals. Up to 24 microphones measured the audible signal from the snowpack. The signal-to-noise ratio was compared between sequences in the presence of environmental noise contributed by wind and reflections from vegetation. Beamforming algorithms were used to reject spurious reflections and to compensate for movement of the sensor assembly during data collection. A custom-designed circuit with digital signal processing hardware implemented an inversion algorithm to relate the reflected sound wave data to snowpack physical properties and to create a two-dimensional image of snowpack stratigraphy. The low-power circuit was battery powered, and WiFi and Bluetooth interfaces enabled the display of processed data on a mobile device. Acoustic observations were logged to an SD card after each measurement. The SAS2 system was deployed at remote field locations in the Rocky Mountains of Alberta, Canada. Acoustic snow property data were compared with data collected from gravimetric sampling, thermocouple arrays, radiometers, and snowpit observations of density, stratigraphy, and crystal structure. Aspects for further research and limitations of the acoustic sensing system are also discussed.
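
    Frequency-swept source signals like those described above are typically processed with a matched filter (correlation with the transmitted sweep) to locate reflections in time. The sketch below assumes illustrative sweep parameters and a simulated echo; it is not SAS2's actual processing.

```python
import numpy as np

fs = 44100.0                      # audible-band sampling rate (assumed)
dur = 0.05                        # 50 ms frequency-swept sequence (assumed)
t = np.arange(int(fs * dur)) / fs
f0, f1 = 1000.0, 8000.0           # linear sweep, 1-8 kHz (illustrative values)
sweep = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t**2))

# simulate the received signal: attenuated snowpack reflection plus noise
delay = 700                       # echo arrival time, in samples
rx = np.zeros(4096)
rx[delay:delay + len(sweep)] += 0.3 * sweep
rx += 0.05 * np.random.default_rng(1).standard_normal(rx.shape)

# matched filter: correlate the received signal with the transmitted sweep
mf = np.correlate(rx, sweep, mode='valid')
detected = int(np.argmax(np.abs(mf)))
# reflector depth ~ detected / fs * c_snow / 2, given a sound speed in snow
```

    The matched filter compresses the long sweep into a sharp peak at the echo delay, which is what makes swept sequences robust against the wind noise the abstract mentions.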

  1. Real-time adaptive video image enhancement

    NASA Astrophysics Data System (ADS)

    Garside, John R.; Harrison, Chris G.

    1999-07-01

    As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal in real time for the degradation caused by atmospheric haze. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing these two data streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with significantly accelerated throughput.
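
    A simple scattering model of the kind described takes the standard single-scattering form I = J*t + A*(1 - t), with transmission t = exp(-beta * range), so knowing the slant range lets the model be applied in reverse. The sketch below inverts that model; the function name, parameter values, and per-pixel range map are assumptions, not the authors' implementation.

```python
import numpy as np

def restore_contrast(observed, slant_range, beta, airlight):
    """Invert the single-scattering haze model I = J*t + A*(1 - t),
    with transmission t = exp(-beta * range), to recover scene radiance J."""
    t = np.exp(-beta * slant_range)
    return np.clip((observed - airlight * (1.0 - t)) / t, 0.0, 1.0)

# round trip: synthesize haze over a scene with known per-pixel range, undo it
rng = np.random.default_rng(0)
scene = rng.random((8, 8))                    # true scene radiance in [0, 1)
ranges = 2.0 + 3.0 * rng.random((8, 8))       # slant range to each pixel (km)
beta, airlight = 0.4, 0.9                     # extinction and sky illumination
hazy = scene * np.exp(-beta * ranges) + airlight * (1 - np.exp(-beta * ranges))
recovered = restore_contrast(hazy, ranges, beta, airlight)
```

    In the airborne setting, `ranges` would come from the inertial navigation system's pose estimate, while `beta` and `airlight` are the parameters fitted adaptively from the contrast statistics of the video stream.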

  2. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
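
    The DC-like/AC-like monitoring idea can be sketched in software as a slowly adapting background plus a residual threshold. This toy model illustrates only the triggering concept; it is not the NASA VLSI state machine or its fuzzy-logic implementation.

```python
import numpy as np

class EventTrigger:
    """Toy software analogue of a video event trigger: a slowly adapting
    background tracks DC-like drift, while a large frame residual flags an
    AC-like event (illustrative sketch, with assumed parameter values)."""

    def __init__(self, alpha=0.05, threshold=10.0):
        self.background = None
        self.alpha = alpha          # background adaptation rate
        self.threshold = threshold  # mean-absolute-difference trigger level

    def update(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()   # initialize on the first frame
            return False
        residual = np.abs(frame - self.background).mean()
        self.background += self.alpha * (frame - self.background)
        return residual > self.threshold

trigger = EventTrigger()
quiet = np.full((32, 32), 100.0)
fired = [trigger.update(quiet) for _ in range(50)]       # steady scene
event = quiet.copy()
event[8:24, 8:24] = 250.0                                # an object appears
fired_event = trigger.update(event)
```

    In the archiving scheme the abstract describes, this boolean would gate which pre-trigger and post-trigger frames are retained from the high-rate memory buffer.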

  3. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  4. Method and apparatus for acoustic imaging of objects in water

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2005-01-25

    A method, system and underwater camera for acoustic imaging of objects in water or other liquids includes an acoustic source for generating an acoustic wavefront for reflecting from a target object as a reflected wavefront. The reflected acoustic wavefront deforms a screen on an acoustic side and correspondingly deforms the opposing optical side of the screen. An optical processing system is optically coupled to the optical side of the screen and converts the deformations on the optical side of the screen into an optical intensity image of the target object.

  5. Acoustic noise during functional magnetic resonance imaging.

    PubMed

    Ravicz, M E; Melcher, J R; Kiang, N Y

    2000-10-01

    Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For studies of the auditory system, acoustic noise generated during fMRI can interfere with assessments of this activation by introducing uncontrolled extraneous sounds. As a first step toward reducing the noise during fMRI, this paper describes the temporal and spectral characteristics of the noise present under typical fMRI study conditions for two imagers with different static magnetic field strengths. Peak noise levels were 123 and 138 dB re 20 microPa in a 1.5-tesla (T) and a 3-T imager, respectively. The noise spectrum (calculated over a 10-ms window coinciding with the highest-amplitude noise) showed a prominent maximum at 1 kHz for the 1.5-T imager (115 dB SPL) and at 1.4 kHz for the 3-T imager (131 dB SPL). The frequency content and timing of the most intense noise components indicated that the noise was primarily attributable to the readout gradients in the imaging pulse sequence. The noise persisted above background levels for 300-500 ms after gradient activity ceased, indicating that resonating structures in the imager or noise reverberating in the imager room were also factors. The gradient noise waveform was highly repeatable. In addition, the coolant pump for the imager's permanent magnet and the room air-handling system were sources of ongoing noise lower in both level and frequency than gradient coil noise. Knowledge of the sources and characteristics of the noise enabled the examination of general approaches to noise control that could be applied to reduce the unwanted noise during fMRI sessions. PMID:11051496
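
    The reported levels and spectra follow from standard sound-pressure-level and short-window spectral analysis of a pressure waveform. The sketch below applies both to a synthetic 1 kHz tone at 115 dB SPL, standing in for the 1.5-T imager's gradient noise; the sample rate and window placement are assumptions.

```python
import numpy as np

P_REF = 20e-6   # 20 micropascals, the reference pressure for dB SPL

def spl_db(pressure):
    """Sound pressure level in dB re 20 uPa from a pressure waveform in Pa."""
    return 20.0 * np.log10(np.sqrt(np.mean(pressure**2)) / P_REF)

def dominant_frequency(pressure, fs, t0, window=0.010):
    """Strongest spectral component in a 10-ms analysis window starting at t0."""
    seg = pressure[int(t0 * fs):int((t0 + window) * fs)]
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    freqs = np.fft.rfftfreq(len(seg), 1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]    # skip the DC bin

# synthetic "gradient noise": a 1 kHz tone at 115 dB SPL, as for the 1.5-T imager
fs = 44100
t = np.arange(int(0.1 * fs)) / fs
amplitude = np.sqrt(2.0) * P_REF * 10**(115.0 / 20.0)  # peak amp for 115 dB rms
tone = amplitude * np.sin(2 * np.pi * 1000.0 * t)
```

    A 10-ms window gives 100 Hz spectral resolution, which is adequate to separate the 1 kHz and 1.4 kHz gradient-noise maxima the study reports for the two imagers.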

  6. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e., observations taken at intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to the larger scene coverage.
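
    The change-mask construction described (a linear combination of intensity and gradient-magnitude difference images, thresholded adaptively) can be sketched as follows. The weights and the mean + k*std threshold rule are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude from central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def change_mask(img_a, img_b, w_int=1.0, w_grad=1.0, k=3.0):
    """Extended image differencing: a linear combination of intensity and
    gradient-magnitude difference images, thresholded adaptively at
    mean + k*std (a sketch of the idea, not the authors' formulation)."""
    diff = (w_int * np.abs(img_a - img_b)
            + w_grad * np.abs(grad_mag(img_a) - grad_mag(img_b)))
    return diff > diff.mean() + k * diff.std()

# a "recently parked vehicle" appears between two registered mosaics
before = np.zeros((40, 40))
after = before.copy()
after[10:15, 10:15] = 1.0
mask = change_mask(before, after)
```

    The gradient term makes the mask sensitive to structural change rather than global illumination shifts; the adaptive threshold adjusts automatically to residual misregistration noise, which is the dominant nuisance in mosaic pairs.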

  7. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general-purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts: a suite of applications programs and an executive that serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image-related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment. 
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  8. Interpreting Underwater Acoustic Images of the Upper Ocean Boundary Layer

    ERIC Educational Resources Information Center

    Ulloa, Marco J.

    2007-01-01

    A challenging task in physical studies of the upper ocean using underwater sound is the interpretation of high-resolution acoustic images. This paper covers a number of basic concepts necessary for undergraduate and postgraduate students to identify the most distinctive features of the images, providing a link with the acoustic signatures of…

  9. Video Imaging System Particularly Suited for Dynamic Gear Inspection

    NASA Technical Reports Server (NTRS)

    Broughton, Howard (Inventor)

    1999-01-01

    A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals, so that each tooth of interest is precisely located when it reaches a desired position that is illuminated in unison with a digital video camera, recording a single digital image of each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or may be stored to provide a history that can be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program, so that it may run for several days acquiring images without supervision from the user.

  10. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech, these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I., and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element), and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program, and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures, and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively scanned charge-coupled device (CCD) can operate at video frame rates and has 9 μm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. 
The camera is designed to operate in continuous mode

  11. Research on defogging technology of video image based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Owing to scattering by atmospheric particles, video captured by outdoor surveillance systems has low contrast and brightness, which directly affects the application value of such systems. Traditional defogging techniques have mostly been implemented in software, as defogging algorithms for single frames; moreover, these algorithms are computationally expensive and have high time complexity. Defogging of video based on a Digital Signal Processor (DSP), in turn, suffers from complex peripheral circuitry, cannot achieve real-time processing, and is hard to debug and upgrade. In this paper, using an improved dark channel prior algorithm, we propose a video defogging technique based on a Field Programmable Gate Array (FPGA). Compared with traditional defogging methods, high-resolution video can be processed in real time. Furthermore, the function modules of the system have been designed in a hardware description language. The results show that the FPGA-based defogging system can process video with a minimum resolution of 640×480 in real time. After defogging, the brightness and contrast of the video are improved effectively. Therefore, the proposed defogging technique has a wide variety of applications, including aviation, forest fire prevention, national security, and other important surveillance.
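
    The dark channel prior the paper builds on can be sketched in plain software as follows: a patch-minimum dark channel, airlight estimated from the brightest dark-channel pixels, then inversion of the haze model I = J*t + A*(1 - t). This is a generic rendering of He et al.'s method with assumed parameter values; it ignores the FPGA pipelining that is the paper's contribution.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Min over color channels, then a local patch minimum."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def defog(img, omega=0.95, t_min=0.1):
    """Single-image dehazing: airlight A from the brightest dark-channel
    pixels, transmission t, then inversion of I = J*t + A*(1 - t)."""
    dc = dark_channel(img)
    idx = np.argsort(dc.ravel())[-max(1, dc.size // 1000):]
    A = img.reshape(-1, 3)[idx].max(axis=0)
    t = np.maximum(1.0 - omega * dark_channel(img / A), t_min)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

# synthesize haze over a scene whose dark channel is zero, check transmission
rng = np.random.default_rng(0)
scene = rng.random((20, 20, 3))
scene[..., 0] = 0.0                      # every pixel has one dark channel
t_true, airlight = 0.6, 1.0
hazy = scene * t_true + airlight * (1.0 - t_true)
t_est = 1.0 - 0.95 * dark_channel(hazy).mean()   # ~ 1 - omega*(1 - t_true)
out = defog(hazy)
```

    The per-pixel patch minimum is exactly the kind of sliding-window operation that maps well onto FPGA line buffers, which is what allows the paper's real-time claim for 640×480 video.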

  12. Transthoracic Cardiac Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Bradway, David Pierson

    This dissertation investigates the feasibility of a real-time transthoracic Acoustic Radiation Force Impulse (ARFI) imaging system to measure myocardial function non-invasively in a clinical setting. Heart failure is an important cardiovascular disease and a leading cause of death in developed countries. Patients exhibiting heart failure with a low left ventricular ejection fraction (LVEF) can often be identified by clinicians, but patients with preserved LVEF may go undetected if they do not exhibit other signs and symptoms of heart failure. These cases motivate the development of transthoracic ARFI imaging to aid the early diagnosis of the structural and functional heart abnormalities leading to heart failure. M-mode ARFI imaging utilizes ultrasonic radiation force to displace tissue several micrometers in the direction of wave propagation, and conventional ultrasound tracks the response of the tissue to the force. This measurement is repeated rapidly at one location through the cardiac cycle, measuring the timing of and relative changes in myocardial stiffness. ARFI imaging was previously shown to be capable of measuring myocardial properties and function via invasive open-chest and intracardiac approaches. The prototype imaging system described in this dissertation is capable of rapid acquisition, processing, and display of ARFI images and shear wave elasticity imaging (SWEI) movies. Also presented is a rigorous safety analysis, including finite element method (FEM) simulations of tissue heating, hydrophone intensity and mechanical index (MI) measurements, and thermocouple measurements of transducer face heating. For the pulse sequences used in later animal and clinical studies, results from the safety analysis indicate that transthoracic ARFI imaging can be safely applied at rates and levels realizable on the prototype ARFI imaging system. 
Preliminary data are presented from in vivo trials studying changes in myocardial stiffness occurring under normal and abnormal

  13. Using SAS (trade name) color graphics for video image analysis

    SciTech Connect

    Borek, J.; Huber, A.

    1988-04-01

    Wind-tunnel studies are conducted to evaluate the temporal and spatial distributions of pollutants in the wake of a model building. As part of these studies, video pictures of smoke are being used to study the dispersion patterns of pollution in the wake of buildings. The video-image format has potential as a quantifiable electronic medium. Analysis of a series of selected pixels (picture elements) from the video images is used to evaluate temporal and spatial scales of smoke puffs in the wake of the building.

  14. Optimization of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In the first stage, an analysis was carried out to determine which images provide the most information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643

  15. Optimization of a biometric system based on acoustic images.

    PubMed

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In the first stage, an analysis was carried out to determine which images provide the most information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing acquisition time and computational burden, since the number of acoustic images was reduced.
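
    The weight-optimization idea can be sketched as maximizing the separation of fused genuine and impostor scores over per-image weights. The synthetic scores, the d' separation criterion, and the random search below are illustrative assumptions, not the authors' data or optimizer.

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic match scores for 16 acoustic images: the first 12 are assumed
# informative (genuine scores high), the last 4 mostly noise
genuine = np.hstack([0.8 + 0.05 * rng.standard_normal((200, 12)),
                     0.5 + 0.20 * rng.standard_normal((200, 4))])
impostor = np.hstack([0.2 + 0.05 * rng.standard_normal((200, 12)),
                      0.5 + 0.20 * rng.standard_normal((200, 4))])

def dprime(w):
    """Separation of the weighted-sum fused scores (genuine vs impostor)."""
    g, i = genuine @ w, impostor @ w
    return (g.mean() - i.mean()) / np.sqrt(0.5 * (g.var() + i.var()))

# crude optimization: random search over the weight simplex, seeded with
# uniform weights so the result can only improve on equal weighting
best_w = np.full(16, 1.0 / 16)
best_d = dprime(best_w)
for _ in range(2000):
    w = rng.random(16)
    w /= w.sum()
    d = dprime(w)
    if d > best_d:
        best_w, best_d = w, d
```

    With this kind of criterion, uninformative images naturally receive small weights, which mirrors the paper's finding that 12 of the 16 images carry essentially all of the discriminative information.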

  16. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin, and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of image processing functionality as well as reporting and archiving functions that ensure the repeatability of image analysis procedures and thus fulfill the formal aspects of image/video analysis work. A comparison of the proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases, as well as the method design, were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences.

  17. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin, and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of image processing functionality as well as reporting and archiving functions that ensure the repeatability of image analysis procedures and thus fulfill the formal aspects of image/video analysis work. A comparison of the proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases, as well as the method design, were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. PMID:27182830

  18. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image that corresponds to the indicator of a meter is calibrated, and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantage of automatic data collection in a relatively non-intrusive manner, without complicated or expensive electronic connections and without intensive manpower.
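The needle-reading step described above can be sketched as a simple geometric mapping from the detected needle tip to a calibrated value. This is an illustrative sketch only, not the patented method: the function name, dial geometry, and calibration constants are all hypothetical.

```python
import math

def needle_angle_to_reading(cx, cy, tip_x, tip_y,
                            angle_min, angle_max, value_min, value_max):
    """Map the needle angle of an analog dial to a meter reading.

    (cx, cy) is the dial centre and (tip_x, tip_y) the detected needle tip,
    both in image pixels; the angle range and value range come from a
    per-meter calibration step.
    """
    angle = math.degrees(math.atan2(tip_y - cy, tip_x - cx))
    # Linear interpolation between the calibrated end stops.
    frac = (angle - angle_min) / (angle_max - angle_min)
    return value_min + frac * (value_max - value_min)

# A dial whose needle sweeps from -135 deg (reading 0) to +135 deg (reading 100):
reading = needle_angle_to_reading(100, 100, 171, 171, -135.0, 135.0, 0.0, 100.0)
print(round(reading, 1))  # needle at +45 deg -> 66.7
```

In a full system the needle tip would be located by image processing in the calibrated region before this mapping is applied.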

  19. Method and apparatus for reading meters from a video image

    SciTech Connect

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  20. Aerospace video imaging systems for rangeland management

    NASA Technical Reports Server (NTRS)

    Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.

    1990-01-01

    This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.

  1. Acoustic radiation force-based elasticity imaging methods

    PubMed Central

    Palmeri, Mark L.; Nightingale, Kathryn R.

    2011-01-01

    Conventional diagnostic ultrasound images portray differences in the acoustic properties of soft tissues, whereas ultrasound-based elasticity images portray differences in their elastic properties (i.e., stiffness, viscosity). The benefit of elasticity imaging lies in the fact that many soft tissues share similar ultrasonic echogenicities but have different mechanical properties, which can be used to clearly visualize normal anatomy and delineate pathological lesions. Acoustic radiation force-based elasticity imaging methods use acoustic radiation force to transiently deform soft tissues; the dynamic displacement response of those tissues is measured ultrasonically and used to estimate the tissues' mechanical properties. Both qualitative images and quantitative elasticity metrics can be reconstructed from these measured data, providing complementary information to both diagnose and longitudinally monitor disease progression. Recently, acoustic radiation force-based elasticity imaging techniques have moved from the laboratory to the clinical setting, where clinicians are beginning to characterize tissue stiffness as a diagnostic metric, and commercial implementations of radiation force-based ultrasonic elasticity imaging are beginning to appear on the market. This article provides an overview of acoustic radiation force-based elasticity imaging, including a review of the relevant soft tissue material properties, a review of radiation force-based methods that have been proposed for elasticity imaging, and a discussion of current research and commercial realizations of radiation force-based elasticity imaging technologies. PMID:22419986

  2. Acoustic Radiation Force Elasticity Imaging in Diagnostic Ultrasound

    PubMed Central

    Doherty, Joshua R.; Trahey, Gregg E.; Nightingale, Kathryn R.; Palmeri, Mark L.

    2013-01-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological conditions, and with the ability to image these features in vivo, elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed non-invasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force and an overview of the various acoustic radiation force elasticity imaging methods. PMID:23549529

  3. Image/video encryption using single shot digital holography

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Tang, Chen; Zhu, Xinjun; Li, Biyuan; Wang, Linlin; Yan, Xiusheng

    2015-05-01

    We propose a method for image/video encryption that combines double random-phase encoding in the Fresnel domain with single-shot digital holography. In this method, a complex object field can be reconstructed from only a single-frame hologram using a constrained optimization method. The system, requiring neither multiple shots nor a Fourier lens, is simple and allows information to be encrypted dynamically. We test the proposed method on a computer-simulated image, a grayscale image, and a video in AVI format. We also investigate the quality of the decryption process and the performance against noise attacks. The experimental results demonstrate the performance of the method.
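The double random-phase encoding idea can be illustrated with a minimal sketch. Note this is a simplified Fourier-domain version for clarity; the paper works in the Fresnel domain and recovers the field from a single hologram via constrained optimization, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, phase1, phase2):
    # Classic double random-phase encoding, shown in the Fourier domain
    # for brevity: one random phase mask in the image plane, one in the
    # spectral plane.
    field = img * np.exp(2j * np.pi * phase1)
    spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * phase2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phase1, phase2):
    # Undo the spectral mask, then the image-plane mask.
    spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    field = np.fft.ifft2(spectrum)
    return np.abs(field * np.exp(-2j * np.pi * phase1))

img = rng.random((32, 32))
p1, p2 = rng.random((32, 32)), rng.random((32, 32))
cipher = drpe_encrypt(img, p1, p2)
recovered = drpe_decrypt(cipher, p1, p2)
print(np.allclose(recovered, img))  # True: encoding is exactly invertible
```

With both phase masks as keys, the ciphertext field is statistically noise-like, while decryption with the correct keys is exact up to floating-point error.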

  4. Indexing Film and Video Images for Storage and Retrieval.

    ERIC Educational Resources Information Center

    Turner, James

    1994-01-01

    Discussion of indexing needs for film and video images focuses on appropriate access points for the storage and retrieval of individual shots which have not yet been included in a production. A study at the National Film Board of Canada is described that investigated ways to index non-art images. (18 references) (LRW)

  5. Object classification and acoustic imaging with active sonar.

    PubMed

    Kelly, J G; Carpenter, R N; Tague, J A

    1992-04-01

    The theoretical underpinnings of underwater acoustic classification and imaging using high-frequency active sonar are studied. All essential components of practical classification systems are incorporated in a Bayesian theoretic framework. The optimum decision rules and array processing are presented and evaluated. A systematic performance evaluation methodology is derived. New results quantify the relationship between classifier performance and object geometry, acoustic imaging, and the accuracy of a priori knowledge infused into the processor.

  6. Nondestructive imaging of shallow buried objects using acoustic computed tomography.

    PubMed

    Younis, Waheed A; Stergiopoulos, Stergios; Havelock, David; Grodski, Julius

    2002-05-01

    The nondestructive three-dimensional acoustic tomography concept of the present investigation combines computerized tomography image reconstruction algorithms using acoustic diffracting waves together with depth information to produce a three-dimensional (3D) image of an underground section. The approach illuminates the underground area of interest with acoustic plane waves at frequencies of 200-3000 Hz. For each transmitted pulse, the reflected-refracted signals are received by a line array of acoustic sensors located at a point diametrically opposite the acoustic source line array. For a stratified underground medium and for a given depth, which is represented by a time delay in the received signal, a horizontal tomographic 2D image is reconstructed from the received projections. Integration of the depth-dependent sequence of cross-sectional reconstructed images provides a complete three-dimensional overview of the inspected terrain. The method has been tested with an experimental system that consists of a line array of four acoustic sources, providing plane waves, and a receiving line array of 32 acoustic sensors. The results indicate both the potential and the challenges facing the new methodology. Suggestions are made for improved performance, including an adaptive noise cancellation scheme and a numerical interpolation technique.
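The per-depth reconstruction step ("a horizontal tomographic 2D image is reconstructed from the received projections") rests on standard CT back-projection. Below is a minimal unfiltered parallel-beam sketch, not the authors' implementation; a practical system would filter the projections first.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Minimal unfiltered back-projection of parallel-beam projections.

    sinogram: (n_angles, n_detectors) array, one row of detector readings
    per view angle. Each projection is smeared back across the image
    along its view direction and the results are averaged.
    """
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - centre
    for proj, angle in zip(sinogram, np.radians(angles_deg)):
        # Detector coordinate of each pixel for this view angle.
        t = xs * np.cos(angle) + ys * np.sin(angle) + (proj.size - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, proj.size - 1)
        recon += proj[idx]
    return recon / len(angles_deg)

# A point reflector at the slice centre, seen from four angles:
sino = np.zeros((4, 33))
sino[:, 16] = 1.0
slice_img = backproject(sino, [0, 45, 90, 135], 33)
print(slice_img[16, 16] == slice_img.max())  # True: point lands at the centre
```

Stacking one such slice per time delay (i.e., per depth) yields the 3D overview the abstract describes.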

  7. Nondestructive imaging of shallow buried objects using acoustic computed tomography

    NASA Astrophysics Data System (ADS)

    Younis, Waheed A.; Stergiopoulos, Stergios; Havelock, David; Grodski, Julius

    2002-05-01

    The nondestructive three-dimensional acoustic tomography concept of the present investigation combines computerized tomography image reconstruction algorithms using acoustic diffracting waves together with depth information to produce a three-dimensional (3D) image of an underground section. The approach illuminates the underground area of interest with acoustic plane waves at frequencies of 200-3000 Hz. For each transmitted pulse, the reflected-refracted signals are received by a line array of acoustic sensors located at a point diametrically opposite the acoustic source line array. For a stratified underground medium and for a given depth, which is represented by a time delay in the received signal, a horizontal tomographic 2D image is reconstructed from the received projections. Integration of the depth-dependent sequence of cross-sectional reconstructed images provides a complete three-dimensional overview of the inspected terrain. The method has been tested with an experimental system that consists of a line array of four acoustic sources, providing plane waves, and a receiving line array of 32 acoustic sensors. The results indicate both the potential and the challenges facing the new methodology. Suggestions are made for improved performance, including an adaptive noise cancellation scheme and a numerical interpolation technique.

  8. Image quality of up-converted 2D video from frame-compatible 3D video

    NASA Astrophysics Data System (ADS)

    Speranza, Filippo; Tam, Wa James; Vázquez, Carlos; Renaud, Ronald; Blanchfield, Phil

    2011-03-01

    In the stereoscopic frame-compatible format, the separate high-definition left and high-definition right views are reduced in resolution and packed to fit within the same video frame as a conventional two-dimensional high-definition signal. This format has been suggested for 3DTV since it does not require additional transmission bandwidth and entails only small changes to the existing broadcasting infrastructure. In some instances, the frame-compatible format might be used to deliver both 2D and 3D services, e.g., for over-the-air television services. In those cases, the video quality of the 2D service is bound to decrease since the 2D signal will have to be generated by up-converting one of the two views. In this study, we investigated such loss by measuring the perceptual image quality of 1080i and 720p up-converted video as compared to that of full resolution original 2D video. The video was encoded with either a MPEG-2 or a H.264/AVC codec at different bit rates and presented for viewing with either no polarized glasses (2D viewing mode) or with polarized glasses (3D viewing mode). The results confirmed a loss of video quality of the 2D video up-converted material. The loss due to the sampling processes inherent to the frame-compatible format was rather small for both 1080i and 720p video formats; the loss became more substantial with encoding, particularly for MPEG-2 encoding. The 3D viewing mode provided higher quality ratings, possibly because the visibility of the degradations was reduced.
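The frame-compatible packing and 2D up-conversion the study evaluates can be sketched as follows. This assumes a side-by-side packing and uses naive column decimation and repetition; real broadcast chains use proper anti-alias and interpolation filters, so this only illustrates where the resolution loss comes from.

```python
import numpy as np

def pack_side_by_side(left, right):
    # Halve the horizontal resolution of each view and pack both into
    # one frame of the original width (side-by-side frame-compatible).
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def upconvert_2d(packed):
    # Recover a 2D signal from the left half by repeating columns;
    # a broadcaster would use a proper interpolation filter instead.
    half = packed[:, :packed.shape[1] // 2]
    return np.repeat(half, 2, axis=1)

left = np.arange(12.0).reshape(2, 6)
right = left + 100
frame = pack_side_by_side(left, right)
print(frame.shape)          # (2, 6): same size as one full view
restored = upconvert_2d(frame)
print(restored.shape)       # (2, 6), but carrying only half the detail
```

The up-converted 2D frame has the original dimensions but only half the horizontal detail, which is the quality loss the subjective tests quantify.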

  9. Video rate multispectral imaging for camouflaged target detection

    NASA Astrophysics Data System (ADS)

    Henry, Sam

    2015-05-01

    The ability to detect and identify camouflaged targets is critical in combat environments. Hyperspectral and multispectral cameras allow a soldier to identify threats more effectively than traditional RGB cameras, owing to both increased color resolution and the ability to see beyond visible light. Static imagers have proven successful; however, the development of video rate imagers allows for continuous real-time target identification and tracking. This paper presents an analysis of existing anomaly detection algorithms and how they can be adapted to video rates, and presents a general-purpose semisupervised real-time anomaly detection algorithm using multiple frame sampling.

  10. Acoustic force mapping in a hybrid acoustic-optical micromanipulation device supporting high resolution optical imaging.

    PubMed

    Thalhammer, Gregor; McDougall, Craig; MacDonald, Michael Peter; Ritsch-Marte, Monika

    2016-04-21

    Many applications in the life sciences demand non-contact manipulation tools for forceful but nevertheless delicate handling of various types of sample. Moreover, the system should support high-resolution optical imaging. Here we present a hybrid acoustic/optical manipulation system which utilizes a transparent transducer, making it compatible with high-NA imaging in a microfluidic environment. The powerful acoustic trapping within a layered resonator, which is suitable for highly parallel particle handling, is complemented by the flexibility and selectivity of holographic optical tweezers, with the specimens under high-quality optical monitoring at all times. The dual acoustic/optical nature of the system lends itself to optically measuring the exact acoustic force map, by means of direct force measurements on an optically trapped particle. For applications with (ultra-)high demands on the precision of the force measurements, the position of the objective used for the high-NA imaging may have a significant influence on the acoustic force map in the probe chamber. We have characterized this influence experimentally, and the findings were confirmed by model simulations. We show that it is possible to design the chamber and to choose the operating point in such a way as to avoid perturbations due to the objective lens. Moreover, we found that measuring the electrical impedance of the transducer provides an easy indicator for the acoustic resonances. PMID:27025398

  11. Calibration method for video and radiation imagers

    DOEpatents

    Cunningham, Mark F.; Fabris, Lorenzo; Gee, Timothy F.; Goddard, Jr., James S.; Karnowski, Thomas P.; Ziock, Klaus-peter

    2011-07-05

    The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least-squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
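The least-squares calibration between imager pixel coordinates and real-world coordinates can be sketched with a linear fit. The marker positions below are hypothetical illustration data, not values from the patent.

```python
import numpy as np

# Hypothetical calibration data: imager pixel x-coordinates of markers
# against their measured real-world x-positions (metres).
pixel_x = np.array([120.0, 340.0, 560.0, 780.0])
world_x = np.array([1.0, 2.1, 3.2, 4.3])

# Least-squares fit of a linear world = slope * pixel + intercept mapping.
slope, intercept = np.polyfit(pixel_x, world_x, 1)

def world_to_pixel(x_world):
    # Invert the fit to predict the imager pixel for a vehicle position.
    return (x_world - intercept) / slope

print(round(world_to_pixel(2.1)))  # 340, the pixel of the marker at 2.1 m
```

In the patent's setting, the visible-light camera supplies the vehicle's real-world coordinates and this mapping locates the corresponding radiation-imager pixel.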

  12. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  13. Investigation of an acoustical holography system for real-time imaging

    NASA Astrophysics Data System (ADS)

    Fecht, Barbara A.; Andre, Michael P.; Garlick, George F.; Shelby, Ronald L.; Shelby, Jerod O.; Lehman, Constance D.

    1998-07-01

    A new prototype imaging system based on ultrasound transmission through the object of interest -- acoustical holography -- was developed which incorporates significant improvements in acoustical and optical design. This system is being evaluated for potential clinical application in the musculoskeletal system, interventional radiology, pediatrics, monitoring of tumor ablation, vascular imaging, and breast imaging. The system's limiting resolution was estimated using a line-pair target with decreasing line thickness and equal separation. For a swept-frequency beam from 2.6-3.0 MHz, the minimum resolution was 0.5 lp/mm. Apatite crystals were suspended in castor oil to approximate breast microcalcifications; crystals from 0.425-1.18 mm in diameter were well resolved in the acoustic zoom mode. Needle visibility was examined with both a 14-gauge biopsy needle and a 0.6 mm needle. The needle tip was clearly visible throughout the dynamic imaging sequence as it was slowly inserted into an RMI tissue-equivalent breast biopsy phantom. A selection of human images was acquired in several volunteers: a 25-year-old female volunteer with normal breast tissue, a lateral view of the elbow joint showing muscle fascia and tendon insertions, and the superficial vessels of the forearm. Real-time video images of these studies will be presented. In all of these studies, conventional sonography was used for comparison. These preliminary investigations with the new prototype acoustical holography system showed favorable results in comparison to state-of-the-art pulse-echo ultrasound and demonstrate it to be suitable for further clinical study. The new patient interfaces will facilitate orthopedic soft tissue evaluation, study of superficial vascular structures, and potentially breast imaging.

  14. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.

  15. Acoustic imaging in a water filled metallic pipe

    SciTech Connect

    Kolbe, W.F.; Turko, B.T.; Leskovar, B.

    1984-04-01

    A method is described for imaging the interior of a water-filled metallic pipe using acoustical techniques. The apparatus consists of an array of 20 acoustic transducers mounted circumferentially around the pipe. Each transducer is pulsed in sequence, and the echoes resulting from bubbles in the interior are digitized and processed by a computer to generate an image. The electronic control and digitizing system and the software processing of the echo signals are described. The performance of the apparatus is illustrated by imaging simulated bubbles consisting of thin-walled glass spheres suspended in the pipe.
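The basic pulse-echo geometry behind such an array maps each echo delay to a radial distance from the firing transducer. A minimal sketch (the 1480 m/s sound speed is a nominal textbook value for water, not a figure from the paper):

```python
SPEED_OF_SOUND_WATER = 1480.0  # m/s, nominal value for water at ~20 C

def echo_to_range(echo_delay_s, speed=SPEED_OF_SOUND_WATER):
    """Radial distance of a reflector (e.g. a bubble) from a transducer.

    The echo travels out and back, so the range is half the round-trip
    path length.
    """
    return speed * echo_delay_s / 2.0

# An echo arriving 100 microseconds after the transmit pulse:
print(echo_to_range(100e-6))  # 0.074 m from the transducer face
```

Combining such ranges from the 20 circumferential transducers localizes each reflector within the pipe cross-section.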

  16. Time-Reversal Acoustics and Maximum-Entropy Imaging

    SciTech Connect

    Berryman, J G

    2001-08-22

    Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.

  17. Edge adaptive intra field de-interlacing of video images

    NASA Astrophysics Data System (ADS)

    Lachine, Vladimir; Smith, Gregory; Lee, Louie

    2013-02-01

    Expanding an image by an arbitrary scale factor, thereby creating an enlarged image, is a crucial image processing operation. De-interlacing is an example of such an operation, where a video field is enlarged in the vertical direction by a 1-to-2 scale factor. The most advanced de-interlacing algorithms use several consecutive input fields to generate one output frame. In order to save hardware resources in video processors, missing lines in each field may be generated without reference to the other fields. Line doubling, known as "bobbing," is the simplest intra-field de-interlacing method; however, it may generate visual artifacts. For example, interpolation of an inserted line from a few neighboring lines by a vertical filter may produce visual artifacts such as "jaggies." In this work we present an edge-adaptive image up-scaling and/or enhancement algorithm which can produce "jaggies"-free video output frames. As a first step, an edge and its parameters are detected at each interpolated pixel from the gradient squared tensor based on local signal variances. Then, according to the edge parameters, including orientation, anisotropy, and variance strength, the algorithm determines the footprint and frequency response of the two-dimensional interpolation filter for the output pixel. The filter's coefficients are defined by the edge parameters, so that the quality of the output frame is controlled by local content. The proposed method may be used for image enlargement or enhancement (for example, anti-aliasing without resampling). It has been implemented in hardware in a video display processor for intra-field de-interlacing of video images.
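The gradient squared (structure) tensor step that steers the interpolation filter can be sketched as follows. This is a generic textbook formulation, not the authors' hardware implementation; the local window handling and smoothing are simplified.

```python
import numpy as np

def edge_orientation(patch):
    """Dominant edge orientation of a patch from the gradient squared tensor.

    Returns the angle (radians) of the dominant gradient direction, the
    kind of quantity an edge-adaptive de-interlacer uses to steer its
    interpolation filter along, rather than across, an edge.
    """
    gy, gx = np.gradient(patch.astype(float))
    # Entries of the 2x2 gradient squared (structure) tensor, summed
    # over the local window.
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    # Closed-form eigen-direction of a symmetric 2x2 matrix.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# A purely vertical edge: intensity varies only along x.
patch = np.tile(np.arange(8.0), (8, 1))
angle = edge_orientation(patch)
print(round(np.degrees(angle), 1))  # 0.0 -> gradient points along x
```

The same tensor also yields an anisotropy measure (from its eigenvalue ratio), which the paper uses together with orientation and variance strength to shape the filter footprint.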

  18. Acoustic Radiation Force Impulse (ARFI) Imaging: a Review

    PubMed Central

    Nightingale, Kathy

    2012-01-01

    Acoustic radiation force-based elasticity imaging methods are under investigation by many groups. These methods differ from traditional ultrasonic elasticity imaging methods in that they do not require compression of the transducer and are thus expected to be less operator dependent. Methods have been developed that utilize impulsive (i.e., < 1 ms), harmonic (pulsed), and steady-state radiation force excitations. The work discussed herein utilizes impulsive methods, for which two imaging approaches have been pursued: 1) monitoring the tissue response within the radiation force region of excitation (ROE) and generating images of relative differences in tissue stiffness (Acoustic Radiation Force Impulse (ARFI) imaging); and 2) monitoring the speed of shear wave propagation away from the ROE to quantify tissue stiffness (Shear Wave Elasticity Imaging (SWEI)). For these methods, a single ultrasound transducer on a commercial ultrasound system can be used both to generate acoustic radiation force in tissue and to monitor the tissue displacement response. The response of tissue to this transient excitation is complicated and depends upon tissue geometry, radiation force field geometry, and tissue mechanical and acoustic properties. Higher shear wave speeds and smaller displacements are associated with stiffer tissues, and slower shear wave speeds and larger displacements occur with more compliant tissues. ARFI images have spatial resolution comparable to that of B-mode, often with greater contrast, providing matched, adjunctive information. SWEI images provide quantitative information about tissue stiffness, typically with lower spatial resolution. A review of these methods and examples of clinical applications are presented herein. PMID:22545033
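The SWEI quantification step, turning a measured shear wave speed into a stiffness estimate, commonly uses the relation mu = rho * c^2 for a linear, isotropic, elastic medium. A minimal sketch with an assumed tissue density (the ~1000 kg/m^3 value and the example speeds are illustrative, not from this review):

```python
def shear_modulus_kpa(shear_speed_m_s, density_kg_m3=1000.0):
    """Shear modulus mu = rho * c^2 from shear wave speed, in kPa.

    Assumes a linear, isotropic, elastic medium; soft tissue density is
    commonly approximated as ~1000 kg/m^3. For a nearly incompressible
    tissue, Young's modulus is then E ~= 3 * mu.
    """
    return density_kg_m3 * shear_speed_m_s ** 2 / 1000.0

# A shear wave at 2 m/s versus one at 4 m/s:
print(shear_modulus_kpa(2.0))  # 4.0 kPa
print(shear_modulus_kpa(4.0))  # 16.0 kPa -> stiffer tissue, faster wave
```

The quadratic dependence is why modest speed differences separate tissue stiffness classes so effectively.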

  19. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity and visual information fidelity (VIF) with statistical sensitivity, were proposed in view of the differences between reference and distorted frames at a pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies incorporating HVA have been proposed, but their methods for extracting visual attention are too complex for real-time realization. We take advantage of the phase spectrum of quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. We then propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding weights related to saliency features to the original IQA or VQA criteria. Experimental results show that our saliency-based strategy accords more closely with human subjective assessment than the original IQA or VQA methods, and does not take more time thanks to the fast PQFT algorithm.
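The phase-spectrum saliency idea can be illustrated in a grayscale, plain-FFT form. This is a simplification of PQFT, which operates on quaternion-valued colour and motion channels; the function name and the crude smoothing here are illustrative only.

```python
import numpy as np

def phase_spectrum_saliency(gray):
    """Saliency map from the phase spectrum of the Fourier transform.

    Keep only the phase of the spectrum, invert, and smooth the squared
    magnitude: small distinctive regions survive, smooth backgrounds
    are suppressed.
    """
    spectrum = np.fft.fft2(gray.astype(float))
    phase_only = np.exp(1j * np.angle(spectrum))
    recon = np.abs(np.fft.ifft2(phase_only)) ** 2
    # Cheap smoothing: average each pixel with its horizontal neighbours.
    return (recon + np.roll(recon, 1, axis=1) + np.roll(recon, -1, axis=1)) / 3.0

# A flat image with one small bright square:
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
sal = phase_spectrum_saliency(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
print(peak)  # the saliency peak falls on/near the bright square
```

In the paper's QA methods, such a map supplies per-pixel weights so that distortions in salient regions count more toward the quality score.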

  20. An investigation of dehazing effects on image and video coding.

    PubMed

    Gibson, Kristofor B; Võ, Dung T; Nguyen, Truong Q

    2012-02-01

    This paper investigates the effects of dehazing on image and video coding for surveillance systems. The goal is to achieve good dehazed images and videos at the receiver while sustaining low bitrates (using compression) in the transmission pipeline. First, this paper proposes a novel method for single-image dehazing, which is used for the investigation. It operates at a faster speed than current methods and can avoid halo effects by using the median operation. We then consider the effects of dehazing on compression by investigating the coding artifacts and motion estimation when a dehazing method is applied before or after compression. We conclude that better dehazing performance, with fewer artifacts and better coding efficiency, is achieved when dehazing is applied before compression. Simulations with Joint Photographic Experts Group (JPEG) images, in addition to subjective and objective tests with H.264 compressed sequences, validate our conclusion. PMID:21896391
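Single-image dehazing is usually posed through the atmospheric scattering model I = J*t + A*(1 - t), where J is the scene radiance, A the airlight, and t the transmission. A minimal inversion sketch with the transmission map assumed known (the paper's actual contribution, a fast median-based estimation of that map, is not reproduced here):

```python
import numpy as np

def dehaze(hazy, airlight, transmission, t_min=0.1):
    # Invert the standard haze model I = J*t + A*(1 - t) for the scene
    # radiance J, clamping t away from zero to limit noise amplification.
    t = np.maximum(transmission, t_min)
    return (hazy - airlight) / t + airlight

# Synthesize haze over a known scene, then invert it exactly.
scene = np.array([[0.2, 0.8], [0.5, 0.1]])
t = np.full_like(scene, 0.6)
hazy = scene * t + 0.9 * (1 - t)
print(np.allclose(dehaze(hazy, 0.9, t), scene))  # True
```

Because this inversion amplifies whatever the codec did to the hazy signal, the ordering question (dehaze before or after compression) that the paper studies directly affects the visibility of coding artifacts.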

  1. High-sensitivity hyperspectral imager for biomedical video diagnostic applications

    NASA Astrophysics Data System (ADS)

    Leitner, Raimund; Arnold, Thomas; De Biasio, Martin

    2010-04-01

    Video endoscopy allows physicians to visually inspect inner regions of the human body using a camera and only minimally invasive optical instruments. It has become an everyday routine in clinics all over the world. Recently, a technological shift was made to increase the resolution from PAL/NTSC to HDTV. But despite a vast literature on in-vivo and in-vitro experiments with multi-spectral point and imaging instruments suggesting that a wealth of information for diagnostic overlays is available in the visible spectrum, the technological evolution from colour to hyper-spectral video endoscopy is overdue. Two approaches (NBI, OBI) have tried to increase the contrast for better visualisation by using more than three wavelengths, but controversial discussions about the real benefit of contrast enhancement alone motivated a more comprehensive approach using the entire spectrum and pattern recognition algorithms. Until recently, hyper-spectral equipment was too slow to acquire a multi-spectral image stack at reasonable video rates, rendering video endoscopy applications impossible. The availability of fast and versatile tunable filters with switching times below 50 microseconds has now made instrumentation for hyper-spectral video endoscopes feasible. This paper describes a demonstrator for hyper-spectral video endoscopy and the results of clinical measurements using this demonstrator after otolaryngoscopic investigations and thorax surgeries. The application investigated here is the detection of dysplastic tissue, although hyper-spectral video endoscopy is of course not limited to cancer detection; other applications include the detection of dysplastic tissue or polyps in the colon or the gastrointestinal tract.

  2. Video polarimetry: a new imaging technique in atmospheric science

    SciTech Connect

    Prosch, T.; Hennings, D.; Raschke, E.

    1983-05-01

    An imaging polarimeter has been built to study the polarization of solar radiation (lambda = 550 nm) scattered and reflected from the natural environment. The instrument generates false color images as multiparameter display of the degree of polarization, azimuth of polarization, and the radiance. These video signals can be digitized into a computer-compatible format. As an example of application, the polarization properties of light reflected from a lake and its environment are discussed here.

  3. Data-Driven Affective Filtering for Images and Videos.

    PubMed

    Li, Teng; Ni, Bingbing; Xu, Mengdi; Wang, Meng; Gao, Qingwei; Yan, Shuicheng

    2015-10-01

    In this paper, a novel system is developed for synthesizing user-specified emotions onto arbitrary input images or videos. Rather than defining the visual affective model from empirical knowledge, a data-driven learning framework is proposed to extract the emotion-related knowledge from a set of emotion-annotated images. In a divide-and-conquer manner, the images are clustered into several emotion-specific scene subgroups for model learning. The visual affection is modeled with Gaussian mixture models based on color features of local image patches. For the purpose of affective filtering, the feature distribution of the target is aligned to the statistical model constructed from the emotion-specific scene subgroup through a piecewise linear transformation. The transformation is derived through a learning algorithm developed with a regularization term enforcing spatial smoothness, edge preservation, and temporal smoothness for the derived image or video transformation. Optimization of the objective function is sought via a standard nonlinear method. Extensive experimental results and user studies demonstrate that the proposed affective filtering framework can yield effective and natural effects for images and videos. PMID:25675469
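
    To make the distribution-alignment idea concrete, the sketch below shows its simplest special case: a per-channel linear (mean/std) transfer toward target colour statistics. The paper's actual model is a GMM over local patches with a learned piecewise linear transform; the target statistics here are made up.

```python
import numpy as np

def align_color_stats(src, target_mean, target_std):
    """src: (H, W, 3) image in [0, 1]; returns src with each channel's
    mean and standard deviation moved to the target statistics."""
    out = np.empty_like(src, dtype=float)
    for c in range(3):
        m = src[..., c].mean()
        s = src[..., c].std() + 1e-12
        out[..., c] = (src[..., c] - m) / s * target_std[c] + target_mean[c]
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
frame = rng.random((32, 32, 3))
# Shift the frame toward illustrative "warm" statistics.
warm = align_color_stats(frame, target_mean=(0.55, 0.45, 0.40),
                         target_std=(0.05, 0.05, 0.05))
```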

  4. Acoustic-optical imaging without immersion

    NASA Technical Reports Server (NTRS)

    Liu, H.

    1979-01-01

    A system using the membranous end wall of a Bragg cell to separate the test specimen from the acoustic transmission medium operates in real time and uses readily available optical components. The system can be easily set up and maintained by people with little or no training in holography.

  5. Quantitative Determination of Lateral Mode Dispersion in Film Bulk Acoustic Resonators through Laser Acoustic Imaging

    SciTech Connect

    Ken Telschow; John D. Larson III

    2006-10-01

    Film bulk acoustic resonators are useful for many signal processing applications, and detailed knowledge of their operating properties is needed to optimize their design for specific applications. The finite size of these resonators precludes their use in single acoustic modes; rather, multiple wave modes, such as lateral wave modes, are always excited concurrently. To determine the contributions of these modes, we have been using a newly developed full-field laser acoustic imaging approach to directly measure their amplitude and phase throughout the resonator. This paper describes new results comparing modeling of both elastic and piezoelectric effects in the active material with imaging measurements of all excited modes. Fourier transformation of the acoustic amplitude and phase displacement images provides a quantitative determination of excited mode amplitude and wavenumber at any frequency. Images combined at several frequencies form a direct visualization of lateral mode excitation and dispersion for the device under test, allowing mode identification and comparison with predicted operational properties. Discussion and analysis are presented for modes near the first longitudinal thickness resonance (~900 MHz) in an AlN thin film resonator. Plate wave modeling, taking account of material crystalline orientation, elastic and piezoelectric properties, and overlayer metallic films, is discussed in relation to direct image measurements.

  6. Techniques for estimating blood pressure variation using video images.

    PubMed

    Sugita, Norihiro; Obara, Kazuma; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Homma, Noriyasu

    2015-01-01

    It is important to know about a sudden blood pressure change that occurs in everyday life and may pose a danger to human health. However, monitoring the blood pressure variation in daily life is difficult because a bulky and expensive sensor is needed to measure the blood pressure continuously. In this study, a new non-contact method is proposed to estimate the blood pressure variation using video images. In this method, the pulse propagation time difference or instantaneous phase difference is calculated between two pulse waves obtained from different parts of a subject's body captured by a video camera. The forehead, left cheek, and right hand are selected as regions to obtain pulse waves. Both the pulse propagation time difference and instantaneous phase difference were calculated from the video images of 20 healthy subjects performing the Valsalva maneuver. These indices are considered to have a negative correlation with the blood pressure variation because they approximate the pulse transit time obtained from a photoplethysmograph. However, the experimental results showed that the correlation coefficients between the blood pressure and the proposed indices were approximately 0.6 for the pulse wave obtained from the right hand. This result is considered to be due to the difference in the transmission depth into the skin between the green and infrared light used as light sources for the video image and conventional photoplethysmogram, respectively. In addition, the difference in the innervation of the face and hand may be related to the results.
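
    The pulse propagation time difference described above can be estimated by cross-correlating the two video-derived pulse waves. The sketch below uses synthetic signals and an assumed 100 Hz frame rate; it is not the paper's exact processing pipeline.

```python
import numpy as np

def time_delay(sig_a, sig_b, fs):
    """Delay of sig_b relative to sig_a, in seconds (positive = b lags a),
    from the peak of the full cross-correlation."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = corr.argmax() - (len(a) - 1)
    return lag / fs

fs = 100.0                                # 100 Hz video-derived signal
t = np.arange(0, 5, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)       # ~72 bpm synthetic pulse wave
delayed = np.roll(pulse, 8)               # 80 ms propagation delay
d = time_delay(pulse, delayed, fs)        # -> 0.08 s
```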

  7. Face retrieval in video sequences using Web images database

    NASA Astrophysics Data System (ADS)

    Leo, M.; Battisti, F.; Carli, M.; Neri, A.

    2015-03-01

    Face processing techniques for automatic recognition in broadcast video attract the research interest because of its value in applications, such as video indexing, retrieval, and summarization. In multimedia press review, the automatic annotation of broadcasting news programs is a challenging task because people can appear with large appearance variations such as hair styles, illumination conditions and poses that make the comparison between similar faces more difficult. In this paper a technique for automatic face identification in TV broadcasting programs based on a gallery of faces downloaded from Web is proposed. The approach is based on a joint use of Scale Invariant Feature Transform descriptor and Eigenfaces-based algorithms and it has been tested on video sequences using a database of images acquired starting from a web search. Experimental results show that the joint use of these two approaches improves the recognition rate in case of use Standard Definition (SD) and High Definition (HD) standards.

  8. Laser Imaging of Airborne Acoustic Emission by Nonlinear Defects

    NASA Astrophysics Data System (ADS)

    Solodov, Igor; Döring, Daniel; Busse, Gerd

    2008-06-01

    Strongly nonlinear vibrations of near-surface fractured defects driven by an elastic wave radiate acoustic energy into adjacent air in a wide frequency range. The variations of pressure in the emitted airborne waves change the refractive index of air thus providing an acoustooptic interaction with a collimated laser beam. Such an air-coupled vibrometry (ACV) is proposed for detecting and imaging of acoustic radiation of nonlinear spectral components by cracked defects. The photoelastic relation in air is used to derive induced phase modulation of laser light in the heterodyne interferometer setup. The sensitivity of the scanning ACV to different spatial components of the acoustic radiation is analyzed. The animated airborne emission patterns are visualized for the higher harmonic and frequency mixing fields radiated by planar defects. The results confirm a high localization of the nonlinear acoustic emission around the defects and complicated directivity patterns appreciably different from those observed for fundamental frequencies.

  9. Acoustic angiography: a new imaging modality for assessing microvasculature architecture.

    PubMed

    Gessner, Ryan C; Frederick, C Brandon; Foster, F Stuart; Dayton, Paul A

    2013-01-01

    The purpose of this paper is to provide the biomedical imaging community with details of a new high resolution contrast imaging approach referred to as "acoustic angiography." Through the use of dual-frequency ultrasound transducer technology, images acquired with this approach possess both high resolution and a high contrast-to-tissue ratio, which enables the visualization of microvascular architecture without significant contribution from background tissues. Additionally, volumetric vessel-tissue integration can be visualized by using b-mode overlays acquired with the same probe. We present a brief technical overview of how the images are acquired, followed by several examples of images of both healthy and diseased tissue volumes. 3D images from alternate modalities often used in preclinical imaging, contrast-enhanced micro-CT and photoacoustics, are also included to provide a perspective on how acoustic angiography has qualitatively similar capabilities to these other techniques. These preliminary images provide visually compelling evidence to suggest that acoustic angiography may serve as a powerful new tool in preclinical and future clinical imaging. PMID:23997762

  10. Optimal flushing agents for integrated optical and acoustic imaging systems

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, Kirk; Zhou, Qifa; Patel, Pranav; Chen, Zhongping

    2015-05-01

    An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clearer images. To our knowledge, no research has been performed to find the ideal flushing agent for combined optical and acoustic imaging techniques. We selected three solutions as potential flushing agents for their image-enhancing effects: mannitol, dextran, and iohexol. Testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits. We found that a high concentration of dextran was the most useful for simultaneous intravascular ultrasound and optical coherence tomography imaging.

  11. Optimal flushing agents for integrated optical and acoustic imaging systems

    PubMed Central

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, Kirk; Zhou, Qifa; Patel, Pranav; Chen, Zhongping

    2015-01-01

    Abstract. An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clearer images. To our knowledge, no research has been performed to find the ideal flushing agent for combined optical and acoustic imaging techniques. We selected three solutions as potential flushing agents for their image-enhancing effects: mannitol, dextran, and iohexol. Testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits. We found that a high concentration of dextran was the most useful for simultaneous intravascular ultrasound and optical coherence tomography imaging. PMID:25985096

  12. Epipolar geometry of opti-acoustic stereo imaging.

    PubMed

    Negahdaripour, Shahriar

    2007-10-01

    Optical and acoustic cameras are suitable imaging systems to inspect underwater structures, both in regular maintenance and security operations. Despite high resolution, optical systems have limited visibility range when deployed in turbid waters. In contrast, the new generation of high-frequency (MHz) acoustic cameras can provide images with enhanced target details in highly turbid waters, though their range is reduced by one to two orders of magnitude compared to traditional low-/mid-frequency (tens to hundreds of kHz) sonar systems. It is conceivable that an effective inspection strategy is the deployment of both optical and acoustic cameras on a submersible platform, to enable target imaging in a range of turbidity conditions. Under this scenario and where visibility allows, registration of the images from both cameras arranged in binocular stereo configuration provides valuable scene information that cannot be readily recovered from each sensor alone. We explore and derive the constraint equations for the epipolar geometry and stereo triangulation in utilizing these two sensing modalities with different projection models. Theoretical results supported by computer simulations show that an opti-acoustic stereo imaging system outperforms traditional binocular vision with optical cameras, particularly for increasing target distance and (or) turbidity.
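
    One way to see why the two modalities complement each other: the optical camera constrains the target to a ray, while the acoustic camera supplies a range. The sketch below intersects the two constraints; the geometry and frame conventions are illustrative, not the paper's exact formulation.

```python
import numpy as np

def triangulate(ray_dir, sonar_pos, sonar_range):
    """Intersect the optical ray p = t * d (camera at origin) with the
    sphere ||p - s|| = r measured by the sonar. Returns the nearer point."""
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    s = np.asarray(sonar_pos, float)
    # ||t d - s||^2 = r^2  ->  t^2 - 2 t (d . s) + ||s||^2 - r^2 = 0
    b = d @ s
    c = s @ s - sonar_range**2
    disc = b * b - c
    if disc < 0:
        return None                    # measurements inconsistent
    t = b - np.sqrt(disc)              # nearer intersection along the ray
    if t < 0:
        t = b + np.sqrt(disc)
    return t * d

# Target at (0, 0, 10); sonar 1 m to the right of the optical camera.
p = triangulate([0, 0, 1], [1.0, 0, 0], np.sqrt(101.0))
# p -> [0, 0, 10]
```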

  13. Application of acoustic reflection tomography to sonar imaging.

    PubMed

    Ferguson, Brian G; Wyber, Ron J

    2005-05-01

    Computer-aided tomography is a technique for providing a two-dimensional cross-sectional view of a three-dimensional object through the digital processing of many one-dimensional views (or projections) taken at different look directions. In acoustic reflection tomography, insonifying the object and then recording the backscattered signal provides the projection information for a given look direction (or aspect angle). Processing the projection information for all possible aspect angles enables an image to be reconstructed that represents the two-dimensional spatial distribution of the object's acoustic reflectivity function when projected on the imaging plane. The shape of an idealized object, which is an elliptical cylinder, is reconstructed by applying standard backprojection, Radon transform inversion (using both convolution and filtered backprojections), and direct Fourier inversion to simulated projection data. The relative merits of the various reconstruction algorithms are assessed and the resulting shape estimates compared. For bandpass sonar data, however, the wave number components of the acoustic reflectivity function that are outside the passband are absent. This leads to the consideration of image reconstruction for bandpass data. Tomographic image reconstruction is applied to real data collected with an ultra-wideband sonar transducer to form high-resolution acoustic images of various underwater objects when the sonar and object are widely separated.
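
    A minimal sketch of the backprojection step discussed above: each 1-D projection value at range u = x cos(theta) + y sin(theta) is smeared back across the image over all aspect angles. The filtering step that distinguishes filtered backprojection, and the bandpass considerations the record raises, are omitted for brevity.

```python
import numpy as np

def backproject(projections, angles, n):
    """projections: (A, n) array of 1-D views; angles: (A,) radians.
    Returns an (n, n) unfiltered backprojection on a [-1, 1] grid."""
    coords = np.linspace(-1.0, 1.0, n)
    x, y = np.meshgrid(coords, coords)
    u_axis = np.linspace(-1.0, 1.0, n)
    image = np.zeros((n, n))
    for proj, theta in zip(projections, angles):
        u = x * np.cos(theta) + y * np.sin(theta)   # range of each pixel
        image += np.interp(u, u_axis, proj)         # smear the view back
    return image / len(angles)

# Toy check: a point scatterer at the origin reconstructs to the centre.
n_px = 33
angles = np.linspace(0.0, np.pi, 60, endpoint=False)
proj = np.zeros((60, n_px))
proj[:, n_px // 2] = 1.0
recon = backproject(proj, angles, n_px)
```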

  14. Biased lineup instructions and face identification from video images.

    PubMed

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video. PMID:18318406

  15. Multiple 2D video/3D medical image registration algorithm

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    2000-06-01

    In this paper we propose a novel method to register at least two video images to a 3D surface model. The potential applications of such a registration method could be in image guided surgery, high precision radiotherapy, robotics or computer vision. Registration is performed by optimizing a similarity measure with respect to the pose parameters. The similarity measure is based on 'photo-consistency' and computes, for each surface point, how consistent the corresponding video image information in each view is with a lighting model. We took four video views of a volunteer's face, and used an independent method to reconstruct a surface that was intrinsically registered to the four views. In addition, we extracted a skin surface from the volunteer's MR scan. The surfaces were misregistered from a gold standard pose and our algorithm was used to register both types of surfaces to the video images. For the reconstructed surface, the mean 3D error was 1.53 mm. For the MR surface, the standard deviation of the pose parameters after registration ranged from 0.12 to 0.70 mm and degrees. The algorithm is accurate, precise, and robust.
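
    The photo-consistency idea can be sketched very simply: a correctly registered surface point should project to similar image intensities in every view (under a simplified Lambertian assumption), so the variance across views is a natural cost. This toy measure stands in for the paper's full lighting model.

```python
import numpy as np

def photo_consistency(samples):
    """samples: (P, V) intensities of P surface points seen in V views.
    Lower is better; minimise this over the pose parameters."""
    return np.mean(np.var(samples, axis=1))

# Two surface points seen in three views: registered vs. misregistered.
aligned = np.array([[0.80, 0.81, 0.79],
                    [0.30, 0.30, 0.31]])
shifted = np.array([[0.80, 0.20, 0.50],
                    [0.30, 0.90, 0.60]])
# photo_consistency(aligned) is far smaller than photo_consistency(shifted)
```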

  16. Imaging and detection of mines from acoustic measurements

    NASA Astrophysics Data System (ADS)

    Witten, Alan J.; DiMarzio, Charles A.; Li, Wen; McKnight, Stephen W.

    1999-08-01

    A laboratory-scale acoustic experiment is described in which a buried target, a hockey puck cut in half, is shallowly buried in a sand box. To avoid the need for source and receiver coupling to the host sand, an acoustic wave is generated in the subsurface by a pulsed laser suspended above the air-sand interface. Similarly, an airborne microphone is suspended above this interface and moved in unison with the laser. After some pre-processing of the data, reflections from the target, although weak, could clearly be identified. While the existence and location of the target can be determined by inspection of the data, its unique shape cannot. Since target discrimination is important in mine detection, a 3D imaging algorithm was applied to the acquired acoustic data. This algorithm yielded a reconstructed image in which the shape of the target was resolved.

  17. A hierarchical variational Bayesian approximation approach in acoustic imaging

    NASA Astrophysics Data System (ADS)

    Chu, Ning; Mohammad-Djafari, Ali; Gac, Nicolas; Picheral, José

    2015-01-01

    Acoustic imaging is a powerful technique for acoustic source localization and power reconstruction from limited noisy measurements at microphone sensors, but it inevitably confronts a severely ill-posed inverse problem that causes solution uncertainty. Recently, Bayesian inference methods using sparse priors have been effectively investigated. In this paper, we propose a hierarchical variational Bayesian approximation for robust acoustic imaging, exploring heavy-tailed Student-t priors to enforce source sparsity and to model non-Gaussian noise, respectively. Compared to conventional methods, the proposed approach achieves higher spatial resolution and a wider dynamic range of source powers on real data from an automobile wind tunnel.
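
    A brief note on why the Student-t prior suits this hierarchy: it is a Gaussian scale mixture, a normal with Gamma-distributed precision, which is exactly the kind of conditionally conjugate layering variational Bayes exploits. The sketch below samples that mixture; the degrees of freedom are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 3.0                    # degrees of freedom: small nu -> heavy tails
n = 100_000

# Hierarchy: precision lambda ~ Gamma(nu/2, rate nu/2), then x ~ N(0, 1/lambda).
precision = rng.gamma(nu / 2.0, 2.0 / nu, size=n)   # numpy takes shape, scale
samples = rng.normal(0.0, 1.0, size=n) / np.sqrt(precision)

# `samples` is Student-t with nu degrees of freedom: compared to a Gaussian
# it has more mass both near zero and far in the tails, which is what
# encodes sparsity when used as a prior on source powers.
```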

  18. Turbulent structure of concentration plumes through application of video imaging

    SciTech Connect

    Dabberdt, W.F.; Martin, C.; Hoydysh, W.G.; Holynskyj, O.

    1994-12-31

    Turbulent flows and dispersion in the presence of building wakes and terrain-induced local circulations are particularly difficult to simulate with numerical models or measure with conventional fluid modeling and ambient measurement techniques. The problem stems from the complexity of the kinematics and the difficulty in making representative concentration measurements. New laboratory video imaging techniques are able to overcome many of these limitations and are being applied to study a range of difficult problems. Here the authors apply "tomographic" video imaging techniques to the study of the turbulent structure of an ideal elevated plume and the relationship of short-period peak concentrations to long-period average values. A companion paper extends application of the technique to characterization of turbulent plume-concentration fields in the wake of a complex building configuration.

  19. Ideal flushing agents for integrated optical acoustic imaging systems

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, K. Kirk; Zhou, Qifa; Patel, Pranav M.; Chen, Zhongping

    2015-02-01

    An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and for guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clear images. To our knowledge, no research has been performed to find an ideal flushing agent that works for both optical and acoustic imaging techniques. We selected three solutions, mannitol, dextran, and iohexol, as flushing agents because of their image-enhancing effects and low toxicities. Quantitative testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits.

  20. Object segmentation based on guided layering from video image

    NASA Astrophysics Data System (ADS)

    Lin, Guangfeng; Zhu, Hong; Fan, Caixia; Zhang, Erhu

    2011-09-01

    When the object is similar to the background, it is difficult to segment a complete human body object from video images. To solve this problem, this paper proposes an object segmentation algorithm based on guided layering from video images. The algorithm adopts a structure that advances by degrees and comprises three parts. Each part constructs a different energy function in terms of the spatiotemporal information to maximize the posterior probability of the segmentation label. In part one, energy functions are established with the frame-difference information in the first and second layers; by optimization, the initial segmentation is solved in the first layer, and the amended segmentation is then obtained in the second layer. In part two, the energy function is built between frames, with the shape feature as the prior guide, to eliminate the interframe difference of the segmentation result. In part three, the segmentation results of the previous two parts are fused to suppress over-repaired segmentation and object shape variations between adjacent frames. Results of comparative experiments indicate that this algorithm can obtain the complete human body object even when the video image shows strong similarity between object and background.

  1. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders, such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format; extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best-known color coders, while the perceptual quality of its reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture coding system called Motion-SPECK, with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding, and low dynamic memory requirements. The intended applications of Motion-SPECK are high-end and emerging video applications such as high-quality digital video recording systems, Internet video, and medical imaging.

  2. Fish population dynamics revealed by instantaneous continental-shelf scale acoustic imaging

    NASA Astrophysics Data System (ADS)

    Ratilal, Purnima; Symonds, Deanelle; Makris, Nicholas C.; Nero, Redwood

    2005-04-01

    Video images of fish population densities over vast areas of the New Jersey continental shelf have been produced from acoustic data collected on a long-range bistatic sonar system during the Acoustic Clutter 2003 experiment. Areal fish population densities were obtained after correcting the acoustic data for two-way transmission loss modeled using the range-dependent parabolic equation, the spatially varying beampattern of the array, the source level, and the mean target strength per fish. The wide-area fish density images reveal the temporal evolution of fish school distributions, their migration, and shoal formation and fragmentation at 50 s intervals. Time series of the fish population within various density thresholds were made over the period of a day in an area containing millions of fish that at some instants formed a massive shoal extending over 12 km. The analysis shows that the fish population in the area can be decomposed into a stable ambient population from lower-fish-density regions and a time-varying population from higher-density regions. Estimates of the differential speed between the population centers of various shoals show that the average speed is on the order of that of a slow-moving surface vessel or submarine.
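
    The correction chain described above amounts to sonar-equation arithmetic in decibels: remove the source level and two-way transmission loss from the received level, then divide by the mean target strength per fish. The sketch below uses made-up numbers, not values from the experiment.

```python
def fish_per_area(received_db, source_db, tl_oneway_db, ts_per_fish_db, footprint_m2):
    """Areal fish density from a received level (dB), assuming incoherent
    summation of echoes from identical scatterers."""
    # Scattering level after removing source level and two-way loss:
    sv_db = received_db - source_db + 2.0 * tl_oneway_db
    n_fish = 10.0 ** ((sv_db - ts_per_fish_db) / 10.0)   # number of fish in the footprint
    return n_fish / footprint_m2

# Illustrative numbers: 220 dB source, 60 dB one-way loss, -40 dB target
# strength per fish, 100 m^2 beam footprint, 90 dB received level.
density = fish_per_area(90.0, 220.0, 60.0, -40.0, 100.0)
# density -> 10.0 fish per square metre (1000 fish in the footprint)
```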

  3. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in...

  4. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in...

  5. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in...

  6. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in...

  7. Computer simulation of orthognathic surgery with video imaging

    NASA Astrophysics Data System (ADS)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

    Patients with extreme jaw imbalance must often undergo operative corrections. The goal of therapy is to harmonize the stomatognathic system and aesthetically correct the facial profile. A new procedure is presented that supports the maxillo-facial surgeon in planning the operation and that also shows the patient the expected result of treatment through video images. Once an x-ray has been digitized, it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters and a new soft tissue profile is calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation, the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987 and 1991, 76 of 121 patients could be evaluated. The deviation in profile change varied between 0.0 and 1.6 mm. A side effect of the practical applications was an increase in patient compliance.

  8. OHIO INTERNATIONAL TELEVISION AND VIDEO FESTIVAL AWARD WINNERS FROM THE IMAGING TECHNOLOGY CENTER IT

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Ohio International Television and Video Festival award winners from the Imaging Technology Center (ITC): Kevin Burke, Bill Fletcher, Gary Nolan, and Emery Adanich, for the video entitled "Icing for Regional and Corporate Pilots."

  9. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In the nonblind watermarking systems, the need of the original host file in the watermark recovery operation makes an overhead over the system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, three-level discrete wavelet transform (DWT) is used as a watermark embedding/extracting domain, Arnold transform is used as a watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as: geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth. PMID:25587570
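
    The Arnold transform used here for watermark encryption can be sketched as the shear-and-wrap permutation below: an invertible scrambling of an N x N image that returns to the original after a period depending on N. The 8 x 8 toy watermark is illustrative; the record's full scheme pairs this with three-level DWT embedding.

```python
import numpy as np

def arnold(img, iterations=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod n."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the same number of Arnold iterations."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        unscrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                unscrambled[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = unscrambled
    return out

wm = np.arange(64).reshape(8, 8)          # toy 8x8 watermark
scrambled = arnold(wm, 3)                 # encrypt before embedding
recovered = arnold_inverse(scrambled, 3)  # decrypt after extraction
```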

  10. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In the nonblind watermarking systems, the need of the original host file in the watermark recovery operation makes an overhead over the system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, three-level discrete wavelet transform (DWT) is used as a watermark embedding/extracting domain, Arnold transform is used as a watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as: geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  12. Optical and opto-acoustic imaging.

    PubMed

    Ntziachristos, Vasilis; Razansky, Daniel

    2013-01-01

    Since the inception of the microscope, optical imaging has served biological discovery for more than four centuries. With the recent emergence of methods appropriate for in vivo staining, such as bioluminescence, fluorescent molecular probes, and proteins, as well as nanoparticle-based targeted agents, significant attention has shifted toward in vivo interrogation of different dynamic biological processes at the molecular level. This progress has been largely supported by the development of advanced optical tomographic imaging technologies suitable for obtaining volumetric visualization of biomarker distributions in small animals at a whole-body or whole-organ scale, an imaging frontier that is not accessible by the existing tissue-sectioning microscopic techniques due to intensive light scattering beyond the depth of a few hundred microns. Biomedical optoacoustics has also emerged in the recent decade as a powerful tool for high-resolution visualization of optical contrast, overcoming a variety of longstanding limitations imposed by light scattering in deep tissues. By detecting tiny sound vibrations, resulting from selective absorption of light at multiple wavelengths, multispectral optoacoustic tomography methods can now "hear color" in three dimensions, i.e., deliver volumetric spectrally enriched (color) images from deep living tissues at high spatial resolution and in real time. These new-found imaging abilities directly relate to preclinical screening applications in animal models and are foreseen to significantly impact clinical decision making as well.

  13. Opto-acoustic breast imaging with co-registered ultrasound

    NASA Astrophysics Data System (ADS)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Oraevsky, Alexander; Kist, Kenneth; Dornbluth, N. Carol; Otto, Pamela

    2014-03-01

    We present results from a recent study involving the ImagioTM breast imaging system, which produces fused real-time two-dimensional color-coded opto-acoustic (OA) images that are co-registered and temporally interleaved with real-time gray scale ultrasound using a specialized duplex handheld probe. The use of dual optical wavelengths provides functional blood map images of breast tissue and tumors displayed with high contrast based on total hemoglobin and oxygen saturation of the blood. This provides functional diagnostic information pertaining to tumor metabolism. OA also shows morphologic information about tumor neo-vascularity that is complementary to the morphological information obtained with conventional gray scale ultrasound. This fusion technology conveniently enables real-time analysis of the functional opto-acoustic features of lesions detected by readers familiar with anatomical gray scale ultrasound. We demonstrate co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical study that provide new insight into the function of tumors in-vivo. Results from the Feasibility Study show preliminary evidence that the technology may have the capability to improve characterization of benign and malignant breast masses over conventional diagnostic breast ultrasound alone and to improve overall accuracy of breast mass diagnosis. In particular, OA improved specificity over that of conventional diagnostic ultrasound, which could potentially reduce the number of negative biopsies performed without missing cancers.

  14. Reconstruction of an acoustic pressure field in a resonance tube by particle image velocimetry.

    PubMed

    Kuzuu, K; Hasegawa, S

    2015-11-01

    A technique for estimating an acoustic field in a resonance tube is suggested. The estimation of an acoustic field in a resonance tube is important for the development of the thermoacoustic engine, and can be conducted employing two sensors to measure pressure. While this measurement technique is known as the two-sensor method, care needs to be taken with the location of the pressure sensors when conducting pressure measurements. In the present study, particle image velocimetry (PIV) is employed instead of a pressure measurement by a sensor, and two-dimensional velocity vector images are extracted as sequential data from only a one-time recording made by the video camera of the PIV system. The spatial velocity amplitude is obtained from those images, and a pressure distribution is calculated from velocity amplitudes at two points by extending the equations derived for the two-sensor method. By means of this method, problems relating to the locations and calibrations of multiple pressure sensors are avoided. Furthermore, to verify the accuracy of the present method, experiments are conducted employing the conventional two-sensor method and laser Doppler velocimetry (LDV). Then, results by the proposed method are compared with those obtained with the two-sensor method and LDV.
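    As a hedged sketch of the two-sensor idea extended to velocity data: for a lossless plane standing wave the pressure amplitude has the form P(x) = A cos(kx) + B sin(kx), and the linearized momentum equation links it to the velocity amplitude, U(x) = (i/(rho*c)) * (B cos(kx) - A sin(kx)), so two complex velocity amplitudes measured at known positions determine A and B. The function name and parameter defaults below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def pressure_from_two_velocities(u1, u2, x1, x2, freq, rho=1.2, c=343.0):
    """Recover P(x) = A*cos(kx) + B*sin(kx) for a lossless plane standing
    wave from complex velocity amplitudes u1, u2 measured at x1, x2.
    Assumes U(x) = (1j / (rho*c)) * (B*cos(kx) - A*sin(kx))."""
    k = 2 * np.pi * freq / c
    z = rho * c  # characteristic acoustic impedance
    # Two equations U(x1) = u1, U(x2) = u2 in the unknowns (A, B)
    M = (1j / z) * np.array([[-np.sin(k * x1), np.cos(k * x1)],
                             [-np.sin(k * x2), np.cos(k * x2)]])
    A, B = np.linalg.solve(M, np.array([u1, u2]))
    return lambda x: A * np.cos(k * x) + B * np.sin(k * x)
```

    The two measurement positions must not make the 2x2 system singular (e.g. both at velocity nodes), which mirrors the sensor-placement caveat noted in the abstract.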

  15. Acoustic and photoacoustic microscopy imaging of single leukocytes

    NASA Astrophysics Data System (ADS)

    Strohm, Eric M.; Moore, Michael J.; Kolios, Michael C.

    2016-03-01

    An acoustic/photoacoustic microscope was used to create micrometer resolution images of stained cells from a blood smear. Pulse echo ultrasound images were made using a 1000 MHz transducer with 1 μm resolution. Photoacoustic images were made using a fiber coupled 532 nm laser, where energy losses through stimulated Raman scattering enabled output wavelengths from 532 nm to 620 nm. The laser was focused onto the sample using a 20x objective, and the laser spot co-aligned with the 1000 MHz transducer opposite the laser. The blood smear was stained with Wright-Giemsa, a common metachromatic dye that differentially stains the cellular components for visual identification. A neutrophil, lymphocyte and a monocyte were imaged using acoustic and photoacoustic microscopy at two different wavelengths, 532 nm and 600 nm. Unique features in each imaging modality enabled identification of the different cell types. This imaging method provides a new way of imaging stained leukocytes, with applications towards identifying and differentiating cell types, and detecting disease at the single cell level.

  16. [Purkinje images in slit lamp videography : Video article].

    PubMed

    Gellrich, M-M; Kandzia, C

    2016-09-01

    Reflexes that accompany every examination with the slit lamp are usually regarded as annoying and therefore do not receive much attention. In the video available online, clinical information "hidden" in the Purkinje images is analyzed according to our concept of slit lamp videography. In the first part of the video, the four Purkinje images, which are reflections on the eye's optical surfaces, are introduced for the phakic eye. In the pseudophakic eye, however, the refracting surfaces of the intraocular lens (IOL) have excellent optical properties and therefore form Purkinje images 3 and 4 of high quality. The third Purkinje image from the anterior IOL surface in particular, which is usually barely visible in the phakic eye, can be detected deep in the vitreous, enlarged through the eye's own optics as if by a magnifying glass. Its area of reflection can be used to visualize changes of the anterior segment at high contrast. The third Purkinje image carries valuable information about the anterior curvature and, thus, about the power of the IOL. If the same IOL type is implanted in both eyes of a patient, a right-left power difference of 0.5 diopter can often be detected from the difference in size of the respective third Purkinje images. In a historical excursion to the "prenatal phase" of the slit lamp in Uppsala, we show that our most important instrument in clinical work was originally designed for catoptric investigations (of specular reflections). Accordingly, A. Gullstrand called it an ophthalmometric Nernst lamp. PMID:27558688

  17. A combined parabolic-integral equation approach to the acoustic simulation of vibro-acoustic imaging.

    PubMed

    Malcolm, A E; Reitich, F; Yang, J; Greenleaf, J F; Fatemi, M

    2008-11-01

    This paper aims to model ultrasound vibro-acoustography to improve our understanding of the underlying physics of the technique thus facilitating the collection of better images. Ultrasound vibro-acoustography is a novel imaging technique combining the resolution of high-frequency imaging with the clean (speckle-free) images obtained with lower frequency techniques. The challenge in modeling such an experiment is in the variety of scales important to the final image. In contrast to other approaches for modeling such problems, we break the experiment into three parts: high-frequency propagation, non-linear interaction and the propagation of the low-frequency acoustic emission. We then apply different modeling strategies to each part. For the high-frequency propagation we choose a parabolic approximation as the field has a strong preferred direction and small propagation angles. The non-linear interaction is calculated directly with Fourier methods for computing derivatives. Because of the low-frequency omnidirectional nature of the acoustic emission field and the piecewise constant medium we model the low-frequency field with a surface integral approach. We use our model to compare with experimental data and to visualize the relevant fields at points in the experiment where laboratory data is difficult to collect, in particular the source of the low-frequency field. To simulate experimental conditions we perform the simulations with the two frequencies 3 and 3.05 MHz with an inclusion of varying velocity submerged in water.

  18. Near-Field Imaging with Sound: An Acoustic STM Model

    NASA Astrophysics Data System (ADS)

    Euler, Manfred

    2012-10-01

    The invention of scanning tunneling microscopy (STM) 30 years ago opened up a visual window to the nano-world and sparked off a wealth of new methods for investigating and controlling matter and its transformations at the atomic and molecular level. However, an adequate theoretical understanding of the method is demanding; STM images can be considered quantum theory condensed into a pictorial representation. A hands-on model is presented for demonstrating the imaging principles in introductory teaching. It uses sound waves and computer visualization to create mappings of acoustic resonators. The macroscopic simile is made possible by quantum-classical analogies between matter and sound waves. Grounding STM in acoustic experience may help to make the underlying quantum concepts such as tunneling less abstract to students.

  19. Video-rate visible to LWIR hyperspectral image generation exploitation

    NASA Astrophysics Data System (ADS)

    Dombrowski, Mark S.; Willson, Paul

    1999-10-01

    Hyperspectral imaging is the latest advent in imaging technology, providing the potential to extract information about the objects in a scene that is unavailable to panchromatic imagers. This increased utility, however, comes at the cost of tremendously increased data. The ultimate utility of hyperspectral imagery is in the information that can be gleaned from the spectral dimension, rather than in the hyperspectral imagery itself. To have the broadest range of applications, extraction of this information must occur in real-time. Attempting to produce and exploit complete cubes of hyperspectral imagery at video rates, however, presents unique problems for both the imager and the processor, since data rates are scaled by the number of spectral planes in the cube. MIDIS, the Multi-band Identification and Discrimination Imaging Spectroradiometer, allows both real-time collection and processing of hyperspectral imagery over the range of 0.4 micrometer to 12 micrometer. Presented here are the major design challenges and solutions associated with producing high-speed, high-sensitivity hyperspectral imagers operating in the Vis/NIR, SWIR/MWIR and LWIR, and of the electronics capable of handling data rates up to 160 mega-pixels per second, continuously. Beyond design and performance issues associated with producing and processing hyperspectral imagery at such high speeds, this paper also discusses applications of real-time hyperspectral imaging technology. Example imagery includes such problems as buried mine detection, inspecting surfaces, and countering CCD (camouflage, concealment, and deception).

  20. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. There are many algorithms that detect anomalies (outliers) in videos and images that have been introduced in recent years. However, these algorithms behave and perform differently based on differences in domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. There are many evaluation metrics that have been used in the literature such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these schemes by measuring the performance of an existing anomaly detection algorithm.
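    One common evaluation scheme of the kind reviewed here scores a proposed detection as a true positive when it overlaps an unmatched ground-truth anomaly by at least some intersection-over-union (IoU) threshold. A minimal sketch (the 0.5 threshold, greedy matching order, and names are illustrative assumptions, not the paper's exact scheme):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(detections, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching: a detection is a true positive if it
    overlaps a not-yet-matched ground-truth anomaly by >= iou_thresh."""
    matched = set()
    tp = 0
    for det in detections:
        best, best_j = 0.0, None
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            o = iou(det, gt)
            if o > best:
                best, best_j = o, j
        if best >= iou_thresh and best_j is not None:
            matched.add(best_j)
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    return precision, recall
```

    Changing the overlap threshold or the matching order changes the measured precision and recall, which is exactly the kind of bias the paper analyzes.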

  1. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements are discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  2. Ultra high frequency imaging acoustic microscope

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2006-05-23

    An imaging system includes: an object wavefront source and an optical microscope objective all positioned to direct an object wavefront onto an area of a vibrating subject surface encompassed by a field of view of the microscope objective, and to direct a modulated object wavefront reflected from the encompassed surface area through a photorefractive material; and a reference wavefront source and at least one phase modulator all positioned to direct a reference wavefront through the phase modulator and to direct a modulated reference wavefront from the phase modulator through the photorefractive material to interfere with the modulated object wavefront. The photorefractive material has a composition and a position such that interference of the modulated object wavefront and modulated reference wavefront occurs within the photorefractive material, providing a full-field, real-time image signal of the encompassed surface area.

  3. Evaluation of video-printer images as secondary CT images for clinical use

    SciTech Connect

    Doi, K.; Rubin, J.

    1983-01-01

    Video-printer (VP) images of 24 abnormal views from a body CT scanner were made. Although the physical quality of printer images was poor, a group of radiologists and clinicians found that VP images are adequate to confirm the lesion described in the radiology report. The VP images can be used as secondary images, and they can be attached to a report as a part of the radiology service to increase communication between radiologists and clinicians and to prevent the loss of primary images from the radiology file.

  4. Using underwater video imaging as an assessment tool for coastal condition

    EPA Science Inventory

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  5. Application of time reversal acoustics focusing for nonlinear imaging

    NASA Astrophysics Data System (ADS)

    Sarvazyan, Armen; Sutin, Alexander

    2001-05-01

    Time reversal acoustic (TRA) focusing of ultrasound appears to be an effective tool for nonlinear imaging in industrial and medical applications because of its ability to efficiently concentrate ultrasonic energy (close to the diffraction limit) in heterogeneous media. In this study, we used two TRA systems to focus ultrasonic beams with different frequencies on coinciding focal points, thus causing the generation of ultrasonic waves with combination frequencies. Measurements of the intensity of these combination frequency waves provide information on the nonlinear parameter of the medium in the focal region. Synchronized steering of the two TRA focused beams enables obtaining 3-D acoustic nonlinearity images of the object. Each of the TRA systems employed an aluminum resonator with piezotransducers glued to its facet. One of the free facets of each resonator was submerged into a water tank and served as a virtual phased array capable of ultrasound focusing and beam steering. To mimic a medium with spatially varying acoustic nonlinearity, a simple model, a microbubble column in water, was used. Microbubbles were generated by electrolysis of water using a needle electrode. An order of magnitude increase of the sum frequency component was observed when the ultrasound beams were focused in the area with bubbles.

  6. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. Content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all, still images and videos are commonly used formats. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are a set of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environment change. Detecting lakes above ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  7. Application Of Digital Image Processing To Acoustic Ambiguity Functions

    NASA Astrophysics Data System (ADS)

    Sharkey, J. Brian

    1983-03-01

    The passive acoustic ambiguity function is a measure of the cross-spectrum in a Doppler-shift and time-delay space that arises when two or more passive receivers are used to monitor a moving acoustic source. Detection of a signal source in the presence of noise has been treated in the past from a communications-theory point of view, with considerable effort devoted to establishing a threshold to which the maximum value of the function is compared. That approach disregards ambiguity function topography information which in practice is manually used to interpret source characteristics and source kinematics. Because of the two-dimensional representation of the ambiguity function, digital image processing techniques can be easily applied for the purposes of topography enhancement and characterization. This work presents an overview of techniques previously reported as well as more current research being conducted to improve detection performance and automate topography characterization.
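    A brute-force sketch of a cross-ambiguity surface for two complex baseband recordings (the grid, function name, and discretization are illustrative assumptions; practical systems evaluate this with FFTs):

```python
import numpy as np

def cross_ambiguity(s1, s2, fs, delays, dopplers):
    """Magnitude of the cross-ambiguity surface |A(tau, nu)| for two
    complex baseband recordings s1, s2 sampled at rate fs (Hz).
    delays in seconds, dopplers in Hz; brute force, for illustration."""
    n = len(s1)
    t = np.arange(n) / fs
    surface = np.zeros((len(dopplers), len(delays)))
    for i, nu in enumerate(dopplers):
        shifted = s1 * np.exp(-2j * np.pi * nu * t)  # remove trial Doppler
        for j, tau in enumerate(delays):
            lag = int(round(tau * fs))
            if lag >= 0:
                a, b = shifted[lag:], s2[:n - lag]
            else:
                a, b = shifted[:n + lag], s2[-lag:]
            surface[i, j] = abs(np.vdot(b, a))  # coherent correlation
    return surface
```

    The peak of the surface gives the delay/Doppler pair of the source, and the topography around the peak is what the image-processing techniques discussed in the paper enhance and characterize.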

  8. Super deep 3D images from a 3D omnifocus video camera.

    PubMed

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  9. Development of passive submillimeter-wave video imaging systems

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Brömel, Anika; Anders, Solveig; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Meyer, Hans-Georg

    2013-05-01

    Passive submillimeter wave imaging is a concept that has been in the focus of interest as a promising technology for security applications for a number of years. It utilizes the unique optical properties of submillimeter waves and promises an alternative to millimeter-wave and X-ray backscattering portals for personal security screening in particular. Possible application scenarios demand sensitive, fast, and flexible high-quality imaging techniques. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept and a continuous development since 2007. The cameras utilize arrays of superconducting transition-edge sensors (TES), i.e. cryogenic microbolometers, as radiation detectors. The TES operate at temperatures below 1 K, cooled by a closed-cycle cooling system, and are coupled to superconducting readout electronics. By this means, background limited photometry (BLIP) mode is achieved, providing the maximum possible signal-to-noise ratio. At video rates, this leads to a pixel NETD well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 3-10 m, a field of view up to 2 m in height and a diffraction-limited spatial resolution on the order of 1-2 cm are provided. Opto-mechanical scanning systems are part of the optical setup and are capable of frame rates up to 25 frames per second. Both spiraliform and linear scanning schemes have been developed.

  10. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need of a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
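    The ZNCC metric is insensitive to uniform brightness and contrast changes between time-lagged scenes, which is what makes it robust to the luminous complexities mentioned above. A minimal sketch of patch tracking with it (exhaustive search shown for clarity; the function names are assumptions, not the authors' code):

```python
import numpy as np

def zncc(patch, candidate):
    """Zero mean normalised cross correlation of two equal-size patches.
    Returns a value in [-1, 1]; 1 means a perfect match up to an affine
    brightness/contrast change."""
    a = patch - patch.mean()
    b = candidate - candidate.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track_patch(template, frame, stride=1):
    """Exhaustively search the next frame for the template location."""
    th, tw = template.shape
    best_score, best_yx = -2.0, (0, 0)
    for y in range(0, frame.shape[0] - th + 1, stride):
        for x in range(0, frame.shape[1] - tw + 1, stride):
            s = zncc(template, frame[y:y + th, x:x + tw])
            if s > best_score:
                best_score, best_yx = s, (y, x)
    return best_yx, best_score
```

    Tracking the best-match coordinates frame by frame yields the displacement time series from which the frequency-domain response is then computed.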

  12. An introduction to video image compression and authentication technology for safeguards applications

    SciTech Connect

    Johnson, C.S.

    1995-07-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.

  13. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging tool for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated, and host images were recovered with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
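    As a hedged illustration of the Kalman-filtering stage alone (a per-pixel scalar filter assuming a locally static scene; the paper's scheme combines this with correlation, tree-structured nonlinear filtering, and tracking stages, and the names and noise parameters below are assumptions):

```python
import numpy as np

def kalman_denoise(frames, process_var=1e-3, meas_var=0.05):
    """Per-pixel scalar Kalman filter over a stack of video frames
    (shape: n_frames x H x W), assuming a locally static scene."""
    est = frames[0].astype(float)   # initial state estimate
    p = np.full(est.shape, 1.0)     # initial estimate variance
    out = [est.copy()]
    for z in frames[1:]:
        p = p + process_var         # predict: static state, variance grows
        k = p / (p + meas_var)      # Kalman gain
        est = est + k * (z - est)   # update with the new noisy frame
        p = (1.0 - k) * p
        out.append(est.copy())
    return np.stack(out)
```

    With a small process variance the filter behaves like a temporal running average whose weight adapts to the assumed noise levels, suppressing background noise while following slow intensity changes.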

  14. Spatially reduced image extraction from MPEG-2 video: fast algorithms and applications

    NASA Astrophysics Data System (ADS)

    Song, Junehwa; Yeo, Boon-Lock

    1997-12-01

    The MPEG-2 video standards are targeted for high-quality video broadcast and distribution, and are optimized for efficient storage and transmission. However, it is difficult to process MPEG-2 for video browsing and database applications without first decompressing the video. Yeo and Liu have proposed fast algorithms for the direct extraction of spatially reduced images from MPEG-1 video. Reduced images have been demonstrated to be effective for shot detection, shot browsing and editing, and temporal processing of video for video presentation and content annotation. In this paper, we develop new tools to handle the extra complexity in MPEG-2 video for extracting spatially reduced images. In particular, we propose new classes of discrete cosine transform (DCT) domain and DCT inverse motion compensation operations for handling the interlaced modes in the different frame types of MPEG-2, and design new and efficient algorithms for generating spatially reduced images of an MPEG-2 video. We also describe key video applications on the extracted reduced images.
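
The core idea of DCT-domain reduced-image extraction can be illustrated for the simplest case of intra-coded blocks: for an orthonormal 8x8 2-D DCT, the DC coefficient equals 8 times the block mean, so a 1/8-scale image falls out of the DC plane with no inverse transform. The sketch below computes the DC plane from raw pixels for clarity; handling MPEG-2's inter-coded frames and interlaced modes, the actual subject of the paper, is far more involved.

```python
import numpy as np

def blockwise_dc(frame):
    """DC coefficient (orthonormal 2-D DCT) of every 8x8 block.
    DC = 8 * block mean, so no transform evaluation is needed."""
    h, w = frame.shape
    blocks = frame.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    return 8.0 * blocks.mean(axis=(2, 3))

def reduced_image_from_dc(dc):
    """Turn the DC plane into a 1/8-scale grayscale thumbnail."""
    return dc / 8.0
```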

  15. Identifying Vulnerable Plaques with Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Doherty, Joshua Ryan

    The rupture of arterial plaques is the most common cause of ischemic complications including stroke, the fourth leading cause of death and the number one cause of long-term disability in the United States. Unfortunately, because conventional diagnostic tools fail to identify plaques that confer the highest risk, a disabling stroke and/or sudden death is often the first sign of disease. A diagnostic method capable of characterizing plaque vulnerability would likely enhance the predictive ability and ultimately the treatment of stroke before the onset of clinical events. This dissertation evaluates the hypothesis that Acoustic Radiation Force Impulse (ARFI) imaging can noninvasively identify lipid regions, which have been shown to increase a plaque's propensity to rupture, within carotid artery plaques in vivo. The work detailed herein describes development efforts and results from simulations and experiments that were performed to evaluate this hypothesis. To first demonstrate feasibility and evaluate potential safety concerns, finite-element method simulations are used to model the response of carotid artery plaques to an acoustic radiation force excitation. Lipid pool visualization is shown to vary as a function of lipid pool geometry and stiffness. A comparison of the resulting Von Mises stresses indicates that stresses induced by an ARFI excitation are three orders of magnitude lower than those induced by blood pressure. This dissertation also presents the development of a novel pulse inversion harmonic tracking method to reduce clutter-imposed errors in ultrasound-based tissue displacement estimates. This method is validated in phantoms and was found to reduce bias and jitter displacement errors for a marked improvement in image quality in vivo. Lastly, this dissertation presents results from a preliminary in vivo study that compares ARFI imaging derived plaque stiffness with spatially registered composition determined by a Magnetic Resonance Imaging (MRI) gold standard.

  16. Nonlinear acoustic time reversal imaging using the scaling subtraction method

    NASA Astrophysics Data System (ADS)

    Scalerandi, M.; Gliozzi, A. S.; Bruno, C. L. E.; Van Den Abeele, K.

    2008-11-01

    Lab experiments have shown that the imaging of nonlinear scatterers using time reversal acoustics can be a very promising tool for early-stage damage detection. The potential applications are, however, limited by the need for an extremely accurate acquisition system. In order to let nonlinear features emerge from the background noise, it is necessary to enhance the signal-to-noise ratio as much as possible. Here we propose a method for determining the nonlinear components in a recorded time signal that is an alternative to those usually adopted (e.g., fast Fourier analysis). The method is based on the nonlinear physical properties of the solution of the wave equation and exploits the fact that, in a nonlinear medium, the system response no longer scales with the excitation amplitude. In this contribution, we outline the adopted procedure and apply it to a nonlinear time reversal imaging simulation to highlight its advantages with respect to traditional imaging based on a fast Fourier analysis of the recorded signals.
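
The scaling subtraction principle itself is compact: record the response u(t) to a low-amplitude excitation and the response w(t) to an excitation k times larger; in a linear medium w(t) = k·u(t), so the residual w(t) − k·u(t) isolates the nonlinear contribution. A minimal numerical sketch (the signals here are synthetic, not from the paper's experiments):

```python
import numpy as np

def scaling_subtraction(u_low, w_high, k):
    """Scaling subtraction: for a linear medium the response scales with the
    excitation, so w(t) - k*u(t) vanishes; any residual is the nonlinear
    signature of the scatterer."""
    return w_high - k * u_low
```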

  17. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is then determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points
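
Forward ray intersection, used twice in the pipeline above, can be sketched as a small least-squares problem: find the 3-D point minimizing the squared distance to all camera rays. This is the generic textbook formulation, not the authors' exact implementation:

```python
import numpy as np

def ray_intersection(origins, directions):
    """Least-squares forward ray intersection: the 3-D point closest to a
    bundle of rays (origin + t * unit direction), one ray per camera."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
        A += P                           # accumulate normal equations: A p = b
        b += P @ o
    return np.linalg.solve(A, b)
```

With noisy image matches the rays no longer meet exactly, and this solve returns the point that best reconciles them.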

  18. Modern transform design for advanced image/video coding applications

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Topiwala, Pankaj N.

    2008-08-01

    This paper offers an overall review of recent advances in the design of modern transforms for image and video coding applications. Transforms have been an integral part of signal coding applications from the beginning, but the emphasis for most of that history was on true floating-point transforms. Recently, with the proliferation of low-power handheld multimedia devices, a new vision of integer-only transforms that provide high performance yet very low complexity has quickly gained ascendancy. We explore two key design approaches to creating integer transforms, focusing on a systematic, universal method based on decomposition into lifting steps and the use of (dyadic) rational coefficients. This method provides a wealth of solutions, many of which are already in use in leading media codecs today, such as H.264, HD Photo/JPEG XR, and scalable audio. We give early indications of performance in this paper, with fuller results elsewhere.
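
A one-level integer Haar transform built from lifting steps illustrates the dyadic-rational design approach: only additions and shifts, with exact integer invertibility. This is a textbook example of the technique, not the specific transform of H.264 or JPEG XR:

```python
def haar_lift_forward(x0, x1):
    """One predict/update lifting pair for an integer Haar transform,
    using only adds and shifts (dyadic rational coefficients)."""
    d = x1 - x0          # predict step: detail coefficient
    s = x0 + (d >> 1)    # update step: approximation coefficient
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order: exact integer inversion."""
    x0 = s - (d >> 1)
    x1 = d + x0
    return x0, x1
```

Because the inverse subtracts exactly what the forward pass added, the round trip is lossless for any integers, which is what makes lifting attractive for low-power fixed-point hardware.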

  19. Video-rate terahertz electric-field vector imaging

    SciTech Connect

    Takai, Mayuko; Takeda, Masatoshi; Sasaki, Manabu; Tachizaki, Takehiro; Yasumatsu, Naoya; Watanabe, Shinichi

    2014-10-13

    We present an experimental setup that dramatically reduces the measurement time for obtaining spatial distributions of terahertz electric-field (E-field) vectors. The method utilizes electro-optic sampling, and we use a charge-coupled device to detect a spatial distribution of the probe-beam polarization rotation caused by the E-field-induced Pockels effect in a 〈110〉-oriented ZnTe crystal. A quick rotation of the ZnTe crystal allows analyzing the terahertz E-field direction at each image position, and the terahertz E-field vector mapping at a fixed position of an optical delay line is achieved within 21 ms. Video-rate mapping of terahertz E-field vectors is likely to be useful for achieving real-time sensing of terahertz vector beams, vector vortices, and surface topography. The method is also useful for fast polarization analysis of terahertz beams.

  20. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.

  1. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all of the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  2. A Dual Communication and Imaging Underwater Acoustic System

    NASA Astrophysics Data System (ADS)

    Fu, Tricia C.

    A dual communication and imaging underwater acoustic system is proposed and developed throughout this dissertation. Due to the wide variation in underwater channel characteristics, the research here focuses on robustness to multipath in the shallow underwater acoustic environment rather than on high bit-rate applications and signaling schemes. Lower bit-rate applications (hundreds of bits per second to low kbps), such as the transfer of ecological telemetry data, e.g., conductivity or temperature data, are the primary focus of this dissertation. The parallels between direct sequence spread spectrum in digital communication and pulse-echo with pulse compression in imaging, and between channel estimation in communication and range profile estimation in imaging, are drawn, leading to a unified communications and imaging platform. A digital communication algorithm for channel order and channel coefficient estimation and symbol demodulation using Matching Pursuit (MP) with Generalized Multiple Hypothesis Testing (GMHT) is implemented on a programmable DSP in real time, with field experiment results in varying underwater environments for the single-receiver (Rx), single-transmitter (Tx) case. The custom and off-the-shelf hardware used in the single-receiver, single-transmitter set of experiments is detailed as well. This work is then extended to the single-input multiple-output (SIMO) case, and then to the full multiple-input multiple-output (MIMO) case. The results of channel estimation are used for simple range-profile imaging reconstructions. Successful simulated and experimental results for both transducer array configurations are presented and analyzed. Non-real-time symbol demodulation and channel estimation is performed using experimental data from a scaled testing environment. New hardware based on cost-effective fish-finder transducers for a 6 Rx--1 Tx and 6 Rx--4 Tx transducer array is detailed. Lastly, in an application that is neither communication nor
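
Matching Pursuit channel estimation, the core shared by the communication and imaging sides above, can be sketched as a greedy loop over a dictionary of delayed pilot replicas; the GMHT stage and symbol demodulation are omitted, and the interface here is a simplification for illustration:

```python
import numpy as np

def matching_pursuit_channel(rx, pilot, n_taps, sparsity):
    """Greedy Matching Pursuit estimate of a sparse multipath channel:
    repeatedly pick the delay whose shifted pilot best matches the residual,
    record its gain, and subtract that contribution."""
    L = len(pilot)
    # Dictionary: the pilot delayed by 0 .. n_taps-1 samples, zero-padded to len(rx)
    atoms = np.stack([np.pad(np.asarray(pilot, float), (d, len(rx) - L - d))
                      for d in range(n_taps)])
    norms = (atoms ** 2).sum(axis=1)
    h = np.zeros(n_taps)
    residual = np.asarray(rx, float).copy()
    for _ in range(sparsity):
        corr = atoms @ residual / norms          # projection onto each atom
        d = int(np.argmax(np.abs(corr)))         # best-matching delay
        h[d] += corr[d]
        residual -= corr[d] * atoms[d]
    return h
```

The same loop read as imaging: each selected delay/gain pair is one scatterer in a sparse range profile.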

  3. Passive 350 GHz Video Imaging Systems for Security Applications

    NASA Astrophysics Data System (ADS)

    Heinz, E.; May, T.; Born, D.; Zieger, G.; Anders, S.; Zakosarenko, V.; Meyer, H.-G.; Schäffel, C.

    2015-10-01

    Passive submillimeter-wave imaging is a concept that has been in the focus of interest as a promising technology for personal security screening for a number of years. In contrast to established portal-based millimeter-wave scanning techniques, it allows for scanning people from a distance in real time, with high throughput and without a distinct inspection procedure. This opens up new possibilities for scanning, which directly address an urgent security need of modern societies: protecting crowds and critical infrastructure from the growing threat of individual terror attacks. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept and a continuous development since 2007. Arrays of superconducting transition-edge sensors (TES), operated at temperatures below 1 K, are used as radiation detectors. By this means, background-limited performance (BLIP) is achieved, providing the maximum possible signal-to-noise ratio. At video rates, this leads to a temperature resolution well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 5-25 m, a field of view of up to 2 m in height and a diffraction-limited spatial resolution on the order of 1-2 cm are provided. Opto-mechanical scanning systems are part of the optical setup and are capable of frame rates of up to 25 frames per second.

  4. Acoustic Radiation Force Impulse (ARFI) Imaging-Based Needle Visualization

    PubMed Central

    Rotemberg, Veronica; Palmeri, Mark; Rosenzweig, Stephen; Grant, Stuart; Macleod, David; Nightingale, Kathryn

    2011-01-01

    Ultrasound-guided needle placement is widely used in the clinical setting, particularly for central venous catheter placement, tissue biopsy and regional anesthesia. Difficulties with ultrasound guidance in these areas often result from steep needle insertion angles and spatial offsets between the imaging plane and the needle. Acoustic Radiation Force Impulse (ARFI) imaging leads to improved needle visualization because it uses a standard diagnostic scanner to perform radiation force based elasticity imaging, creating a displacement map that displays tissue stiffness variations. The needle visualization in ARFI images is independent of needle-insertion angle and also extends needle visibility out of plane. Although ARFI images portray needles well, they often do not contain the usual B-mode landmarks. Therefore, a three-step segmentation algorithm has been developed to identify a needle in an ARFI image and overlay the needle prediction on a coregistered B-mode image. The steps are: (1) contrast enhancement by median filtration and Laplacian operator filtration, (2) noise suppression through displacement estimate correlation coefficient thresholding and (3) smoothing by removal of outliers and best-fit line prediction. The algorithm was applied to data sets from horizontal 18, 21 and 25 gauge needles between 0 and 4 mm offset in elevation from the transducer imaging plane, and to 18G needles on the transducer axis (in plane) between 10° and 35° from the horizontal. Needle tips were visualized within 2 mm of their actual position for horizontal needle orientations up to 1.5 mm offset in elevation from the transducer imaging plane and for on-axis angled needles between 10° and 35° above the horizontal orientation. We conclude that segmented ARFI images overlaid on matched B-mode images hold promise for improved needle visibility in many clinical applications. PMID:21608445
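
The thresholding and line-fitting core of steps (2) and (3) can be sketched as follows. The median/Laplacian contrast enhancement of step (1) and the outlier removal are omitted, and the threshold values are hypothetical, not the authors' settings:

```python
import numpy as np

def fit_needle_line(disp, corr, disp_thresh, corr_thresh):
    """Toy version of the ARFI needle-segmentation idea: keep pixels with a
    large displacement AND a reliable correlation coefficient, then fit a
    straight line (row = m*col + c) through the surviving candidates."""
    rows, cols = np.nonzero((disp > disp_thresh) & (corr > corr_thresh))
    m, c = np.polyfit(cols, rows, 1)     # least-squares line through candidates
    return m, c
```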

  5. Composing with Images: A Study of High School Video Producers.

    ERIC Educational Resources Information Center

    Reilly, Brian

    At Bell High School (Los Angeles, California), students have been using video cameras, computers and editing machines to create videos in a variety of forms and on a variety of topics; in this setting, video is the textual medium of expression. A study was conducted using participant-observation and interviewing over the course of one school year…

  6. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2-5 without compromising image/video quality. PMID:23202181

  7. Feasibility of High Frequency Acoustic Imaging for Inspection of Containments

    SciTech Connect

    C.N. Corrado; J.E. Bondaryk; V. Godino

    1998-08-01

    The Nuclear Regulatory Commission has a program at the Oak Ridge National Laboratory to provide assistance in their assessment of the effects of potential degradation on the structural integrity and leaktightness of metal containment vessels and steel liners of concrete containment in nuclear power plants. One of the program objectives is to identify a technique(s) for inspection of inaccessible portions of the containment pressure boundary. Acoustic imaging has been identified as one of these potential techniques. A numerical feasibility study investigated the use of high-frequency bistatic acoustic imaging techniques for inspection of inaccessible portions of the metallic pressure boundary of nuclear power plant containment. The range-dependent version of the OASES code developed at the Massachusetts Institute of Technology was utilized to perform a series of numerical simulations. OASES is a well-developed and extensively tested code for evaluation of the acoustic field in a system of stratified fluid and/or elastic layers. Using the code, an arbitrary number of fluid or solid elastic layers are interleaved, with the outer layers modeled as halfspaces. High-frequency vibrational sources were modeled to simulate elastic waves in the steel. The received field due to an arbitrary source array can be calculated at arbitrary depth and range positions. In this numerical study, waves that reflect and scatter from surface roughness caused by modeled degradations (e.g., corrosion) are detected and used to identify and map the steel degradation. Variables in the numerical study included frequency, flaw size, interrogation distance, and sensor incident angle. Based on these analytical simulations, it is considered unlikely that acoustic imaging technology can be used to investigate embedded steel liners of reinforced concrete containment. The thin steel liner and high signal losses to the concrete make this application difficult. Results for portions of steel containment

  8. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, Kevin F.

    1994-01-01

    The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels per frame. This imager offers an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their integration. The imager chip can be divided into three distinct components: (1) image capture via an array of avalanche photodiodes (APD's), (2) charge collection, storage and overflow control via a charge transfer transistor device (CTD), and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front end gain at low noise and low operating voltages while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently work is progressing towards the development of manufacturable designs for each of these component devices. In addition to the development of each of the three distinct components, work towards their integration is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrains the design as well. The progress made during this period is described in detail in Sections 2-4.

  9. Digital video image processing from dental operating microscope in endodontic treatment.

    PubMed

    Suehara, Masataka; Nakagawa, Kan-Ichi; Aida, Natsuko; Ushikubo, Toshihiro; Morinaga, Kazuki

    2012-01-01

    Recently, optical microscopes have been used in endodontic treatment, as they offer advantages in terms of magnification, illumination, and documentation. Documentation is particularly important in presenting images to patients, and can take the form of both still images and motion video. Although high-quality still images can be obtained using a 35-mm film or CCD camera, the quality of still images produced by a video camera is significantly lower. The purpose of this study was to determine the potential of RegiStax in obtaining high-quality still images from a continuous video stream from an optical microscope. Video was captured continuously and sections with the highest luminosity chosen for frame alignment and stacking using the RegiStax program. The resulting stacked images were subjected to wavelet transformation. The results indicate that high-quality images with a large depth of field could be obtained using this method.
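
The select-and-stack idea behind this use of RegiStax can be sketched without the frame-alignment and wavelet-transformation stages: rank video frames by a quality score, keep the best fraction, and average them to suppress noise. Luminosity as the selection criterion follows the abstract; everything else here is a simplification:

```python
import numpy as np

def stack_best_frames(frames, keep_frac=0.25):
    """Rank frames by mean luminosity, keep the best fraction, and average
    them -- a miniature of the select/align/stack workflow (alignment and
    wavelet sharpening omitted)."""
    frames = np.asarray(frames, dtype=float)
    scores = frames.mean(axis=(1, 2))            # per-frame quality score
    keep = max(1, int(len(frames) * keep_frac))
    best = np.argsort(scores)[-keep:]            # indices of top-scoring frames
    return frames[best].mean(axis=0)             # stack: average to cut noise
```

Averaging N frames reduces uncorrelated noise by roughly the square root of N, which is what deepens the apparent depth of field and quality of the stacked still.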

  10. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  12. Long range acoustic imaging of the continental shelf environment: the Acoustic Clutter Reconnaissance Experiment 2001.

    PubMed

    Ratilal, Purnima; Lai, Yisan; Symonds, Deanelle T; Ruhlmann, Lilimar A; Preston, John R; Scheer, Edward K; Garr, Michael T; Holland, Charles W; Goff, John A; Makris, Nicholas C

    2005-04-01

    An active sonar system is used to image wide areas of the continental shelf environment by long-range echo sounding at low frequency. The bistatic system, deployed in the STRATAFORM area south of Long Island in April-May of 2001, imaged a large number of prominent clutter events over ranges spanning tens of kilometers in near real time. Roughly 3000 waveforms were transmitted into the water column. Wide-area acoustic images of the ocean environment were generated in near real time for each transmission. Between roughly 10 and more than 100 discrete and localized scatterers were registered in each image. This amounts to a total of at least 30000 scattering events that could be confused with those from submerged vehicles over the period of the experiment. Bathymetric relief in the STRATAFORM area is extremely benign, with slopes typically less than 0.5 degrees according to high-resolution (30 m sampled) bathymetric data. Most of the clutter occurs in regions where the bathymetry is locally level and does not coregister with seafloor features. No statistically significant difference is found in the frequency of occurrence per unit area of repeatable clutter inside versus outside of areas occupied by subsurface river channels.

  13. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
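
As a hedged illustration of an entropy-based patch-heterogeneity measure (not the authors' exact HIP definition), one can quantize each patch of a frame to a symbol, here its mean gray level, and take the Shannon entropy of the symbol distribution; a flat frame scores zero, a busy frame scores high:

```python
import numpy as np

def patch_heterogeneity_index(frame, patch=8, bins=16):
    """Entropy of the distribution of quantized patch means for one frame.
    High entropy = heterogeneous patches; a rough analogue of a HIP value."""
    h, w = frame.shape
    means = frame[:h - h % patch, :w - w % patch] \
        .reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    hist, _ = np.histogram(means, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()              # empirical symbol probabilities
    return float(-(p * np.log2(p)).sum())        # Shannon entropy in bits
```

Evaluating this per frame yields a curve over the video, the analogue of the HIP curve used for key frame selection.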

  15. Informative frame detection from wireless capsule video endoscopic images

    NASA Astrophysics Data System (ADS)

    Bashar, Md. Khayrul; Mori, Kensaku; Suenaga, Yasuhito; Kitasaka, Takayuki; Mekada, Yoshito

    2008-03-01

    Wireless capsule endoscopy (WCE) is a new clinical technology permitting the visualization of the small bowel, the most difficult segment of the digestive tract. The major drawback of this technology is the large amount of time required for video diagnosis. In this study, we propose a method for informative frame detection by isolating useless frames that are substantially covered by turbid fluids or contaminated with other materials, e.g., faecal matter and semi-processed or unabsorbed foods. Such materials and fluids present a wide range of colors, from brown to yellow, and/or bubble-like texture patterns. The detection scheme, therefore, consists of two stages: highly contaminated non-bubbled (HCN) frame detection and significantly bubbled (SB) frame detection. Local color moments in the Ohta color space are used to characterize HCN frames, which are isolated by a Support Vector Machine (SVM) classifier in Stage-1. The remaining frames pass to Stage-2, where Laguerre-Gauss Circular Harmonic Functions (LG-CHFs) extract the characteristics of the bubble structures in a multi-resolution framework. An automatic segmentation method is designed to extract the bubbled regions based on local absolute energies of the CHF responses, derived from the grayscale version of the original color image. Final detection of the informative frames is obtained by applying a threshold operation to the extracted regions. An experiment with 20,558 frames from three videos shows excellent average detection accuracy for the proposed method (96.75%), compared with Gabor-based (74.29%) and discrete-wavelet-based (62.21%) features.
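As an illustration of Stage-1's features only (the SVM itself is omitted), the sketch below converts RGB to the Ohta color space (I1 = (R+G+B)/3, I2 = (R−B)/2, I3 = (2G−R−B)/4, a standard linear transform) and computes the first three color moments per block; the block grid and moment definitions are assumptions, not the paper's exact configuration:

```python
import numpy as np

def to_ohta(rgb):
    """RGB -> Ohta I1, I2, I3 color space."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([(r + g + b) / 3.0, (r - b) / 2.0, (2 * g - r - b) / 4.0], axis=-1)

def block_color_moments(img, grid=4):
    """Mean, standard deviation, and skewness per block and channel.

    img: H x W x C float array; returns a vector of grid*grid*C*3 features,
    a sketch of the local color moments fed to a Stage-1 classifier.
    """
    h, w, c = img.shape
    feats = []
    for bi in range(grid):
        for bj in range(grid):
            blk = img[bi * h // grid:(bi + 1) * h // grid,
                      bj * w // grid:(bj + 1) * w // grid].reshape(-1, c).astype(float)
            mu = blk.mean(axis=0)
            sd = blk.std(axis=0)
            sk = np.zeros(c)
            nz = sd > 0  # skewness of a constant block is defined as 0 here
            sk[nz] = ((blk - mu) ** 3).mean(axis=0)[nz] / sd[nz] ** 3
            feats.extend([mu, sd, sk])
    return np.concatenate(feats)
```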

  16. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.

    1994-01-01

    The primary goal of this research is to develop a solid-state high-definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels/frame. This imager will offer an order-of-magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their subsequent integration. The camera chip can be divided into three distinct functions: (1) image capture via an array of avalanche photodiodes (APDs); (2) charge collection, storage, and overflow control via a charge transfer transistor device (CTD); and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APDs allows for front-end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently, work is progressing towards the optimization of each of these component devices. In addition to the development of each of the three distinct components, work towards their integration and manufacturability is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system-level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrain the design as well. The progress made during this period is described in detail.

  17. Single-channel stereoscopic video imaging modality based on a transparent rotating deflector

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Jun, Eunkwon; Ha, Myungjin; Lee, Sangyeob; Yu, SungKon; Jang, Seul G.; Jung, Byungjo

    2015-03-01

    This paper introduces a stereoscopic video imaging modality based on a transparent rotating deflector (TRD). Sequential two-dimensional (2D) left and right images were obtained by rotating the TRD on a stepping motor synchronized with a complementary metal-oxide-semiconductor camera, and the components of the imaging modality were controlled through general-purpose input/output ports using a microcontroller unit. In this research, live stereoscopic videos were visualized on a personal computer by both active shutter 3D and passive polarization 3D methods. The imaging modality was characterized by evaluating the stereoscopic video image generation and the rotation characteristics of the TRD. The level of 3D perception was estimated in terms of simplified human stereovision. The results show that the single-channel stereoscopic video imaging modality has the potential to become an economical, compact stereoscopic device, as the system components are amenable to miniaturization, and could be applied in a wide variety of fields.

  18. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 256 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are promising tools for collecting the low-frequency vibration signal of a system. PMID:23202026
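The paper's "simple model" is not reproduced in the abstract, but standard sampling theory predicts where a tone above the Nyquist limit will appear: the apparent frequency is the true frequency folded into [0, fs/2]. A minimal sketch of that folding rule:

```python
def aliased_frequency(f_signal, frame_rate):
    """Apparent frequency of a tone of f_signal Hz sampled at frame_rate fps.

    Frequencies above the Nyquist limit (frame_rate / 2) fold back into the
    observable band, producing non-physical modes of the kind described above.
    """
    r = f_signal % frame_rate
    return min(r, frame_rate - r)
```

For example, a 60 Hz vibration filmed at 50 fps shows up as a spurious 10 Hz mode.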

  19. High-resolution, near-real-time x-ray video imaging without image intensification

    NASA Astrophysics Data System (ADS)

    Mengers, Paul

    1993-12-01

    This paper discusses a type of x-ray camera designed to generate standard RS-170 video output that does not use x-ray or optical image intensifiers. Instead, it employs a very sensitive, very high resolution CCD sensor which views an x-ray-to-light conversion screen directly through a high-speed imaging lens. This new solid-state TV camera, which is described later, has very low readout noise plus unusually high gain, which enables it to generate real-time video at incident flux levels typical of many inspection applications. Perhaps more important is its ability to integrate for multiple frame intervals on the chip, followed by the output of a standard RS-170 format video frame containing two balanced interlaced fields. In this integrating mode, excellent quality images of low-contrast objects can be obtained with integration intervals of only a few tenths of a second. The basic elements of this type of camera are described, and applications are discussed where this approach appears to have important advantages over other methods in common use.

  20. Acoustical imaging of spheres above a reflecting surface

    NASA Astrophysics Data System (ADS)

    Chambers, David; Berryman, James

    2003-04-01

    An analytical study using the MUSIC method of subspace imaging is presented for the case of spheres above a reflecting boundary. The field scattered from the spheres and the reflecting boundary is calculated analytically, neglecting interactions between spheres. The singular value decomposition of the response matrix is calculated and the singular vectors divided into signal and noise subspaces. Images showing the estimated sphere locations are obtained by backpropagating the noise vectors using either the free-space Green's function or the Green's function that incorporates reflections from the boundary. We show that the latter Green's function improves imaging performance after applying a normalization that compensates for the interference between direct and reflected fields. We also show that the best images are attained in some cases when the number of singular vectors in the signal subspace exceeds the number of spheres. This is consistent with previous analysis showing multiple eigenvalues of the time reversal operator for spherical scatterers [Chambers and Gautesen, J. Acoust. Soc. Am. 109 (2001)]. [Work performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.]
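A minimal numerical sketch of the MUSIC step described above, using a free-space Green's function and a rank-one (single-sphere, Born) response matrix; the array geometry, wavelength, and scatterer position are invented for illustration:

```python
import numpy as np

def music_image(K, greens, n_signal):
    """MUSIC pseudospectrum from a response matrix K.

    greens: (n_grid, n_elems) matrix whose rows are Green's-function vectors
    g(r) from the array elements to each candidate image point. Peaks of the
    returned spectrum mark estimated scatterer locations.
    """
    U, _, _ = np.linalg.svd(K)
    Un = U[:, n_signal:]                      # noise subspace
    proj = np.abs(greens.conj() @ Un) ** 2    # projections onto noise vectors
    return 1.0 / (proj.sum(axis=1) + 1e-12)

# Synthetic example (illustrative geometry): one point scatterer at (0.3, 5.0)
# seen by an 8-element line array, wavelength 1.
k = 2.0 * np.pi / 1.0
elems = np.stack([np.linspace(-2.0, 2.0, 8), np.zeros(8)], axis=1)

def g(pt):
    d = np.linalg.norm(elems - pt, axis=1)
    return np.exp(1j * k * d) / d             # free-space Green's function

target = np.array([0.3, 5.0])
K = np.outer(g(target), g(target))            # rank-1 Born response matrix
grid = [np.array([x, y])
        for y in np.arange(4.0, 6.05, 0.1)
        for x in np.arange(-1.0, 1.05, 0.1)]
G = np.array([g(p) for p in grid])
best = grid[int(np.argmax(music_image(K, G, n_signal=1)))]
```

With noiseless data, the noise subspace is exactly orthogonal to the true steering vector, so the pseudospectrum peaks sharply at the scatterer's grid point.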

  1. Diagnostic agreement when comparing still and video imaging for the medical evaluation of child sexual abuse.

    PubMed

    Killough, Emily; Spector, Lisa; Moffatt, Mary; Wiebe, Jan; Nielsen-Parker, Monica; Anderst, Jim

    2016-02-01

    Still photo imaging is often used in medical evaluations of child sexual abuse (CSA), but video imaging may be superior. We aimed to compare still images to videos with respect to diagnostic agreement regarding hymenal deep notches and transections in post-pubertal females. Additionally, we evaluated the role of experience and expertise on agreement. We hypothesized that videos would result in improved diagnostic agreement among multiple evaluators as compared to still photos. This was a prospective quasi-experimental study using imaging modality as the quasi-independent variable. The dependent variable was diagnostic agreement of participants regarding the presence/absence of findings indicating penetrative trauma on non-acute post-pubertal genital exams. Participants were medical personnel who regularly perform CSA exams. Diagnostic agreement was evaluated utilizing a retrospective selection of videos and still photos obtained directly from the videos. Videos and still photos were embedded into an on-line survey as sixteen cases. One hundred sixteen participants completed the study. Participant diagnosis was more likely to agree with the study center diagnosis when using video (p<0.01). Use of video resulted in statistically significant changes in diagnosis in four of eight cases. In two cases, the diagnosis of the majority of participants changed from no hymenal transection to transection present. No difference in agreement was found based on experience or expertise. Use of video vs. still images resulted in increased agreement with the original examiner and changes in diagnostic impressions in the review of CSA exams. Further study is warranted, as video imaging may have significant impacts on diagnosis. PMID:26746111

  3. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (a copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e., images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g., detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g., nasogastric tubes).

  4. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.

  5. Standing tree decay detection by using acoustic tomography images

    NASA Astrophysics Data System (ADS)

    Espinosa, Luis F.; Arciniegas, Andres F.; Prieto, Flavio A.; Cortes, Yolima; Brancheriau, Loïc.

    2015-04-01

    The acoustic tomographic technique is used in the diagnosis of standing trees. This paper presents a segmentation methodology to separate defective regions in cross-section tomographic images obtained with the Arbotom® device. A set of experiments was performed using two trunk samples obtained from a eucalyptus tree, simulating defects by drilling holes with known geometry, size, and position, and using different numbers of sensors. Tomographic images from trees presenting real defects were also studied, by testing two different species with significant internal decay. Tomographic images and photographs of the trunk cross-section were processed to align the propagation velocity data with the corresponding region, healthy or defective. The segmentation was performed by finding a velocity threshold value that separates the defective region; a logistic regression model was fitted to obtain the value that maximizes a performance criterion, with the geometric mean selected as that criterion. Segmentation accuracy increased with the number of sensors; defect position also influenced the result, with improved accuracy for centric defects.
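The threshold selection described above can be sketched as a sweep over candidate velocity thresholds that maximizes the geometric mean of sensitivity and specificity. The paper fits a logistic regression; the direct sweep below is a simplified stand-in, and the assumption that decayed wood shows lower acoustic velocity than sound wood is ours:

```python
import numpy as np

def best_velocity_threshold(velocities, is_defective):
    """Pick the velocity threshold maximizing sqrt(sensitivity * specificity).

    Regions at or below the threshold are classified as defective (assumes
    defects lower the propagation velocity).
    """
    v = np.asarray(velocities, dtype=float)
    y = np.asarray(is_defective, dtype=bool)
    best_t, best_g = None, -1.0
    for t in np.unique(v):
        pred = v <= t                                   # "defective" prediction
        tpr = (pred & y).sum() / max(y.sum(), 1)        # sensitivity
        tnr = (~pred & ~y).sum() / max((~y).sum(), 1)   # specificity
        gmean = float(np.sqrt(tpr * tnr))
        if gmean > best_g:
            best_t, best_g = float(t), gmean
    return best_t, best_g
```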

  6. Video imaging system and thermal mapping of the molten hearth in an electron beam melting furnace

    SciTech Connect

    Miszkiel, M.E.; Davis, R.A.; Van Den Avyle, J.A.

    1995-12-31

    This project was initiated to develop an enhanced video imaging system for the Liquid Metal Processing Laboratory Electron Beam (EB) Melting Furnace at Sandia and to use color video images to map the temperature distribution of the surface of the molten hearth. In a series of test melts, the color output of the video image was calibrated against temperatures measured by an optical pyrometer and CCD camera viewing port above the molten pool. To prevent potential metal vapor deposition onto line-of-sight optical surfaces above the pool, argon backfill was used along with a pinhole aperture to obtain the video image. The geometry of the optical port to the hearth set the limits for the focus lens and the CCD camera's field of view. Initial melts were completed with the pyrometer and pinhole aperture port in a fixed position. Using commercially available vacuum components, a second flange assembly was constructed to provide flexibility in choosing pyrometer target sites on the hearth and to adjust the field of view for the focus lens/CCD combination. RGB video images processed from the melts verified that red-wavelength light captured with the video camera could be calibrated against the optical pyrometer target temperatures and used to generate temperature maps of the hearth surface. Two-color ratio thermal mapping using red and green video images, which has theoretical advantages, was less successful due to probable camera non-linearities in the red and green image intensities.
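Two-color ratio thermal mapping rests on the Wien approximation: the ratio of intensities at two wavelengths depends on temperature but not on (gray-body) emissivity, which cancels in the ratio. A sketch of the inversion, with illustrative red/green wavelengths:

```python
import math

C2 = 1.4388e-2  # second radiation constant c2 = h*c/k_B, in m*K

def wien_intensity(lam, T):
    """Spectral radiance in the Wien approximation (arbitrary scale)."""
    return lam ** -5.0 * math.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    """Temperature from the two-color intensity ratio i1/i2 at lam1, lam2 (m).

    From ln(i1/i2) = 5*ln(lam2/lam1) + (C2/T)*(1/lam2 - 1/lam1),
    assuming equal emissivity at both wavelengths.
    """
    denom = math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / denom
```

In practice, as the abstract notes, camera non-linearities in the two channels break the assumed proportionality between pixel value and spectral radiance, which is one reason the two-color approach underperformed here.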

  7. Quantification of nearshore morphology based on video imaging

    USGS Publications Warehouse

    Alexander, P.S.; Holman, R.A.

    2004-01-01

    The Argus network is a series of video cameras with aerial views of beaches around the world. Intensity contrasts in time exposure images reveal areas of preferential breaking, which are closely tied to underlying bed morphology. This relationship was further investigated, including the effect of tidal elevation and wave height on the presence of wave breaking and its cross-shore position over sand bars. Computerized methods of objectively extracting shoreline and sand bar locations were developed, allowing the vast quantity of data generated by Argus to be more effectively examined. Once features were identified in the images, daily alongshore mean values were taken to create time series of shoreline and sand bar location, which were analyzed for annual cycles and cross-correlated with wave data to investigate environmental forcing and response. These data extraction techniques were applied to images from four of the Argus camera sites. A relationship between wave height and shoreline location was found in which increased wave heights resulted in more landward shoreline positions; given the short lag times over which this correlation was significant, and that the strong annual signal in wave height was not replicated in the shoreline time series, it is likely that this relationship is a result of set-up during periods of large waves. Wave height was also found to have an effect on sand bar location, whereby an increase in wave height resulted in offshore bar migration. This correlation was significant over much longer time lags than the relationship between wave height and shoreline location, and a strong annual signal was found in the location of almost all observed bars, indicating that the sand bars are migrating with changes in wave height. In the case of the site with multiple sand bars, the offshore bars responded more significantly to changes in wave height, whereas the innermost bar seemed to be shielded from incident wave energy by breaking over the other
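The lagged cross-correlation used to relate wave height to shoreline and bar position can be sketched as follows; the sign convention (a positive lag means the morphology series lags the wave series) is an assumption for illustration:

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation between x and y at lags -max_lag..max_lag.

    At a positive lag L, x[t] is paired with y[t + L], so a peak at L > 0
    means y (e.g., bar position) responds to x (e.g., wave height) L steps later.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for L in lags:
        if L >= 0:
            a, b = x[:len(x) - L], y[L:]
        else:
            a, b = x[-L:], y[:len(y) + L]
        corrs.append(float(np.corrcoef(a, b)[0, 1]))
    return lags, np.array(corrs)
```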

  8. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  9. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents’ Perspectives

    PubMed Central

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O’Connor, Alexander; Collins, Michael J.

    2015-01-01

    This study examined adolescents’ attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one’s attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players’ attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents’ social cognitive judgments. PMID:25729336

  10. Large-scale investigation of plaster detachments in historical murals by acoustic stimulation and video-holographic detection

    NASA Astrophysics Data System (ADS)

    Guelker, Gerd; Hinsch, Klaus D.; Joost, Holger

    2001-10-01

    In the conservation of historical murals, an important issue is the detection of plaster or paint layers that detach from the supporting material and thus threaten to fall off. Commonly, walls are inspected by the acoustic response to gentle finger-tapping (the percussion method). Since this is a costly and cumbersome technique, there is a need for a metrological instrument serving the same purpose. In the last few years we have shown that a time-average version of electronic speckle pattern interferometry (ESPI) with increased sensitivity, in combination with acoustic excitation of the object, can be a powerful tool for monitoring of loose areas. It offers full-field, real-time video capability and has the advantage of non-contact and remote operation, which, for example, is extremely useful in large buildings. Recently, a fully computer-based evaluation and control system was added to assist in the introduction of the method as a generally approved tool in artwork monitoring. Principles of the method and instrumental features of the equipment are presented, and some results obtained with the computerized system in the church and chapel at St. John's convent at Müstair, Switzerland, together with their interpretation, are demonstrated.

  11. Acoustic-integrated dynamic MR imaging for a patient with obstructive sleep apnea.

    PubMed

    Chen, Yunn-Jy; Shih, Tiffany Ting-Fang; Chang, Yi-Chung; Hsu, Ying-Chieh; Huon, Leh-Kiong; Lo, Men-Tzung; Pham, Van-Truong; Lin, Chen; Wang, Pa-Chun

    2015-12-01

    Obstructive sleep apnea syndrome (OSAS) is caused by multi-level upper airway obstruction. Anatomic changes at the sites of obstruction may modify the physical or acoustic properties of snores. The surgical success of OSA treatment depends upon precise localization of the obstructed levels. We present a patient with OSAS who received simultaneous dynamic MRI and snore acoustic recordings. The synchronized image and acoustic information successfully characterized the sites of temporal obstruction during sleep-disordered breathing events.

  12. Dual-frequency acoustic droplet vaporization detection for medical imaging.

    PubMed

    Arena, Christopher B; Novell, Anthony; Sheeran, Paul S; Puett, Connor; Moyer, Linsey C; Dayton, Paul A

    2015-09-01

    Liquid-filled perfluorocarbon droplets emit a unique acoustic signature when vaporized into gas-filled microbubbles using ultrasound. Here, we conducted a pilot study in a tissue-mimicking flow phantom to explore the spatial aspects of droplet vaporization and investigate the effects of applied pressure and droplet concentration on image contrast and axial and lateral resolution. Control microbubble contrast agents were used for comparison. A confocal dual-frequency transducer was used to transmit at 8 MHz and passively receive at 1 MHz. Droplet signals were of significantly higher energy than microbubble signals. This resulted in improved signal separation and high contrast-to-tissue ratios (CTRs). Specifically, with a peak negative pressure (PNP) of 450 kPa applied at the focus, the CTR of B-mode images was 18.3 dB for droplets and -0.4 dB for microbubbles. The lateral resolution was dictated by the size of the droplet activation area, with lower pressures resulting in smaller activation areas and improved lateral resolution (0.67 mm at 450 kPa). The axial resolution in droplet images was dictated by the size of the initial droplet and was independent of the properties of the transmit pulse (3.86 mm at 450 kPa). In post-processing, time-domain averaging (TDA) improved droplet and microbubble signal separation at high pressures (640 kPa and 700 kPa). Taken together, these results indicate that it is possible to generate high-sensitivity, high-contrast images of vaporization events. In the future, this has the potential to be applied in combination with droplet-mediated therapy to track treatment outcomes, or as a standalone diagnostic system to monitor the physical properties of the surrounding environment. PMID:26415125
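The contrast-to-tissue ratio quoted above is a decibel energy ratio between a contrast region and a tissue region of the image; a minimal sketch of that computation (the region selection and exact estimator in the paper may differ):

```python
import numpy as np

def contrast_to_tissue_ratio_db(contrast_signal, tissue_signal):
    """Contrast-to-tissue ratio in dB from mean signal energies.

    Energies are mean squared magnitudes of the two regions' samples,
    hence the 10*log10 (power-ratio) convention.
    """
    e_c = np.mean(np.abs(np.asarray(contrast_signal, float)) ** 2)
    e_t = np.mean(np.abs(np.asarray(tissue_signal, float)) ** 2)
    return 10.0 * np.log10(e_c / e_t)
```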

  13. Acoustic and Elastodynamic Redatuming for VSP Salt Dome Flank Imaging

    NASA Astrophysics Data System (ADS)

    Lu, R.; Willis, M.; Toksoz, N.

    2007-12-01

    We apply an extension of the concept of Time Reversed Acoustics (TRA) for imaging salt dome flanks using Vertical Seismic Profile (VSP) data. We demonstrate its performance and capabilities on both synthetic acoustic and elastic seismic data from a Gulf of Mexico (GOM) model. This target-oriented strategy eliminates the need for the traditional complex process of velocity estimation, model building, and iterative depth migration to remove the effects of the salt canopy and surrounding overburden. In this study, we use data from surface shots recorded in a well from a walkaway VSP survey. The method, called redatuming, creates a geometry as if the source and receiver pairs had been located in the borehole at the positions of the receivers. This process generates effective downhole shot gathers without any knowledge of the overburden velocity structure. The resulting shot gathers are less complex since the VSP ray paths from the surface source are shortened and moved to be as if they started in the borehole, then reflected off the salt flank region and were captured in the borehole. After redatuming, we apply multiple passes of prestack migration from the reference datum of the borehole. In our example, the first-pass migration, using only a simple vertical velocity gradient model, reveals the outline of the salt edge. A second pass of reverse-time prestack depth migration, using the full two-way wave equation, is performed with an updated velocity model that now consists of the velocity gradient and the salt dome. The second-pass migration brings out the dipping sediments abutting the salt flank, because these reflectors were illuminated by energy that bounced off the salt flank, forming prismatic reflections.

  14. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914–1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane; then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.
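The paper's spectral fusion of several closely resembling images is more elaborate, but the core idea, compress by keeping part of the spectral plane and encrypt with a random phase key, can be sketched on a single image (the coefficient-selection rule and key generation below are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def compress_encrypt(img, keep_frac, key_seed):
    """Keep the largest-magnitude Fourier coefficients (compression), then
    multiply by a random phase key (encryption)."""
    F = np.fft.fft2(img.astype(float))
    n_keep = int(keep_frac * F.size)
    thresh = np.sort(np.abs(F).ravel())[-n_keep]
    mask = np.abs(F) >= thresh                 # sparse spectral-plane support
    rng = np.random.default_rng(key_seed)
    key = np.exp(1j * 2.0 * np.pi * rng.random(F.shape))
    return F * mask * key, mask

def decrypt_decompress(spec, key_seed):
    """Undo the phase key and invert the (lossy) spectrum."""
    rng = np.random.default_rng(key_seed)
    key = np.exp(1j * 2.0 * np.pi * rng.random(spec.shape))
    return np.real(np.fft.ifft2(spec * np.conj(key)))

def rms_error(a, b):
    """RMS reconstruction criterion, as in the abstract."""
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Reconstruction with the correct key recovers the image up to the compression loss; a wrong key leaves phase noise and a much larger RMS error.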

  15. Single-channel stereoscopic video imaging modality based on transparent rotating deflector.

    PubMed

    Radfar, Edalat; Jang, Won Hyuk; Freidoony, Leila; Park, Jihoon; Kwon, Kichul; Jung, Byungjo

    2015-10-19

    In this study, we developed a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (TRD). Sequential two-dimensional (2D) left and right images were obtained through the TRD synchronized with a camera, and the components of the imaging modality were controlled by a microcontroller unit. The imaging modality was characterized by evaluating the stereoscopic video image generation, rotation of the TRD, heat generation by the stepping motor, and image quality and its stability in terms of the structural similarity index. The degree of depth perception was estimated and subjective analysis was performed to evaluate the depth perception improvement. The results show that the single-channel stereoscopic video imaging modality may: 1) overcome some limitations of conventional stereoscopic video imaging modalities; 2) be a potential economical compact stereoscopic imaging modality if the system components can be miniaturized; 3) be easily integrated into current 2D optical imaging modalities to produce a stereoscopic image; and 4) be applied to various medical and industrial fields.

  16. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  17. From computer images to video presentation: Enhancing technology transfer

    NASA Technical Reports Server (NTRS)

    Beam, Sherilee F.

    1994-01-01

    With NASA placing increased emphasis on transferring technology to outside industry, NASA researchers need to evaluate many aspects of their efforts in this regard. Often it may seem like too much self-promotion to many researchers. However, industry's use of video presentations in sales, advertising, public relations, and training should be considered. Today, the most typical presentation at NASA is through the use of vu-graphs (overhead transparencies), which can be effective for text or static presentations. For full-blown color and sound presentations, however, the best method is videotape. In fact, it is frequently more convenient due to its portability and the availability of viewing equipment. This talk describes techniques for creating a video presentation through the use of a combined researcher and video professional team.

  18. Characterizing Response to Elemental Unit of Acoustic Imaging Noise: An fMRI Study

    PubMed Central

    Luh, Wen-Ming; Talavage, Thomas M.

    2010-01-01

    Acoustic imaging noise produced during functional magnetic resonance imaging (fMRI) studies can hinder auditory fMRI research analysis by altering the properties of the acquired time-series data. Acoustic imaging noise can be especially confounding when estimating the time course of the hemodynamic response (HDR) in auditory event-related fMRI experiments. This study is motivated by the desire to establish a baseline function that can serve not only as a comparison to other quantities of acoustic imaging noise for determining how detrimental one's experimental noise is, but also as a foundation for a model that compensates for the response to acoustic imaging noise. Therefore, the amplitude and spatial extent of the HDR to the elemental unit of acoustic imaging noise (i.e., a single ping) associated with echoplanar acquisition were characterized and modeled. Results from this fMRI study at 1.5 T indicate that the group-averaged HDR in left and right auditory cortex to acoustic imaging noise (duration of 46 ms) has an estimated peak magnitude of 0.29% (right) to 0.48% (left) signal change from baseline, peaks between 3 and 5 s after stimulus presentation, and returns to baseline and remains within the noise range approximately 8 s after stimulus presentation. PMID:19304477
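The reported HDR shape (peak between 3 and 5 s, return to baseline by about 8 s) is close to the canonical double-gamma response used throughout fMRI analysis. A sketch with illustrative parameters (not the authors' fitted model):

```python
import numpy as np
from math import gamma

def double_gamma_hdr(t, peak=4.0, under=10.0, ratio=6.0):
    """Canonical double-gamma HDR: a positive lobe peaking near `peak`
    seconds followed by a small undershoot (parameters are illustrative)."""
    h = (t ** peak * np.exp(-t)) / gamma(peak + 1) \
        - (t ** under * np.exp(-t)) / (ratio * gamma(under + 1))
    return h / h.max()  # normalize to unit peak

t = np.arange(0.0, 20.0, 0.1)   # seconds after the imaging-noise ping
h = double_gamma_hdr(t)
t_peak = t[np.argmax(h)]        # falls in the 3-5 s window reported above
```

Scaling the unit-peak curve by the reported 0.29-0.48% signal change gives a usable baseline regressor for imaging-noise responses.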

  19. Negative refraction induced acoustic concentrator and the effects of scattering cancellation, imaging, and mirage

    NASA Astrophysics Data System (ADS)

    Wei, Qi; Cheng, Ying; Liu, Xiao-jun

    2012-07-01

    We present a three-dimensional acoustic concentrator capable of significantly enhancing the sound intensity in the compressive region with scattering cancellation, imaging, and mirage effects. The concentrator shell is built from isotropic gradient negative-index materials, which together with an exterior host medium slab constructs a pair of complementary media. The enhancement factor, which can approach infinity by tuning the geometric parameters, is always much higher than that of a traditional concentrator made of positive-index materials with the same size. The acoustic scattering theory is applied to derive the pressure field distribution of the concentrator, which is consistent with the numerical full-wave simulations. The inherent acoustic impedance match at the interfaces of the shell, as well as the inverse processes of “negative refraction—progressive curvature—negative refraction” for arbitrary sound rays, can exactly cancel the scattering of the concentrator. In addition, the concentrator shell can also function as an acoustic spherical magnifying superlens, which produces a perfect image with the same shape, with bigger geometric and acoustic parameters, located at a shifted position. Some acoustic mirages are then observed whereby the waves radiated from (scattered by) an object located in the center region may seem to be radiated from (scattered by) its image. Based on the mirage effect, we further propose an intriguing acoustic transformer which can transform the sound scattering pattern of one object into that of another object at will, with arbitrary geometric, acoustic, and location parameters.

  20. Acoustic angiography: a new high frequency contrast ultrasound technique for biomedical imaging

    NASA Astrophysics Data System (ADS)

    Shelton, Sarah E.; Lindsey, Brooks D.; Gessner, Ryan; Lee, Yueh; Aylward, Stephen; Lee, Hyunggyun; Cherin, Emmanuel; Foster, F. Stuart; Dayton, Paul A.

    2016-05-01

    Acoustic angiography is a new approach to high-resolution contrast-enhanced ultrasound imaging enabled by ultra-broadband transducer designs. The high-frequency technique provides both high resolution and signal separation from tissue, which does not produce significant harmonics in the same frequency range. This approach enables imaging of microvasculature in vivo with high resolution and signal-to-noise ratio, producing images that resemble x-ray angiography. Data show that acoustic angiography can provide important information about the presence of disease based on vascular patterns, and may enable a new paradigm in medical imaging.

  1. Negative refraction imaging of acoustic metamaterial lens in the supersonic range

    SciTech Connect

    Han, Jianning; Wen, Tingdun; Yang, Peng; Zhang, Lu

    2014-05-15

    Acoustic metamaterials with a negative refractive index are the most promising route to overcoming the diffraction limit of acoustic imaging and achieving ultrahigh resolution. In this paper, we use a locally resonant phononic crystal as the unit cell to construct an acoustic negative-refraction lens. Based on the vibration model of the phononic crystal, negative effective parameters of the lens are obtained when it is excited near the system resonance frequency. Simulation results show that negative refraction of the acoustic lens can be achieved when a sound wave is transmitted through the phononic crystal plate. The patterns of the imaging field agree well with those of the incident wave, while the dispersion is very weak. The unit cell size in the simulation is 0.0005 m and the wavelength of the sound source is 0.02 m, from which we show that the acoustic signal can be manipulated by structures with dimensions much smaller than the wavelength of the incident wave.

  2. Tracking Energy Flow Using a Volumetric Acoustic Intensity Imager (VAIM)

    NASA Technical Reports Server (NTRS)

    Klos, Jacob; Williams, Earl G.; Valdivia, Nicolas P.

    2006-01-01

    A new measurement device has been invented at the Naval Research Laboratory which instantaneously images the intensity vector throughout a three-dimensional volume nearly a meter on a side. The measurement device consists of a nearly transparent spherical array of 50 inexpensive microphones optimally positioned on an imaginary spherical surface of radius 0.2 m. Front-end signal processing uses coherence analysis to produce multiple phase-coherent holograms in the frequency domain, each related to references located on suspect sound sources in an aircraft cabin. The analysis uses either SVD or Cholesky decomposition methods applied to ensemble averages of the cross-spectral density with the fixed references. The holograms are mathematically processed using spherical NAH (nearfield acoustical holography) to convert the measured pressure field into a vector intensity field in the volume of maximum radius 0.4 m centered on the sphere origin. The utility of this probe is evaluated in a detailed analysis of a recent in-flight experiment conducted in cooperation with Boeing and NASA on NASA's Aries 757 aircraft. In this experiment the trim panels and insulation were removed over a section of the aircraft, and the bare panels and windows were instrumented with accelerometers to use as references for the VAIM. Results show excellent success at locating and identifying the sources of interior noise in flight in the frequency range of 0 to 1400 Hz. This work was supported by NASA and the Office of Naval Research.

  3. Breaking the acoustic diffraction limit in photoacoustic imaging with multiple speckle illumination

    NASA Astrophysics Data System (ADS)

    Chaigne, Thomas; Gateau, Jérôme; Allain, Marc; Katz, Ori; Gigan, Sylvain; Sentenac, Anne; Bossy, Emmanuel

    2016-03-01

    In deep photoacoustic imaging, resolution is inherently limited by acoustic diffraction, and ultrasonic frequencies cannot be arbitrarily increased because of attenuation in tissue. Here we report on the use of multiple speckle illumination to perform super-resolution photoacoustic imaging. We show that analyzing the speckle-induced second-order fluctuations of the photoacoustic signal, combined with deconvolution, makes it possible to resolve optically absorbing structures below the acoustic diffraction limit.
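The principle behind the second-order fluctuation analysis can be demonstrated in one dimension: because the variance of the photoacoustic signal over independent speckle patterns involves the squared point spread function, two absorbers that merge in the mean image separate in the fluctuation image. A toy simulation (hypothetical Gaussian PSF, pixel-independent speckle, no deconvolution step):

```python
import numpy as np

rng = np.random.default_rng(7)

n, sigma, a = 256, 8.0, 7       # grid size, acoustic PSF width, absorber offset
x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                # diffraction-limited acoustic PSF

absorbers = np.zeros(n)
absorbers[n // 2 - a] = absorbers[n // 2 + a] = 1.0  # closer than the PSF width

def pa_image(illum):
    """One photoacoustic acquisition: the illuminated absorption map
    blurred by the acoustic point spread function."""
    return np.convolve(absorbers * illum, psf, mode="same")

# fully developed speckle has exponentially distributed intensity
acqs = np.array([pa_image(rng.exponential(1.0, n)) for _ in range(10000)])
mean_img = acqs.mean(axis=0)    # conventional image: the two absorbers merge
fluct_img = acqs.var(axis=0)    # second-order fluctuations: the PSF is squared,
                                # so the effective width shrinks by sqrt(2)
```

In `mean_img` the two absorbers blur into a single central lobe, while `fluct_img` shows two distinct peaks with a dip between them, which is the resolution gain the paper then sharpens further by deconvolution.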

  4. Thinking Images: Doing Philosophy in Film and Video

    ERIC Educational Resources Information Center

    Parkes, Graham

    2009-01-01

    Over the past several decades film and video have been steadily infiltrating the philosophy curriculum at colleges and universities. Traditionally, teachers of philosophy have not made much use of "audiovisual aids" in the classroom beyond the chalk board or overhead projector, with only the more adventurous playing audiotapes, for example, or…

  5. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  6. Analysis of Particle Image Velocimetry (PIV) Data for Acoustic Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    Acoustic velocity measurements were taken using Particle Image Velocimetry (PIV) in a Normal Incidence Tube configuration at various frequency, phase, and amplitude levels. This report presents the results of the PIV analysis and data reduction portions of the test and details the processing that was done. Estimates of lower measurement sensitivity levels were determined based on the PIV image quality, correlation, and noise-level parameters used in the test. Comparisons of the measurements with linear acoustic theory are presented. The onset of nonlinear, harmonic-frequency acoustic levels was also studied for decibel and frequency levels ranging from 90 to 132 dB and 500 to 3000 Hz, respectively.
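The core PIV step, estimating particle displacement between two interrogation windows by locating the cross-correlation peak, can be sketched as follows (integer-pixel accuracy only; real PIV adds sub-pixel peak fitting, and velocity follows as displacement times pixel size divided by the inter-frame delay):

```python
import numpy as np

rng = np.random.default_rng(3)

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel shift between two interrogation
    windows via FFT-based cross-correlation (the core PIV step)."""
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.ifft2(fa.conj() * fb).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap the circular peak location into a signed shift
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx) in pixels

# synthetic particle image shifted by a known amount
img = rng.random((64, 64))
dy, dx = 3, -5
shifted = np.roll(img, (dy, dx), axis=(0, 1))
est = piv_displacement(img, shifted)
```

Dividing the recovered shift by the laser-pulse separation (and multiplying by the pixel scale) yields the acoustic particle velocity at that window.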

  7. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr.; Younane Abousleiman

    2004-04-01

    The research during this project has concentrated on developing a correlation between rock deformation mechanisms and their acoustic velocity signatures. This has included investigating: (1) the acoustic signature of drained and undrained unconsolidated sands, (2) the acoustic emission signature of deforming high-porosity rocks (in comparison to their low-porosity, high-strength counterparts), (3) the effects of deformation on anisotropic elastic and poroelastic moduli, and (4) the acoustic tomographic imaging of damage development in rocks. Each of these four areas involves triaxial experimental testing of weak porous rocks or unconsolidated sand together with measurement of acoustic properties. The research is directed at determining the seismic velocity signature of damaged rocks so that 3-D or 4-D seismic imaging can be utilized to image rock damage. These four areas of study are described in the report: (1) Triaxial compression experiments have been conducted on unconsolidated Oil Creek sand at high confining pressures. (2) Initial experiments on measuring the acoustic emission activity from deforming high-porosity Danian chalk were accomplished; these indicate that the AE activity was of very low amplitude. (3) A series of triaxial compression experiments was conducted to investigate the effects of induced stress on the anisotropy developed in dynamic elastic and poroelastic parameters in rocks. (4) Tomographic acoustic imaging was utilized to image the internal damage in a deforming porous limestone sample. Results indicate that the deformation damage induced in rocks during laboratory experimentation can be imaged tomographically in the laboratory. By extension, the results also indicate that 4-D seismic imaging of a reservoir may become a powerful tool for imaging reservoir deformation (including compaction and subsidence) and for imaging zones where drilling operations may encounter hazardous shallow water flows.

  8. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    PubMed

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation.

  10. Preliminary studies of video images of smoke dispersion in the near wake of a model building

    NASA Astrophysics Data System (ADS)

    Huber, Alan H.; Pal Arya, S.; Rajala, Sarah A.; Borek, James W.

    A summary of analyses of video images of smoke in a wind tunnel study of dispersion in the near wake of a model building is presented. The analyses provide information on both the instantaneous and time-average patterns of dispersion. Since the images represent vertically-integrated or crosswind-integrated smoke concentration, only the primary spatial and temporal scales of pollutant dispersion can be examined. Special graphic displays of the results are presented to assist in the data interpretation. The video image format is shown to have great potential as an easily quantifiable electronic medium for studying the dispersion of smoke.

  11. Correction of spatially varying image and video motion blur using a hybrid camera.

    PubMed

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.
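One building block of such hybrid-camera deblurring is deconvolving the high-resolution frame once a blur kernel is known (here the kernel is given directly; in the paper it is estimated with help from the auxiliary high-frame-rate stream). A sketch using Wiener deconvolution, which is one standard choice, not necessarily the authors' exact deconvolution step:

```python
import numpy as np

def wiener_deconv(blurred, kernel, k=1e-4):
    """Frequency-domain Wiener deconvolution with a known blur kernel.
    `k` is the noise-to-signal regularization constant."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * H.conj() / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(5)
sharp = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[0, :7] = 1.0 / 7.0              # horizontal motion blur, 7 px
# circular convolution of the sharp frame with the blur kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
restored = wiener_deconv(blurred, kernel, k=1e-4)
err = np.abs(restored - sharp).mean()  # much smaller than the blur error
```

The spatially varying case in the paper amounts to estimating and applying a different kernel per region, with super-resolution handling what deconvolution alone cannot recover.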

  12. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustics environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, hear what is going on in the environment, degrade crew performance and operations, and create habitability concerns. Superfluous noise emissions can also create the inability to hear alarms or other important auditory cues such as equipment malfunctioning. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustics environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, the "designing in" of acoustics in the development of hardware and systems, and by monitoring, testing and verifying the levels to ensure that they are acceptable.

  13. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper, a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using illumination-invariant chromaticity histograms in an independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.
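A minimal version of histogram-based shot boundary detection, using plain intensity histograms rather than the paper's illumination-invariant chromaticity histograms in the ICA feature space, looks like this:

```python
import numpy as np

def hist_diff(f1, f2, bins=16):
    """L1 distance between normalized intensity histograms of two frames."""
    h1, _ = np.histogram(f1, bins=bins, range=(0.0, 1.0))
    h2, _ = np.histogram(f2, bins=bins, range=(0.0, 1.0))
    return np.abs(h1 / h1.sum() - h2 / h2.sum()).sum()

def detect_cuts(frames, threshold=0.5):
    """Flag frame indices where the consecutive-frame histogram
    distance spikes above a threshold (an abrupt shot boundary)."""
    return [i for i in range(1, len(frames))
            if hist_diff(frames[i - 1], frames[i]) > threshold]

# toy sequence: five dark frames, then an abrupt cut to five bright frames
rng = np.random.default_rng(0)
dark = [np.clip(rng.normal(0.2, 0.05, (32, 32)), 0, 1) for _ in range(5)]
bright = [np.clip(rng.normal(0.8, 0.05, (32, 32)), 0, 1) for _ in range(5)]
cuts = detect_cuts(dark + bright)
```

Swapping the intensity histogram for a chromaticity histogram computed in an IC feature space gives the illumination invariance the paper relies on; the thresholding logic stays the same.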

  14. 12-Month-Old Infants' Perception of Attention Direction in Static Video Images

    ERIC Educational Resources Information Center

    von Hofsten, Claes; Dahlstrom, Emma; Fredriksson, Ylva

    2005-01-01

    Twelve-month-old infants' ability to perceive gaze direction in static video images was investigated. The images showed a woman who performed attention-directing actions by looking or pointing toward 1 of 4 objects positioned in front of her (2 on each side). When the model just pointed at the objects, she looked straight ahead, and when she just…

  15. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This space shuttle orbiter payload bay (PLB) video image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). The image is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  16. Acoustic and optical borehole-wall imaging for fractured-rock aquifer studies

    USGS Publications Warehouse

    Williams, J.H.; Johnson, C.D.

    2004-01-01

    Imaging with acoustic and optical televiewers results in continuous and oriented 360° views of the borehole wall from which the character, relation, and orientation of lithologic and structural planar features can be defined for studies of fractured-rock aquifers. Fractures are more clearly defined under a wider range of conditions on acoustic images than on optical images including dark-colored rocks, cloudy borehole water, and coated borehole walls. However, optical images allow for the direct viewing of the character of and relation between lithology, fractures, foliation, and bedding. The most powerful approach is the combined application of acoustic and optical imaging with integrated interpretation. Imaging of the borehole wall provides information useful for the collection and interpretation of flowmeter and other geophysical logs, core samples, and hydraulic and water-quality data from packer testing and monitoring. © 2003 Elsevier B.V. All rights reserved.

  17. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
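The bisection procedure over a quality ladder can be sketched as follows, with the assessor's forced-choice response replaced by a toy oracle; on a 32-level ladder it needs about five comparisons instead of a linear scan:

```python
def find_jnd(levels, same_quality):
    """Bisection search for the lowest quality level still judged
    indistinguishable from the anchor (the highest level).

    `same_quality(level)` stands in for the assessor's binary forced
    choice against the anchor; `levels` is sorted from low to high
    quality, and responses are assumed monotone in quality.
    """
    lo, hi = 0, len(levels) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if same_quality(levels[mid]):
            hi = mid          # still looks the same: search lower quality
        else:
            lo = mid + 1      # visibly worse: the JND is above mid
    return levels[lo]

# toy oracle: anything at or above quality level 37 is indistinguishable
ladder = list(range(20, 52))  # e.g., 32 coded quality levels
jnd = find_jnd(ladder, lambda q: q >= 37)
```

Real assessor responses are noisy rather than perfectly monotone, so practical designs repeat comparisons near the boundary; the bisection skeleton above is what keeps the comparison count logarithmic.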

  18. Comparison of Kodak Professional Digital Camera System images to conventional film, still video, and freeze-frame images

    NASA Astrophysics Data System (ADS)

    Kent, Richard A.; McGlone, John T.; Zoltowski, Norbert W.

    1991-06-01

    Electronic cameras provide near real-time image evaluation with the benefits of digital storage methods for rapid transmission or computer processing and enhancement of images. But how does their image quality compare to that of conventional film? A standard Nikon F-3™ 35 mm SLR camera was transformed into an electro-optical camera by replacing the film back with Kodak's KAF-1400V (or KAF-1300L) megapixel CCD array detector back and a processing accessory. Images taken with these Kodak electronic cameras were compared to those using conventional films and to several still video cameras. Quantitative and qualitative methods were used to compare images from these camera systems. Images captured on conventional analog video systems provide a maximum of 450-500 TV lines of resolution, depending upon the camera resolution, storage method, and viewing system resolution. The Kodak Professional Digital Camera System™ exceeded this resolution and more closely approached that of film.

  19. The path to COVIS: A review of acoustic imaging of hydrothermal flow regimes

    NASA Astrophysics Data System (ADS)

    Bemis, Karen G.; Silver, Deborah; Xu, Guangyu; Light, Russ; Jackson, Darrell; Jones, Christopher; Ozer, Sedat; Liu, Li

    2015-11-01

    Acoustic imaging of hydrothermal flow regimes started with the incidental recognition of a plume on a routine sonar scan for obstacles in the path of the human-occupied submersible ALVIN. Developments in sonar engineering, acoustic data processing and scientific visualization have been combined to develop technology which can effectively capture the behavior of focused and diffuse hydrothermal discharge. This paper traces the development of these acoustic imaging techniques for hydrothermal flow regimes from their conception through to the development of the Cabled Observatory Vent Imaging Sonar (COVIS). COVIS has monitored such flow eight times a day for several years. Successful acoustic techniques for estimating plume entrainment, bending, vertical rise, volume flux, and heat flux are presented as is the state-of-the-art in diffuse flow detection.

  20. Computer Vision Tools for Finding Images and Video Sequences.

    ERIC Educational Resources Information Center

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  1. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes on a short time scale, using observations taken a few hours apart. Each observation (previous and current) is a short video sequence acquired by the UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples of non-relevant changes are parallaxes caused by 3D structures in the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.

  2. Acoustic micro-Doppler radar for human gait imaging.

    PubMed

    Zhang, Zhaonian; Pouliquen, Philippe O; Waxman, Allen; Andreou, Andreas G

    2007-03-01

    A portable acoustic micro-Doppler radar system for the acquisition of human gait signatures in indoor and outdoor environments is reported. Signals from an accelerometer attached to the leg support the identification of the components in the measured micro-Doppler signature. The acoustic micro-Doppler system described in this paper is simpler and offers advantages over the widely used electromagnetic wave micro-Doppler radars.
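Micro-Doppler signatures are typically read off a spectrogram, in which periodic limb motion shows up as a frequency-modulated band around the carrier return. A toy example with a simulated sinusoidally modulated echo (illustrative parameters, not the paper's hardware):

```python
import numpy as np

def spectrogram(sig, fs, nfft=256, hop=64):
    """Magnitude STFT: a time-frequency map in which micro-Doppler
    signatures (e.g., oscillating limb returns) appear as bands."""
    win = np.hanning(nfft)
    frames = [sig[i:i + nfft] * win
              for i in range(0, len(sig) - nfft, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    return spec, freqs

# toy echo: a 1 kHz carrier whose phase is modulated by a 2 Hz gait cycle,
# sweeping the instantaneous Doppler between about 900 and 1100 Hz
fs, f0 = 8000.0, 1000.0
t = np.arange(0, 2.0, 1 / fs)
phase = 2 * np.pi * f0 * t + 50 * np.sin(2 * np.pi * 2.0 * t)
spec, freqs = spectrogram(np.cos(phase), fs)
peak_track = freqs[spec.argmax(axis=1)]  # instantaneous Doppler estimate
```

Tracing the per-frame peak recovers the oscillating Doppler track; gait classification then works on features of that track (cadence, Doppler spread) rather than the raw echo.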

  3. Evaluation of the wake effects on plume dispersion using video image analysis

    NASA Astrophysics Data System (ADS)

    Rajala, Sarah A.; Trotter, David S.

    1990-03-01

    Video images of smoke flow in the wake of a model building which were collected in previous wind tunnel studies conducted by EPA at its Fluid Modeling Facility were further analyzed. Three distinct research projects were conducted. The first project evaluated existing image analysis/processing techniques to determine the contents of the data, develop a scheme for separating smoke from the background, and ultimately determine the potential for analyzing the motion characteristics of the smoke flow. The second project used the theory of fractals to extract information from the smoke images. Results from these two projects identified a number of difficulties in trying to characterize smoke images. In the third project, a new technique for video imaging using laser sheet lighting was developed and tested. The resulting smoke images were observed to be more distinct and the noise levels were lower.

  4. Evaluation of the wake effects on plume dispersion using video image analysis

    SciTech Connect

    Rajala, S.A.; Trotter, D.S.

    1990-03-01

    Video images of smoke flow in the wake of a model building which were collected in previous wind-tunnel studies conducted by EPA at its Fluid Modeling Facility were further analyzed. Three distinct research projects were conducted. The first project evaluated existing image analysis/processing techniques to determine the contents of the data, develop a scheme for separating the smoke from the background, and ultimately determine the potential for analyzing the motion characteristics of the smoke flow. The second project used the theory of fractals to extract information from the smoke images. Results from these two projects identified a number of difficulties in trying to characterize smoke images. In the third project, a new technique for video imaging using laser sheet lighting was developed and tested. The resulting smoke images were observed to be more distinct and the noise levels were lower.

  5. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  6. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from a change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the two sites. Moreover, multiple users can look around in different directions at the same time from a single viewpoint in a visualized dynamic real world. In experiments, we have shown that the proposed system is useful for internet telepresence.

  7. A software oscilloscope for DOS computers with an integrated remote control for a video tape recorder. The assignment of acoustic events to behavioural observations.

    PubMed

    Höller, P

    1995-12-01

    With only a little knowledge of programming IBM compatible computers in Basic, it is possible to create a digital software oscilloscope with sampling rates up to 17 kHz (depending on the CPU- and bus-speed). The only additional hardware requirement is a common sound card compatible with the Soundblaster. The system presented in this paper is built to analyse the direction a flying bat is facing during sound emission. For this reason the system works with some additional hardware devices, in order to monitor video sequences at the computer screen, overlaid by an online oscillogram. Using an RS232-interface for a Panasonic video tape recorder both the oscillogram and the video tape recorder can be controlled simultaneously and moreover be analysed frame by frame. Not only acoustical events, but also APs, myograms, EEGs and other physiological data can be digitized and analysed in combination with the behavioural data of an experimental subject.

  8. An on-line video image processing system for real-time neutron radiography

    NASA Astrophysics Data System (ADS)

    Fujine, Shigenori; Yoneda, Kenji; Kanda, Keiji

    1983-09-01

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm image resolution was obtained by using a super-high-quality TV camera, developed for X-radiography, viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, so many test samples can be sequentially observed when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240×256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR-type reactor fuels and for the investigation of moving objects.

  9. Video image processing greatly enhances contrast, quality, and speed in polarization-based microscopy

    PubMed Central

    1981-01-01

    Video cameras with contrast and black level controls can yield polarized light and differential interference contrast microscope images with unprecedented image quality, resolution, and recording speed. The theoretical basis and practical aspects of video polarization and differential interference contrast microscopy are discussed and several applications in cell biology are illustrated. These include: birefringence of cortical structures and beating cilia in Stentor, birefringence of rotating flagella on a single bacterium, growth and morphogenesis of echinoderm skeletal spicules in culture, ciliary and electrical activity in a balancing organ of a nudibranch snail, and acrosomal reaction in activated sperm. PMID:6788777

  10. Video image processing greatly enhances contrast, quality, and speed in polarization-based microscopy.

    PubMed

    Inoué, S

    1981-05-01

    Video cameras with contrast and black level controls can yield polarized light and differential interference contrast microscope images with unprecedented image quality, resolution, and recording speed. The theoretical basis and practical aspects of video polarization and differential interference contrast microscopy are discussed and several applications in cell biology are illustrated. These include: birefringence of cortical structures and beating cilia in Stentor, birefringence of rotating flagella on a single bacterium, growth and morphogenesis of echinoderm skeletal spicules in culture, ciliary and electrical activity in a balancing organ of a nudibranch snail, and acrosomal reaction in activated sperm. PMID:6788777

  11. Computing vanishing points for self-steering vehicles using video image features

    NASA Astrophysics Data System (ADS)

    Snailum, Nic P.; Smith, Martin C. B.

    2000-05-01

    This paper describes a method of finding the vanishing point from lines of perspective in a video image in order to derive self-steering information for a mobile robot. Finding the vanishing point reliably in a cluttered video image can be computationally intensive and often suffers from the problem of accumulations split across several accumulator bins. A polar histogram search method is investigated which is also found to suffer from this problem, and a solution is presented which is then applied to real imagery obtained from an autonomous mobile robot.
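As a minimal sketch of the underlying geometry (not the paper's polar-histogram method), candidate perspective lines can be intersected in homogeneous coordinates and a robust average taken; the median here is an assumed robustness choice, a crude stand-in for accumulation that avoids the split-bin problem differently.

```python
import numpy as np

def vanishing_point(lines):
    """Estimate a vanishing point from perspective lines.

    Each line is given as (x1, y1, x2, y2). Pairwise intersections are
    computed via cross products in homogeneous coordinates and their
    median is returned (illustrative sketch, not the paper's accumulator).
    """
    homog = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])
             for x1, y1, x2, y2 in lines]
    pts = []
    for i in range(len(homog)):
        for j in range(i + 1, len(homog)):
            p = np.cross(homog[i], homog[j])
            if abs(p[2]) > 1e-9:          # skip parallel pairs (point at infinity)
                pts.append(p[:2] / p[2])
    return np.median(np.array(pts), axis=0)
```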

  12. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.

  13. Computer Evaluation Of Real-Time X-Ray And Acoustic Images

    NASA Astrophysics Data System (ADS)

    Jacoby, M. H.; Loe, R. S.; Dondes, P. A.

    1983-03-01

    The weakest link in the inspection process is the subjective interpretation of data by inspectors. To overcome this troublesome fact, computer-based analysis systems have been developed. In the field of nondestructive evaluation (NDE) there is a large class of inspections that can benefit from computer analysis. X-ray images (both film and fluoroscopic) and acoustic images lend themselves to automatic analysis, as do the one-dimensional signals associated with ultrasonic, eddy current and acoustic emission testing. Computer analysis can enhance and evaluate subtle details. Flaws can be located and measured, and acceptance decisions made by computer in a consistent and objective manner. This paper describes the interactive, computer-based analysis of real-time x-ray images and acoustic images of graphite/epoxy adhesively bonded structures.

  14. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations, and demonstrate the merits of this novel 3-D reconstruction paradigm.

  15. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations, and demonstrate the merits of this novel 3-D reconstruction paradigm. PMID:19380272

  16. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  17. Progress in passive submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m² and a spatial resolution of 1-2 cm is provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden threat detection and illustrate possible application scenarios.

  18. Modeling hemodynamic responses in auditory cortex at 1.5 T using variable duration imaging acoustic noise.

    PubMed

    Hu, Shuowen; Olulade, Olumide; Castillo, Javier Gonzalez; Santos, Joseph; Kim, Sungeun; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2010-02-15

    A confound for functional magnetic resonance imaging (fMRI), especially for auditory studies, is the presence of imaging acoustic noise generated mainly as a byproduct of rapid gradient switching during volume acquisition and, to a lesser extent, of the radiofrequency transmit. This work utilized a novel pulse sequence to present actual imaging acoustic noise for characterization of the induced hemodynamic responses and assessment of linearity in the primary auditory cortex with respect to noise duration. Results show that responses to brief-duration (46 ms) imaging acoustic noise are highly nonlinear, while responses to longer-duration (>1 s) imaging acoustic noise become approximately linear, with the right primary auditory cortex exhibiting a higher degree of nonlinearity than the left for the investigated noise durations. This study also assessed the spatial extent of activation induced by imaging acoustic noise, showing that the use of modeled responses (specific to imaging acoustic noise) as the reference waveform revealed additional activations in the auditory cortex not observed with a canonical gamma variate reference waveform, suggesting an improvement in detection sensitivity for imaging acoustic noise-induced activity. Longer-duration (1.5 s) imaging acoustic noise was observed to induce activity that expanded outwards from Heschl's gyrus to cover the superior temporal gyrus as well as parts of the middle temporal gyrus and insula, potentially affecting higher-level acoustic processing.

  19. Characterization of acoustic streaming and heating using synchronized infrared thermography and particle image velocimetry.

    PubMed

    Layman, Christopher N; Sou, In Mei; Bartak, Rico; Ray, Chittaranjan; Allen, John S

    2011-09-01

    Real-time measurements of acoustic streaming velocities and surface temperature fields using synchronized particle image velocimetry and infrared thermography are reported. Measurements were conducted using a 20 kHz Langevin type acoustic horn mounted vertically in a model sonochemical reactor of either degassed water or a glycerin-water mixture. These dissipative phenomena are found to be sensitive to small variations in the medium viscosity, and a correlation between the heat flux and vorticity was determined for unsteady convective heat transfer.

  20. Enhancing thermal video using a public database of images

    NASA Astrophysics Data System (ADS)

    Qadir, Hemin; Kozaitis, S. P.; Ali, Ehsan

    2014-05-01

    We present a system to display nighttime imagery with natural colors using a public database of images. We initially combined two spectral bands, thermal and visible, to enhance night-vision imagery; however, the fused image gave an unnatural color appearance. Therefore, a color transfer based on a look-up table (LUT) was used to replace the false color appearance with a colormap derived from a daytime reference image obtained from a public database using the GPS coordinates of the vehicle. Because of the computational demand of deriving the colormap from the reference image, we created an additional local database of colormaps. Reference images from the public database were compared to a compact local database to retrieve one of a limited number of colormaps representing several driving environments. Each colormap in the local database was stored with the image from which it was derived. To retrieve a colormap, we compared the histogram of the fused image with the histograms of the images in the local database. The colormap of the best match was then used for the fused image. Continuously selecting and applying colormaps in this way offers a convenient way to color night-vision imagery.
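The retrieval step can be sketched as follows: compare the fused image's histogram against the reference image stored with each colormap in the local database and return the colormap of the best match. The database layout, bin count, and histogram-intersection score below are assumptions for illustration, not the paper's exact matching criterion.

```python
import numpy as np

def best_colormap(fused_image, database):
    """Pick a colormap from a small local database by histogram similarity.

    `database` maps an environment name to (reference_image, colormap),
    with pixel values assumed normalized to [0, 1]. Histogram intersection
    scores each reference; the colormap stored with the best match wins.
    """
    def hist(img):
        h, _ = np.histogram(img, bins=32, range=(0.0, 1.0))
        return h / max(h.sum(), 1)

    h_fused = hist(fused_image)
    scores = {name: np.minimum(h_fused, hist(ref)).sum()
              for name, (ref, _cmap) in database.items()}
    best = max(scores, key=scores.get)
    return database[best][1]
```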

  1. Image enhancement using a range gated MCPII video system with a 180-ps FWHM shutter

    SciTech Connect

    Thomas, M.C.; Yates, G.J.; Zadgarino, P.

    1995-09-01

    The video image of a target submerged in a scattering medium was improved through the use of range gating techniques. The target, an Air Force resolution chart, was submerged in 18 in. of a colloidal suspension of tincture green soap in water. The target was illuminated with pulsed light from a Raman-shifted, frequency-doubled Nd:YAG laser having a wavelength of 559 nm and a width of 20 ps FWHM. The laser light reflected by the target, along with the light scattered by the soap, was imaged onto a microchannel-plate image intensifier (MCPII). The output from the MCPII was then recorded with an RS-170 video camera and a video digitizer. The MCPII was gated on with a pulse synchronously timed to the laser pulse. The relative timing between the reflected laser pulse and the shuttering of the MCPII determined the distance to the imaged region. The resolution of the image was influenced by the MCPII's shutter time. A comparison was made between the resolution of images obtained with 6 ns, 500 ps and 180 ps FWHM (8 ns, 750 ps and 250 ps off-to-off) shutter times. It was found that the image resolution was enhanced by using the faster shutter, since the longer exposures allowed light scattered by the water to be recorded as well. The presence of scattered light in the image increased the noise, thereby reducing the contrast and the resolution.
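The relation between gate timing and the imaged region is simple round-trip timing: range = (propagation speed × gate delay) / 2. A minimal sketch, assuming light propagating in water (refractive index ≈ 1.33, so v ≈ 2.25 × 10⁸ m/s; the example delay is illustrative, not from the paper):

```python
C_WATER = 2.25e8  # approx. speed of light in water, m/s (n ~ 1.33)

def gate_range(delay_s, medium_speed=C_WATER):
    """Distance to the imaged region from the gate delay.

    The pulse travels to the target and back, so the one-way range is
    half the round-trip distance covered during the delay.
    """
    return medium_speed * delay_s / 2.0
```

For example, a 4 ns delay between the laser pulse and the MCPII gate would select a region about 0.45 m away in water.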

  2. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last few years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.

  3. Liver reserve function assessment by acoustic radiation force impulse imaging

    PubMed Central

    Sun, Xiao-Lan; Liang, Li-Wei; Cao, Hui; Men, Qiong; Hou, Ke-Zhu; Chen, Zhen; Zhao, Ya-E

    2015-01-01

    AIM: To evaluate the utility of liver reserve function assessment by acoustic radiation force impulse (ARFI) imaging in patients with liver tumors. METHODS: Seventy-six patients with liver tumors were enrolled in this study. Serum biochemical indexes, such as alanine aminotransferase (ALT), aspartate aminotransferase (AST), serum albumin (ALB), total bilirubin (T-Bil), and other indicators were observed. Liver stiffness (LS) was measured by ARFI imaging; measurements were repeated 10 times, and the average value of the results was taken as the final LS value. An indocyanine green (ICG) retention test was performed, and ICG-K and ICG-R15 were recorded. Child-Pugh (CP) scores were assigned based on each patient's preoperative biochemical tests and physical condition. Correlations among CP scores, ICG-R15, ICG-K and LS values were observed and analyzed using either the Pearson correlation coefficient or the Spearman rank correlation coefficient. The Kruskal-Wallis test was used to compare LS values across CP scores, and the receiver-operator characteristic (ROC) curve was used to analyze liver reserve function assessment accuracy. RESULTS: LS in the ICG-R15 10%-20% group was significantly higher than in the ICG-R15 < 10% group; the difference was statistically significant (2.19 ± 0.27 vs 1.59 ± 0.32, P < 0.01). LS in the ICG-R15 > 20% group was significantly higher than in the ICG-R15 < 10% group; the difference was statistically significant (2.92 ± 0.29 vs 1.59 ± 0.32, P < 0.01). The LS value in patients with CP class A was lower than in patients with CP class B (1.57 ± 0.34 vs 1.86 ± 0.27, P < 0.05), while the LS value in patients with CP class B was lower than in patients with CP class C (1.86 ± 0.27 vs 2.47 ± 0.33, P < 0.01). LS was positively correlated with ICG-R15 (r = 0.617, P < 0.01) and CP score (r = 0.772, P < 0.01). Meanwhile, LS was negatively correlated with ICG-K (r = -0.673, P < 0.01). AST, ALT and T-Bil were positively correlated with LS, while ALB was negatively correlated with LS.
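The correlation measures named in the methods can be sketched directly. This minimal implementation (Pearson, and Spearman computed as Pearson on ranks, without tie handling) is illustrative, not the authors' statistical software.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

def spearman(x, y):
    """Spearman rank correlation: Pearson on the ranks.

    Note: this simple double-argsort ranking does not average tied
    ranks, so it is only valid for tie-free data.
    """
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))
```

Spearman is the appropriate choice for ordinal quantities such as CP scores; Pearson suits continuous pairs such as LS vs. ICG-R15.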

  4. A surface acoustic wave /SAW/ charge transfer imager

    NASA Technical Reports Server (NTRS)

    Papanicolauo, N. A.; Lin, H. C.

    1981-01-01

    An 80 MHz, 2-microsecond surface acoustic wave charge transfer device (SAW-CTD) has been fabricated in which surface acoustic waves are used to create traveling longitudinal electric fields in the silicon substrate and to replace the multiphase clocks of charge coupled devices. The traveling electric fields create potential wells which will carry along charges that may be stored in the wells; the charges may be injected into the wells by light. An optical application is proposed where the SAW-CTD structure is used in place of a conventional interline transfer design.

  5. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2000-12-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for the 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  6. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for the 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  7. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle from side-view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle are selected, and these points must be accurately tracked over at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object-space information, which is achieved by means of the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems of using side-view camera images mentioned here. PMID:22399909
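The second step reduces to displacement over elapsed time plus an image-to-object-space conversion. In the sketch below, a single `meters_per_pixel` factor is a flat-ground simplification assumed for illustration; the paper instead uses full calibration and orientation parameters for this transformation.

```python
def estimate_speed(track_prev, track_curr, meters_per_pixel, frame_dt):
    """Toy speed estimate from points tracked on two successive frames.

    track_prev/track_curr: matching lists of (x, y) pixel coordinates.
    meters_per_pixel: assumed uniform image-to-object scale (simplification).
    frame_dt: time between the two frames in seconds.
    Returns the mean speed of the tracked points in m/s.
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(track_prev, track_curr):
        pixel_disp = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(pixel_disp * meters_per_pixel / frame_dt)
    return sum(speeds) / len(speeds)
```

For instance, points displaced by 10 px between frames 40 ms apart, at 0.05 m per pixel, correspond to 12.5 m/s (45 km/h).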

  8. Segmentation of the spinous process and its acoustic shadow in vertebral ultrasound images.

    PubMed

    Berton, Florian; Cheriet, Farida; Miron, Marie-Claude; Laporte, Catherine

    2016-05-01

Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, of the extraction of several image features and the selection of the most relevant ones for discriminating the three regions. Then, using this set of features and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84%, 92% and 91% were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained for the spinous process and acoustic shadow, respectively, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of the image resolution. This suggests that the proposed method is a promising tool for the measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis.
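The pixel-wise classification step described above can be sketched with a minimal multi-class linear discriminant over synthetic feature vectors. This is only an illustration of the technique; the paper's actual feature set, probe data, and vertebra-specific regularization are not reproduced here.

```python
import numpy as np

def lda_train(features, labels):
    """Fit a multi-class linear discriminant: class means + pooled covariance."""
    classes = np.unique(labels)
    means = np.array([features[labels == c].mean(axis=0) for c in classes])
    d = features.shape[1]
    cov = np.zeros((d, d))
    for c, mu in zip(classes, means):
        diff = features[labels == c] - mu
        cov += diff.T @ diff
    cov /= len(features) - len(classes)   # pooled within-class covariance
    cov += 1e-6 * np.eye(d)               # small regularizer for stability
    return classes, means, np.linalg.inv(cov)

def lda_classify(features, classes, means, cov_inv):
    """Assign each pixel's feature vector to the class with the highest score."""
    scores = np.stack([
        features @ cov_inv @ mu - 0.5 * mu @ cov_inv @ mu
        for mu in means
    ], axis=1)
    return classes[np.argmax(scores, axis=1)]

# Toy example: 2-D features for three synthetic "regions"
# (spinous process, acoustic shadow, other tissues)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 100)
params = lda_train(X, y)
pred = lda_classify(X, *params)
print((pred == y).mean())  # high accuracy on these well-separated clusters
```

In the paper, a spatial regularization pass over the vertebra's geometry follows this pixel-wise labelling.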

  9. Matters of Light & Depth: Creating Memorable Images for Video, Film, & Stills through Lighting.

    ERIC Educational Resources Information Center

    Lowell, Ross

    Written for students, professionals with limited experience, and professionals who encounter lighting difficulties, this book encourages sensitivity to light in its myriad manifestations: it offers advice in creating memorable images for video, film, and stills through lighting. Chapters in the book are: (1) "Lights of Passage: Basic Theory and…

  10. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin F.; Summers, Chris J.

    1992-01-01

In this report we present the progress during the second six-month period of the project. This includes experimental and theoretical work on the acoustic charge transport (ACT) portion of the chip, theoretical modelling of both the avalanche photodiode (APD) and the charge transfer and overflow transistor, and the materials growth and fabrication part of the program.

  11. Video rate imaging at 1.5 THz via frequency upconversion to the near-IR

    NASA Astrophysics Data System (ADS)

    Tekavec, Patrick F.; Kozlov, Vladimir G.; McNee, Ian; Lee, Yun-Shik; Vodopyanov, Konstantin

    2015-05-01

We demonstrate video rate THz imaging in both reflection and transmission by frequency upconverting the THz image to the near-IR. In reflection, the ability to resolve images generated at different depths is shown. By mixing the THz pulses with a portion of the fiber laser pump (1064 nm) in a quasi-phase-matched gallium arsenide crystal, distinct sidebands are observed at 1058 nm and 1070 nm, corresponding to sum and difference frequency generation of the pump pulse with the THz pulse. By using a polarizer and long-pass filter, the strong pump light can be removed, leaving a nearly background-free signal at 1070 nm. We have obtained video rate images with a spatial resolution of 1 mm and a field of view ca. 20 mm in diameter without any post-processing of the data.
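The reported sideband wavelengths follow directly from sum- and difference-frequency mixing of the 1064 nm pump with the 1.5 THz field; a quick arithmetic check:

```python
# Sum/difference frequency mixing of a 1064 nm pump with a 1.5 THz wave:
# f_sideband = f_pump ± f_THz, converted back to wavelength.
c = 299_792_458.0            # speed of light, m/s
lam_pump = 1064e-9           # pump wavelength, m
f_thz = 1.5e12               # THz wave frequency, Hz

f_pump = c / lam_pump
lam_sum = c / (f_pump + f_thz) * 1e9   # sum-frequency sideband, nm
lam_diff = c / (f_pump - f_thz) * 1e9  # difference-frequency sideband, nm
print(round(lam_sum, 1), round(lam_diff, 1))  # ≈ 1058.4 and 1069.7 nm
```

These match the 1058 nm and 1070 nm sidebands quoted in the abstract.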

  12. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    PubMed Central

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

Gait is a unique biometric feature perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches. PMID:25574935
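The prototypical class energy image, the Gait Energy Image, is simply the per-pixel average of aligned binary silhouettes over a gait cycle, so that intensity encodes how often each pixel is foreground. A toy sketch (the silhouette data here is illustrative):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average of aligned binary silhouettes over one gait cycle (the GEI)."""
    stack = np.stack([s.astype(float) for s in silhouettes])
    return stack.mean(axis=0)

# Toy sequence: a 1-pixel-wide "leg" swinging across three frames
frames = [np.zeros((4, 4)) for _ in range(3)]
for i, f in enumerate(frames):
    f[:, i + 1] = 1.0
gei = gait_energy_image(frames)
print(gei[0])  # column 0 stays 0; the swept columns each average to 1/3
```

Other class energy images reviewed in the paper replace the plain mean with variants that weight motion or frequency content differently.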

  13. Method and apparatus for detecting internal structures of bulk objects using acoustic imaging

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2002-01-01

    Apparatus for producing an acoustic image of an object according to the present invention may comprise an excitation source for vibrating the object to produce at least one acoustic wave therein. The acoustic wave results in the formation of at least one surface displacement on the surface of the object. A light source produces an optical object wavefront and an optical reference wavefront and directs the optical object wavefront toward the surface of the object to produce a modulated optical object wavefront. A modulator operatively associated with the optical reference wavefront modulates the optical reference wavefront in synchronization with the acoustic wave to produce a modulated optical reference wavefront. A sensing medium positioned to receive the modulated optical object wavefront and the modulated optical reference wavefront combines the modulated optical object and reference wavefronts to produce an image related to the surface displacement on the surface of the object. A detector detects the image related to the surface displacement produced by the sensing medium. A processing system operatively associated with the detector constructs an acoustic image of interior features of the object based on the phase and amplitude of the surface displacement on the surface of the object.

  14. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
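The coarse-to-fine idea, upsample each registered frame and then take a pixel-wise median across the stack, can be sketched as follows. Nearest-neighbour upsampling and integer shifts stand in for the paper's bicubic interpolation and subpixel registration:

```python
import numpy as np

def coarse_to_fine_sr(frames, shifts, scale):
    """Median-based super-resolution sketch.

    frames : list of low-res 2-D arrays registered to a reference frame
    shifts : integer (dy, dx) offsets per frame, in high-res pixel units
    scale  : upsampling factor
    """
    upsampled = []
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest-neighbour upsampling stands in for bicubic interpolation
        hi = np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)
        # Apply the registration shift (wrap-around for simplicity)
        upsampled.append(np.roll(hi, (dy, dx), axis=(0, 1)))
    # Pixel-wise median across the coarsely super-resolved stack is
    # robust to outliers in any single frame
    return np.median(np.stack(upsampled), axis=0)

lo = np.arange(16, dtype=float).reshape(4, 4)
# Third frame is a gross outlier; the median suppresses it entirely
result = coarse_to_fine_sr([lo, lo, lo + 100.0], [(0, 0), (0, 0), (0, 0)], 2)
print(result.shape)  # (8, 8)
```

The noniterative median step is what gives the algorithm its efficiency advantage over iterative reconstruction methods.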

  15. Uncontact Certification Using Video Hand Image by Morphology Analysis

    NASA Astrophysics Data System (ADS)

    Moritani, Motoki; Saitoh, Fumihiko

    This paper proposes a non-contacting certification system by using morphological analysis of contiguous hand images to access security control. The non-contacting hand image certification system is more effective than contacting system where psychological resistance and conformability are required. The morphology is applied to get useful individual characteristic even if the pose of a hand is changed. The experimental results show the more accuracy to certificate individuals was obtained by using contiguous frames compared to conventional method.

  16. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  17. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the scene as a point cloud. It is widely used by transportation agencies to survey street views and roadside transportation infrastructure, such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of overhead and roadside traffic signs can be obtained according to the traffic sign setup specifications of different transportation agencies. The 3D candidate planes of traffic signs are then fitted to those points using RANSAC plane-fitting. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically from the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show that the proposed algorithm enhances the rate of detecting traffic signs.
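The RANSAC plane-fitting step used to find candidate sign planes in the Lidar points can be sketched as follows; the tolerance and the synthetic point cloud are illustrative, not the paper's parameters:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Fit a plane (normal . p + d = 0, |normal| = 1) to 3-D points with RANSAC."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic "sign plane" z = 1 plus scattered outlier points
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 1, 300), rng.uniform(0, 1, 300), np.ones(300)])
noise = rng.uniform(0, 5, size=(60, 3))
normal, d, inliers = ransac_plane(np.vstack([plane, noise]))
print(inliers[:300].mean())  # nearly all plane points recovered as inliers
```

In the pipeline above, each fitted plane is then projected through the camera model to produce an image-space ROI.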

  18. NOTE: Acoustical properties of selected tissue phantom materials for ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Zell, K.; Sperl, J. I.; Vogel, M. W.; Niessner, R.; Haisch, C.

    2007-10-01

This note summarizes the characterization of the acoustic properties of four materials intended for the development of tissue, and especially breast tissue, phantoms for use in photoacoustic and ultrasound imaging. The materials are agar, silicone, polyvinyl alcohol gel (PVA) and polyacrylamide gel (PAA). The acoustic properties, i.e., the speed of sound, impedance and acoustic attenuation, are determined by transmission measurements of sound waves at room temperature under controlled conditions. Although the materials are tested for applications such as photoacoustic phantoms, we focus here on the acoustic properties, while the optical properties will be discussed elsewhere. To obtain the acoustic attenuation in a frequency range from 4 MHz to 14 MHz, two ultrasound sources of 5 MHz and 10 MHz core frequencies are used. For preparation, each sample is cast into blocks of three different thicknesses. Agar, PVA and PAA show acoustic properties similar to those of water. Silicone exhibits a significantly lower speed of sound and higher acoustic attenuation than water and human tissue. All materials can be cast into arbitrary shapes and are suitable for tissue-mimicking phantoms. Due to its lower speed of sound, silicone is generally less suitable than the other presented materials.
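Transmission measurements of this kind reduce to two simple formulas: speed of sound from the arrival-time change when the sample replaces an equal water path (the substitution method), and attenuation from the extra loss between two sample thicknesses. A sketch with hypothetical numbers, not values from the note:

```python
import math

# Substitution method: the sample replaces a water path of equal length,
# and the change in arrival time gives its speed of sound.
c_water = 1482.0      # m/s at ~20 °C
d = 0.02              # sample thickness, m
dt = -1.0e-7          # arrival-time change vs. water-only path, s (negative = earlier)
c_sample = d / (d / c_water + dt)
print(round(c_sample, 1))  # ≈ 1493.1 m/s, slightly faster than water

# Attenuation from two thicknesses of the same material: extra loss
# divided by the extra path length, in dB/cm.
d1, d2 = 0.01, 0.03   # m
a1, a2 = 0.50, 0.18   # received amplitudes (arbitrary units)
alpha_db_per_cm = 20 * math.log10(a1 / a2) / ((d2 - d1) * 100)
print(round(alpha_db_per_cm, 2))  # ≈ 4.44 dB/cm
```

Casting each material in three thicknesses, as the note describes, lets the thickness-difference method cancel transducer and coupling losses.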

  19. Video-rate molecular imaging in vivo with stimulated Raman scattering.

    PubMed

    Saar, Brian G; Freudiger, Christian W; Reichman, Jay; Stanley, C Michael; Holtom, Gary R; Xie, X Sunney

    2010-12-01

    Optical imaging in vivo with molecular specificity is important in biomedicine because of its high spatial resolution and sensitivity compared with magnetic resonance imaging. Stimulated Raman scattering (SRS) microscopy allows highly sensitive optical imaging based on vibrational spectroscopy without adding toxic or perturbative labels. However, SRS imaging in living animals and humans has not been feasible because light cannot be collected through thick tissues, and motion-blur arises from slow imaging based on backscattered light. In this work, we enable in vivo SRS imaging by substantially enhancing the collection of the backscattered signal and increasing the imaging speed by three orders of magnitude to video rate. This approach allows label-free in vivo imaging of water, lipid, and protein in skin and mapping of penetration pathways of topically applied drugs in mice and humans.

  20. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  1. Imaging Defects in Thin DLC Coatings Using High Frequency Scanning Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Fei, Dong; Rebinsky, Douglas A.; Zinin, Pavel; Koehler, Bernd

    2004-02-01

    In this work high frequency scanning acoustic microscopy was employed to nondestructively characterize subsurface defects in chromium containing DLC (Cr-DLC) coatings. Subsurface defects as small as one micron were successfully detected in a flat Cr-DLC coated steel coupon. Depth of the imaged subsurface defects was estimated using a simple geometrical acoustics model. The nature of the subsurface defects was investigated by using FIB/SEM technique. Curved Cr-DLC coated components including a roller and gear tooth were also imaged, and the encountered challenges were addressed.

  2. Apparatus for real-time acoustic imaging of Rayleigh-Benard convection.

    PubMed

    Kuehn, Kerry; Polfer, Jonathan; Furno, Joanna; Finke, Nathan

    2007-11-01

    We have designed and built an apparatus for real-time acoustic imaging of convective flow patterns in optically opaque fluids. This apparatus takes advantage of recent advances in two-dimensional ultrasound transducer array technology; it employs a modified version of a commercially available ultrasound camera, similar to those employed in nondestructive testing of solids. Images of convection patterns are generated by observing the lateral variation of the temperature dependent speed of sound via refraction of acoustic plane waves passing vertically through the fluid layer. The apparatus has been validated by observing convection rolls in both silicone oil and ferrofluid. PMID:18052477

  3. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

Photographs show the fiber-optic light source, the microscope and charge-coupled device (CCD) camera head connected to the camera body, the CCD camera body feeding data to an image acquisition board in a PC, and a Cartesian robot controlled via a PC board. The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for use in colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.
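An automatic scan-and-focus loop of the kind CMIS performs typically ranks a focus stack by a sharpness metric and keeps the sharpest frame. A minimal sketch; the variance-of-Laplacian metric and the toy blur here are common illustrative choices, not necessarily what CMIS uses:

```python
import numpy as np

def focus_measure(img):
    """Variance of a discrete Laplacian: higher means sharper focus."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def autofocus(stack):
    """Pick the index of the sharpest image in a focus stack."""
    return int(np.argmax([focus_measure(img) for img in stack]))

# Toy focus stack: a sharp checkerboard vs. progressively blurred copies
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 1.0

def blur(img):
    """3x3 box blur on the interior (a stand-in for defocus); 32x32 input."""
    out = img.copy()
    out[1:-1, 1:-1] = sum(img[1 + dy:31 + dy, 1 + dx:31 + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9
    return out

stack = [blur(blur(sharp)), blur(sharp), sharp]
print(autofocus(stack))  # index of the sharpest frame, here 2
```

In a real autofocus scan the stack would come from stepping the stage or objective and re-imaging at each position.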

  4. Video-image-based neural network guidance system with adaptive view-angles for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Luebbers, Paul G.; Pandya, Abhijit S.

    1991-08-01

    This paper describes the guidance function of an autonomous vehicle based on a neural network controller using video images with adaptive view angles for sensory input. The guidance function for an autonomous vehicle provides the low-level control required for maintaining the autonomous vehicle on a prescribed trajectory. Neural networks possess unique properties such as the ability to perform sensor fusion, the ability to learn, and fault tolerant architectures, qualities which are desirable for autonomous vehicle applications. To demonstrate the feasibility of using neural networks in this type of an application, an Intelledex 405 robot fitted with a video camera and vision system was used to model an autonomous vehicle with a limited range of motion. In addition to fixed-angle video images, a set of images using adaptively varied view angles based on speed are used as the input to the neural network controller. It was shown that the neural network was able to control the autonomous vehicle model along a path composed of path segments unlike the exemplars with which it was trained. This system was designed to assess only the guidance system, and it was assumed that other functions employed in autonomous vehicle control systems (mission planning, navigation, and obstacle avoidance) are to be implemented separately and are providing a desired path to the guidance system. The desired path trajectory is presented to the robot in the form of a two-dimensional path, with centerline, that is to be followed. A video camera and associated vision system provides video image data as control feedback to the guidance system. The neural network controller uses Gaussian curves for the output vector to facilitate interpolation and generalization of the output space.

  5. Standoff passive video imaging at 350 GHz with 251 superconducting detectors

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Gentry, Cale; Smirnov, Ilya; Ade, Peter; Beall, James; Cho, Hsiao-Mei; Dicker, Simon; Duncan, William; Halpern, Mark; Hilton, Gene; Irwin, Kent; Li, Dale; Paulter, Nicholas; Reintsema, Carl; Schwall, Robert; Tucker, Carole

    2014-06-01

Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bomb belts and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) detectors makes them ideal for passive imaging of thermal signals at these wavelengths. We have built a 350 GHz video-rate imaging system using a large-format array of feedhorn-coupled TES bolometers. The system operates at a standoff distance of 16 m to 28 m with a spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector subarray, and will be expanded to contain four subarrays for a total of 1004 detectors. The system has been used to take video images which reveal the presence of weapons concealed beneath a shirt in an indoor setting. We present a summary of this work.
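As a rough sanity check, the Rayleigh diffraction limit relates the reported resolution to the size of the collecting optics. Solving for the aperture implied by 1.4 cm at 17 m (the actual system aperture is not stated in this abstract, so the result is only indicative):

```python
# Rayleigh criterion: delta_x ≈ 1.22 * lambda * R / D, solved for D.
c = 299_792_458.0
f = 350e9                  # imaging frequency, Hz
lam = c / f                # wavelength, m (≈ 0.86 mm)
R = 17.0                   # standoff distance, m
delta_x = 0.014            # reported spatial resolution, m
D = 1.22 * lam * R / delta_x
print(round(lam * 1e3, 2), round(D, 2))  # wavelength in mm, implied aperture in m
```

The implied aperture is on the order of a meter, consistent with the large reflective optics such standoff millimeter-wave imagers generally require.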

  6. Unified imaging theory for x-ray and acoustic computerized tomography

    NASA Astrophysics Data System (ADS)

    Liu, Pingyu; Wang, Ge; Boyer, Arthur

    2004-10-01

X-ray computerized tomography (CT) and acoustic CT are two main medical imaging modalities based on two intrinsically different physical phenomena. X-ray CT is based on the attenuation of x-rays as they pass through a medium. It is well known that the Radon transform is the imaging theory for x-ray CT. Photoacoustic CT is a type of acoustic CT based on differences in electromagnetic energy absorption among media. In 1998 a new 3D reconstruction concept, the P-transform, was proposed to serve as the imaging theory for photoacoustic CT. In this paper it is rigorously proved that both x-ray CT and photoacoustic CT are governed by a unified imaging theory. 3D data acquisition can be completed within a 2π solid angle. This new imaging theory realizes, in part, the dream of physicists, including Albert Einstein, who have long believed that our world is ultimately governed by a few simple rules.
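The Radon transform underlying x-ray CT is just a set of line integrals of the image at different angles; one column of the sinogram can be sketched with a nearest-neighbour accumulation (a simplification of proper interpolated projection):

```python
import numpy as np

def radon_projection(img, theta_deg, n_bins=None):
    """One parallel-beam projection: line integrals of img at angle theta.

    Each pixel is accumulated into the detector bin given by its
    detector-axis coordinate -- a nearest-neighbour Radon sketch.
    """
    n = img.shape[0]
    n_bins = n_bins or n
    theta = np.radians(theta_deg)
    ys, xs = np.mgrid[:n, :n] - (n - 1) / 2.0
    # Detector-axis coordinate of each pixel centre
    t = xs * np.cos(theta) + ys * np.sin(theta)
    bins = np.clip(np.round(t + (n_bins - 1) / 2.0).astype(int), 0, n_bins - 1)
    proj = np.zeros(n_bins)
    np.add.at(proj, bins.ravel(), img.ravel())
    return proj

img = np.zeros((8, 8))
img[3, 2] = 1.0                  # single bright pixel at row 3, column 2
p0 = radon_projection(img, 0)    # projects onto the x axis
p90 = radon_projection(img, 90)  # projects onto the y axis
print(np.argmax(p0), np.argmax(p90))  # peaks at the pixel's column and row
```

Sweeping theta over 0 to 180 degrees and stacking the projections yields the full sinogram that CT reconstruction inverts.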

  7. The integrated platform of controlling and digital video processing for underwater range-gated laser imaging system

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Qiu, Su; Jin, Wei-qi; Yu, Bing; Li, Li; Tian, Dong-kang

    2015-04-01

Laser range-gated imaging is one of the most effective techniques for underwater optical imaging. Combined with video image processing, it can extend the viewing distance by a factor of 4 to 7. Accordingly, control and image processing are the key technologies for an underwater laser range-gated imaging system. This article introduces an FPGA-based integrated platform for controlling and digital video processing in an underwater range-gated laser imaging system. It serves both as the lower computer handling communication with the remote control system and as a high-speed platform for image grabbing and video enhancement processing. The host computer sends commands to the FPGA, directing the underwater range-gated laser imaging system to execute operations.

  8. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications

    PubMed Central

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-01-01

    Hyperspectral imaging has proven significance in bio-imaging applications and it has the ability to capture up to several hundred images of different wavelengths offering relevant spectral signatures. To use hyperspectral imaging for in vivo monitoring and diagnosis of the internal body cavities, a snapshot hyperspectral video-endoscope is required. However, such reported systems provide only about 50 wavelengths. We have developed a four-dimensional snapshot hyperspectral video-endoscope with a spectral range of 400–1000 nm, which can detect 756 wavelengths for imaging, significantly more than such systems. Capturing the three-dimensional datacube sequentially gives the fourth dimension. All these are achieved through a flexible two-dimensional to one-dimensional fiber bundle. The potential of this custom designed and fabricated compact biomedical probe is demonstrated by imaging phantom tissue samples in reflectance and fluorescence imaging modalities. It is envisaged that this novel concept and developed probe will contribute significantly towards diagnostic in vivo biomedical imaging in the near future. PMID:27044607

  9. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications.

    PubMed

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-01-01

    Hyperspectral imaging has proven significance in bio-imaging applications and it has the ability to capture up to several hundred images of different wavelengths offering relevant spectral signatures. To use hyperspectral imaging for in vivo monitoring and diagnosis of the internal body cavities, a snapshot hyperspectral video-endoscope is required. However, such reported systems provide only about 50 wavelengths. We have developed a four-dimensional snapshot hyperspectral video-endoscope with a spectral range of 400-1000 nm, which can detect 756 wavelengths for imaging, significantly more than such systems. Capturing the three-dimensional datacube sequentially gives the fourth dimension. All these are achieved through a flexible two-dimensional to one-dimensional fiber bundle. The potential of this custom designed and fabricated compact biomedical probe is demonstrated by imaging phantom tissue samples in reflectance and fluorescence imaging modalities. It is envisaged that this novel concept and developed probe will contribute significantly towards diagnostic in vivo biomedical imaging in the near future. PMID:27044607

  10. Hydroacoustic transmission of video image for the control of a Remotely Operated undersea Vehicle (ROV)

    NASA Astrophysics Data System (ADS)

    Koskinen, Kari; Kohola, Pekka; Murtoviita, Esko; Leppaenen, Juha; Typpi, Vaeinoe

    1991-04-01

    A concept for enabling cableless transmission of live video images from a remotely operated undersea vehicle to a support vessel was developed and studied. This concept is based on the utilization of a hydroacoustic link for transmission and image preprocessing, compression and enhancement methods required by the bandwidth limitation of the link. The image processing methods were developed and tested in the laboratory by using videotape recorded picture material from real underwater operations. A feasibility assessment was made based on human judgement of the resulting live picture sequences. Realization of a realtime system was considered and discussed.

  11. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This video image is of the STS-2 Columbia, Orbiter Vehicle (OV) 102, payload bay (PLB) showing the Office of Space Terrestrial Applications 1 (OSTA-1) pallet (Shuttle Imaging Radar A (SIR-A) antenna (left) and SIR-A recorder, Shuttle Multispectral Infrared Radiometer (SMIRR), Feature Identification Location Experiment (FILE), Measurement of Air Pollution for Satellites (MAPS) (right)). The image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). It is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  12. Analysis of physiological responses associated with emotional changes induced by viewing video images of dental treatments.

    PubMed

    Sekiya, Taki; Miwa, Zenzo; Tsuchihashi, Natsumi; Uehara, Naoko; Sugimoto, Kumiko

    2015-01-01

Since an understanding of the emotional changes induced by dental treatments is important for dentists to provide safe and comfortable dental treatment, we analyzed physiological responses during the viewing of video images of dental treatments to search for appropriate objective indices reflecting emotional changes. Fifteen healthy young adult subjects voluntarily participated in the present study. Electrocardiogram (ECG), electroencephalogram (EEG) and corrugator muscle electromyogram (EMG) were recorded, and changes induced in them by viewing videos of dental treatments were analyzed. The subjective discomfort level was acquired by the Visual Analog Scale method. Analyses of autonomic nervous activities from the ECG and four emotional factors (anger/stress, joy/satisfaction, sadness/depression and relaxation) from the EEG demonstrated that increases in sympathetic nervous activity, reflecting increased stress, and decreases in relaxation level were induced by the videos of infiltration anesthesia and cavity excavation, but not intraoral examination. The corrugator muscle activity was increased by all three images regardless of video content. The subjective discomfort during watching infiltration anesthesia and cavity excavation was higher than during intraoral examination, showing that sympathetic activities and the relaxation factor of emotion changed in a manner consistent with subjective emotional changes. These results suggest that measurement of autonomic nervous activities estimated from the ECG and emotional factors analyzed from the EEG is useful for objective evaluation of subjective emotion. PMID:26111531

  13. Acquiring a dataset of labeled video images showing discomfort in demented elderly.

    PubMed

    Bonroy, Bert; Schiepers, Pieter; Leysens, Greet; Miljkovic, Dragana; Wils, Maartje; De Maesschalck, Lieven; Quanten, Stijn; Triau, Eric; Exadaktylos, Vasileios; Berckmans, Daniel; Vanrumste, Bart

    2009-05-01

    One of the effects of late-stage dementia is the loss of the ability to communicate verbally. Patients become unable to call for help if they feel uncomfortable. The first objective of this article was to record facial expressions of bedridden demented elderly. For this purpose, we developed a video acquisition system (ViAS) that records synchronized video coming from two cameras. Each camera delivers uncompressed color images of 1,024 x 768 pixels, up to 30 frames per second. It is the first time that such a system has been placed in a patient's room. The second objective was to simultaneously label these video recordings with respect to discomfort expressions of the patients. Therefore, we developed a Digital Discomfort Labeling Tool (DDLT). This tool provides an easy-to-use software representation on a tablet PC of validated "paper" discomfort scales. With ViAS and DDLT, 80 different datasets were obtained of about 15 minutes of recordings. Approximately 80% of the recorded datasets delivered the labeled video recordings. The remainder were not usable due to under- or overexposed images and due to the patients being out of view as the system was not properly replaced after care. In one of 6 observed patients, nurses recognized a higher discomfort level that would not have been observed without the DDLT.

  14. Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties.

    PubMed

    Vogt, William C; Jia, Congxian; Wear, Keith A; Garra, Brian S; Joshua Pfefer, T

    2016-10-01

    Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison.

  16. Video-rate scanning two-photon excitation fluorescence microscopy and ratio imaging with cameleons.

    PubMed

    Fan, G Y; Fujisaki, H; Miyawaki, A; Tsay, R K; Tsien, R Y; Ellisman, M H

    1999-05-01

    A video-rate (30 frames/s) scanning two-photon excitation microscope has been successfully tested. The microscope, based on a Nikon RCM 8000, incorporates a femtosecond pulsed laser with wavelength tunable from 690 to 1050 nm, prechirper optics for laser pulse-width compression, a resonant galvanometer for video-rate point scanning, and a pair of nonconfocal detectors for fast emission ratioing. An increase in fluorescent emission of 1.75-fold is consistently obtained with the use of the prechirper optics. The nonconfocal detectors provide another 2.25-fold increase in detection efficiency. Ratio imaging and optical sectioning can therefore be performed more efficiently without confocal optics. Faster frame rates, at 60, 120, and 240 frames/s, can be achieved with proportionally reduced scan lines per frame. Useful two-photon images can be acquired at video rate with a laser power as low as 2.7 mW at the specimen with the genetically modified green fluorescent proteins. Preliminary results obtained using this system confirm that the yellow "cameleons" exhibit optical properties similar to those under one-photon excitation conditions. Dynamic two-photon images of cardiac myocytes and ratio images of yellow cameleon-2.1, -3.1, and -3.1nu are also presented. PMID:10233058
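    The fast emission ratioing described above reduces, per pixel, to dividing the two detector channels while guarding against noise-dominated denominators. A minimal, hypothetical sketch (the `floor` cutoff is an assumed noise threshold, not a value from the paper):

```python
def ratio_image(ch_num, ch_den, floor=10):
    """Pixel-wise emission ratio of two detector channels.

    Pixels whose denominator falls below `floor` (assumed detector noise
    level) are set to 0 rather than producing unstable ratios.
    """
    rows = []
    for row_a, row_b in zip(ch_num, ch_den):
        rows.append([a / b if b >= floor else 0.0
                     for a, b in zip(row_a, row_b)])
    return rows

img = ratio_image([[200, 5]], [[100, 2]])  # → [[2.0, 0.0]]
```

In a real instrument this division would run on the two nonconfocal detector streams frame by frame.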

  18. Quantitative assessment of properties of make-up products by video imaging: application to lipsticks.

    PubMed

    Korichi, Rodolphe; Provost, Robin; Heusèle, Catherine; Schnebert, Sylvianne

    2000-11-01

    BACKGROUND/AIMS: The different properties and visual effects of lipstick have been studied by image analysis directly on volunteers. METHODS: After controlling the volunteer's position mechanically using an ophthalmic table and visually using an acquisition mask, which provides a luminance indicator and guide marks, we acquired video colour images of the make-up area. From these images, we quantified the colour, gloss, covering power, long-lasting effect and streakiness using dedicated software. RESULTS/CONCLUSION: Quantitative colorimetric assessment requires the transformation of the RGB components obtained by a video colour camera into CIELAB colorimetric space. The expression of each coordinate of the L*a*b* space as a function of R, G, B was carried out by a statistical method of polynomial approximations. A study using 24 colour images extracted from a Pantone(R) palette showed a very good correlation with a Minolta Colorimeter(R) CR 300. The colour assessment on volunteers required a segmentation method based on maximizing the entropy. The aim was to separate the colour information reflected by the skin from that of the make-up area. It was very useful for precisely delimiting the contour between the skin and the product when the colours were almost identical, and for evaluating streakiness. From this colour segmentation, an algorithm was developed to search for the shades most represented in the overall colour of the make-up area. The capacity to replicate what the consumer perceives of the make-up product, the ability to carry out studies without any contact with the skin surface, and the constant improvement of software and video acquisition systems all make video imaging a very useful tool in the quantitative assessment of the properties and visual effects of a make-up product. PMID:11428961
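    The RGB-to-CIELAB transformation via "a statistical method of polynomial approximations" can be sketched as an ordinary least-squares fit of a polynomial design matrix, calibrated against reference colorimeter readings. The exact basis and degree used in the study are not stated, so the second-degree basis below is an assumption:

```python
import numpy as np

def _design_matrix(rgb, degree):
    """Polynomial feature columns built from R, G, B channels."""
    r, g, b = rgb.T
    cols = [np.ones_like(r), r, g, b]
    if degree >= 2:
        cols += [r * r, g * g, b * b, r * g, r * b, g * b]
    return np.column_stack(cols)

def fit_rgb_to_lab(rgb, lab, degree=2):
    """Least-squares polynomial mapping from device RGB to L*a*b*.

    rgb -- (N, 3) camera responses for calibration patches
    lab -- (N, 3) reference colorimeter values for the same patches
    """
    X = _design_matrix(np.asarray(rgb, float), degree)
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(lab, float), rcond=None)
    return coeffs

def apply_fit(coeffs, rgb, degree=2):
    """Map new RGB pixels through the fitted polynomial."""
    return _design_matrix(np.asarray(rgb, float), degree) @ coeffs
```

With a calibration chart such as the 24-patch palette mentioned above, `fit_rgb_to_lab` plays the role of the paper's statistical approximation step.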

  20. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine whether video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters
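    For orientation, the compression figures quoted above relate raw frame size to compressed size and delivered bitrate as in this small sketch (generic arithmetic, not the study's tooling):

```python
def compression_ratio(width, height, bits_per_pixel, compressed_bytes):
    """Ratio of raw frame size to compressed size (e.g. 24-bit color)."""
    raw_bits = width * height * bits_per_pixel
    return raw_bits / (compressed_bytes * 8)

def bitrate_kbps(compressed_bytes_per_frame, fps):
    """Delivered bitrate in kilobits per second for motion sequences."""
    return compressed_bytes_per_frame * 8 * fps / 1000.0

# A 640x480, 24-bit frame compressed to 92,160 bytes is 10:1.
r = compression_ratio(640, 480, 24, 92_160)  # → 10.0
```

For example, 3,200 compressed bytes per frame at 30 frames/s corresponds to the 768 kb/s acceptability threshold reported for the ICT motion sequences (the 640x480 frame size here is an illustrative assumption, not from the paper).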

  2. An echolocation model for the restoration of an acoustic image from a single-emission echo

    NASA Astrophysics Data System (ADS)

    Matsuo, Ikuo; Yano, Masafumi

    2004-12-01

    Bats can form a fine acoustic image of an object using frequency-modulated echolocation sounds. The acoustic image is an impulse response, known as the reflected-intensity distribution, which is composed of amplitude and phase spectra over a range of frequencies. However, bats detect only the amplitude spectrum, due to the low time resolution of their peripheral auditory system, and the frequency range of the emission is restricted. It is therefore necessary to restore the acoustic image from limited information. The amplitude spectrum varies with changes in the configuration of the reflected-intensity distribution, while the phase spectrum varies with changes in both its configuration and its location. Here, by introducing some reasonable constraints, a method is proposed for restoring an acoustic image from a single-emission echo. The configuration is extrapolated from the amplitude spectrum of the restricted frequency range by using the continuity condition of the amplitude spectrum at the minimum frequency of the emission and the minimum phase condition. Determining the location requires extracting the amplitude spectra that vary with location. For this purpose, Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates were used. The location is estimated from the temporal changes of the amplitude spectra.
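    The "minimum phase condition" invoked above is what lets a phase spectrum be reconstructed from an amplitude spectrum alone. A standard way to do this (a sketch of the textbook cepstral construction, not the authors' implementation) is:

```python
import numpy as np

def minimum_phase_from_amplitude(amp):
    """Reconstruct the minimum-phase spectrum consistent with a given
    amplitude spectrum, via the real cepstrum (Hilbert-transform relation).

    amp -- full-length N-point amplitude spectrum, amp[k] = |H(k)|, N even.
    Returns the complex spectrum |H| * exp(i * phi_min).
    """
    amp = np.asarray(amp, float)
    n = amp.size
    cep = np.fft.ifft(np.log(amp)).real   # real cepstrum of log-magnitude
    w = np.zeros(n)
    w[0] = 1.0
    w[n // 2] = 1.0
    w[1:n // 2] = 2.0                     # fold the anti-causal part forward
    return np.exp(np.fft.fft(w * cep))
```

Applied to the echo's measured amplitude spectrum, the inverse FFT of the returned spectrum gives a minimum-phase estimate of the impulse response (the reflected-intensity distribution up to its unknown location).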

  3. Viral video: Live imaging of virus-host encounters

    NASA Astrophysics Data System (ADS)

    Son, Kwangmin; Guasto, Jeffrey S.; Cubillos-Ruiz, Andres; Chisholm, Sallie W.; Sullivan, Matthew B.; Stocker, Roman

    2014-11-01

    Viruses are non-motile infectious agents that rely on Brownian motion to encounter and subsequently adsorb to their hosts. Paradoxically, the viral adsorption rate is often reported to be larger than the theoretical limit imposed by the virus-host encounter rate, highlighting a major gap in the experimental quantification of virus-host interactions. Here we present the first direct quantification of the viral adsorption rate, obtained using live imaging of individual host cells and viruses for thousands of encounter events. The host-virus pair consisted of Prochlorococcus MED4, an 800-nm non-motile bacterium that dominates photosynthesis in the oceans, and its virus PHM-2, a myovirus that has an 80-nm icosahedral capsid and a 200-nm-long rigid tail. We simultaneously imaged hosts and viruses moving by Brownian motion using two-channel epifluorescence microscopy in a microfluidic device. This detailed quantification of viral transport yielded a 20-fold smaller adsorption efficiency than previously reported, indicating the need for a major revision of infection models for marine, and likely other, ecosystems.
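    The "theoretical limit imposed by the virus-host encounter rate" is conventionally the Smoluchowski diffusion-limited rate, with particle diffusivities from the Stokes-Einstein relation. A sketch with an assumed seawater-like viscosity (1.0e-3 Pa s at ~20 C) and illustrative radii; none of these numerical choices come from the abstract itself:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_diffusivity(radius_m, temp_k=293.0, viscosity=1.0e-3):
    """Brownian diffusion coefficient of a sphere (m^2/s)."""
    return K_B * temp_k / (6.0 * math.pi * viscosity * radius_m)

def smoluchowski_rate(r_host, r_virus, temp_k=293.0, viscosity=1.0e-3):
    """Diffusion-limited host-virus encounter rate constant (m^3/s):
    k = 4 * pi * (D_host + D_virus) * (r_host + r_virus)."""
    d = (stokes_einstein_diffusivity(r_host, temp_k, viscosity)
         + stokes_einstein_diffusivity(r_virus, temp_k, viscosity))
    return 4.0 * math.pi * d * (r_host + r_virus)

# e.g. a 400 nm host radius and a 50 nm virus radius (illustrative)
k_enc = smoluchowski_rate(400e-9, 50e-9)
```

Multiplying `k_enc` by the ambient virus concentration gives the maximum possible per-cell encounter frequency against which measured adsorption rates are compared.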

  4. Selective magnetic resonance imaging of magnetic nanoparticles by Acoustically Induced Rotary Saturation (AIRS)

    PubMed Central

    Zhu, Bo; Witzel, Thomas; Jiang, Shan; Huang, Susie Y.; Rosen, Bruce R.; Wald, Lawrence L.

    2016-01-01

    Purpose We introduce a new method to selectively detect iron oxide contrast agents using an acoustic wave to perturb the spin-locked water signal in the vicinity of the magnetic particles. The acoustic drive can be externally modulated to turn the effect on and off, allowing sensitive and quantitative statistical comparison and removal of confounding image background variations. Methods We demonstrate the effect in spin-locking experiments using piezoelectric actuators to generate vibrational displacements of iron oxide samples. We observe a resonant behavior of the signal changes with respect to the acoustic frequency where iron oxide is present. We characterize the effect as a function of actuator displacement and contrast agent concentration. Results The resonant effect allows us to generate block-design “modulation response maps” indicating the contrast agent’s location, as well as positive contrast images with suppressed background signal. We show the AIRS effect stays approximately constant across acoustic frequency, and behaves monotonically over actuator displacement and contrast agent concentration. Conclusion AIRS is a promising method capable of using acoustic vibrations to modulate the contrast from iron oxide nanoparticles and thus perform selective detection of the contrast agents, potentially enabling more accurate visualization of contrast agents in clinical and research settings. PMID:25537578

  5. Precise color images by a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High speed imaging systems have been used across a wide range of science and engineering. Although high speed camera systems have improved greatly, most of their applications only produce high speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas and molten materials. Recent digital high speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels, with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to obtain precise color images from this camera system, we needed to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
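    Inter-sensor displacement estimation of this kind can be sketched with phase correlation; the version below recovers only integer shifts (the sub-pixel accuracy reported above, 0.2 pixels, would additionally require interpolating the correlation peak). This is a generic technique, not necessarily the authors' method:

```python
import numpy as np

def integer_shift(img_a, img_b):
    """Estimate the integer (row, col) shift of img_b relative to img_a
    by phase correlation on the normalized cross-power spectrum."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12      # keep phase, discard amplitude
    corr = np.fft.ifft2(cross).real
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    rows, cols = corr.shape
    if dr > rows // 2:                  # map wrapped indices to signed shifts
        dr -= rows
    if dc > cols // 2:
        dc -= cols
    return int(dr), int(dc)
```

Once the shift between two sensors' images is known, one channel is resampled by that offset before the three channels are fused into a color frame.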

  6. MO-A-BRD-06: In Vivo Cherenkov Video Imaging to Verify Whole Breast Irradiation Treatment

    SciTech Connect

    Zhang, R; Glaser, A; Jarvis, L; Gladstone, D; Andreozzi, J; Hitchcock, W; Pogue, B

    2014-06-15

    Purpose: To show that in vivo video imaging of Cherenkov emission (Cherenkoscopy) can be acquired in the clinical treatment room without affecting the normal process of external beam radiation therapy (EBRT). Applications of Cherenkoscopy, such as patient positioning, movement tracking, treatment monitoring and superficial dose estimation, were examined. Methods: In a phase 1 clinical trial, including 12 patients undergoing post-lumpectomy whole breast irradiation, Cherenkov emission was imaged with a time-gated ICCD camera synchronized to the radiation pulses during 10 fractions of the treatment. Images from different treatment days were compared by calculating the 2-D correlations with respect to the averaged image. An edge detection algorithm was utilized to highlight biological features, such as the blood vessels. Superficial doses deposited at the sampling depth were derived from the Eclipse treatment planning system (TPS) and compared with the Cherenkov images. Skin reactions were graded weekly according to the Common Toxicity Criteria, and digital photographs were obtained for comparison. Results: Real-time (fps = 4.8) imaging of Cherenkov emission was feasible, and feasibility tests indicated that it could be improved to video rate (fps = 30) with system improvements. Dynamic field changes due to fast MLC motion were imaged in real time. The average 2-D correlation was about 0.99, suggesting that the stability of this imaging technique and the repeatability of patient positioning were outstanding. Edge-enhanced images of blood vessels were observed and could serve as unique biological markers for patient positioning and movement tracking (breathing). Small discrepancies exist between the Cherenkov images and the superficial dose predicted from the TPS, but the former agreed better with actual skin reactions than did the latter. Conclusion: Real-time Cherenkoscopy imaging during EBRT is a novel imaging tool that could be utilized for patient positioning, movement tracking
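    The day-to-day image comparison described above reduces to a Pearson 2-D correlation coefficient between each day's image and the average image; a minimal sketch, assuming same-size grayscale arrays:

```python
import numpy as np

def corr2d(img_a, img_b):
    """Pearson 2-D correlation coefficient between two same-size images,
    of the kind used to compare Cherenkov images across treatment days."""
    a = np.asarray(img_a, float).ravel()
    b = np.asarray(img_b, float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

Values near 1 (the ~0.99 reported above) indicate that successive fractions imaged an essentially identical field and patient pose.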

  7. Method and system to synchronize acoustic therapy with ultrasound imaging

    NASA Technical Reports Server (NTRS)

    Owen, Neil (Inventor); Bailey, Michael R. (Inventor); Hossack, James (Inventor)

    2009-01-01

    Interference in ultrasound imaging when used in connection with high intensity focused ultrasound (HIFU) is avoided by employing a synchronization signal to control the HIFU signal. Unless the timing of the HIFU transducer is controlled, its output will substantially overwhelm the signal produced by the ultrasound imaging system and obscure the image it produces. The synchronization signal employed to control the HIFU transducer is obtained without requiring modification of the ultrasound imaging system. Signals corresponding to scattered ultrasound imaging waves are collected using either the HIFU transducer or a dedicated receiver. A synchronization processor manipulates the scattered ultrasound imaging signals to derive the synchronization signal, which is then used to control the HIFU bursts so as to substantially reduce or eliminate HIFU interference in the ultrasound image. The synchronization processor can alternatively be implemented using a computing device or an application-specific circuit.
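    Although the patent derives its synchronization signal from the scattered imaging waves themselves, the underlying interleaving idea can be illustrated with a simple schedule that confines HIFU bursts to the idle portion of each imaging frame. All names and timing values below are illustrative assumptions, not the patented method:

```python
def hifu_burst_windows(frame_period_ms, acquisition_ms, n_frames):
    """Schedule HIFU bursts in the idle gap of each imaging frame.

    frame_period_ms -- time between imaging frame triggers
    acquisition_ms  -- portion of each period the imager is receiving echoes
    Returns (start_ms, stop_ms) windows during which HIFU may fire
    without corrupting the image.
    """
    windows = []
    for i in range(n_frames):
        t0 = i * frame_period_ms
        windows.append((t0 + acquisition_ms, t0 + frame_period_ms))
    return windows

# 30 Hz imaging (33 ms period) with 20 ms of echo acquisition per frame
# leaves a 13 ms window per frame for therapy bursts.
```

The patented system, by contrast, infers `t0` for each frame from the scattered imaging waves rather than from a preset clock, so no connection to the imager is needed.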

  8. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  9. A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression.

    PubMed

    Guo, Chenlei; Zhang, Liming

    2010-01-01

    Salient areas in natural scenes are generally regarded as the areas on which the human eye will typically focus, and finding these areas is the key step in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes, such as SaliencyToolBox (STB) and the Neuromorphic Vision Toolkit (NVT), but they demand high computational cost, and obtaining useful results relies heavily on their choice of parameters. Although some region-based approaches were proposed to reduce the computational complexity of feature maps, these approaches still were not able to work in real time. Recently, a simple and fast approach called spectral residual (SR) was proposed, which uses the SR of the amplitude spectrum to calculate the image's saliency map. However, in our previous work, we pointed out that it is the phase spectrum, not the amplitude spectrum, of an image's Fourier transform that is key to calculating the location of salient areas, and we proposed the phase spectrum of Fourier transform (PFT) model. In this paper, we present a quaternion representation of an image which is composed of intensity, color, and motion features. Based on the principle of PFT, a novel multiresolution spatiotemporal saliency detection model called phase spectrum of quaternion Fourier transform (PQFT) is proposed to calculate the spatiotemporal saliency map of an image from its quaternion representation. Distinct from other models, the added motion dimension allows the phase spectrum to represent spatiotemporal saliency, enabling attention selection not only for images but also for videos. In addition, the PQFT model can compute the saliency map of an image at various resolutions from coarse to fine. Therefore, the hierarchical selectivity (HS) framework based on the PQFT model is introduced here to construct a tree-structure representation of an image. With the help of HS, a model called multiresolution wavelet domain foveation (MWDF) is
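    The PFT model that PQFT builds on can be stated in a few lines: discard the amplitude spectrum, keep the phase, invert, square, and smooth. A sketch (using a 3x3 box filter as a stand-in for the Gaussian smoothing used in the paper):

```python
import numpy as np

def _box_blur(x):
    """3x3 circular box filter via rolls (stand-in for a Gaussian)."""
    acc = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(x, dy, 0), dx, 1)
    return acc / 9.0

def pft_saliency(image):
    """Phase-spectrum (PFT) saliency map: keep only the phase of the
    image's Fourier transform, invert, square, and smooth."""
    f = np.fft.fft2(np.asarray(image, float))
    phase_only = np.exp(1j * np.angle(f))
    return _box_blur(np.abs(np.fft.ifft2(phase_only)) ** 2)
```

PQFT extends exactly this recipe from a scalar image to a quaternion image whose components carry intensity, color, and motion.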

  10. Real-time area-tracker records cellular volume changes from video images

    NASA Astrophysics Data System (ADS)

    Lindemann, Bernd

    1984-11-01

    High-contrast TV images of living cells are recorded from a light microscope. The video line signal is converted to binary and used to gate a 12.5-MHz clock driving a counter. After completion of each video frame the accumulated counts (proportional to the dark image area) are written onto a stack (FIFO) before the counter is reset. Thus, area information is actualized at a rate of 50 values per second and can be read out at speeds suitable for computer interfacing and (after D/A conversion) oscilloscope displays or paper chart recording. With two counting channels the time course of two areas of distinctly different gray shades can be recorded at once.
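    In software, the gated-counter scheme above amounts to thresholding each frame and counting dark pixels; a minimal sketch with an assumed 8-bit threshold (the hardware comparator level is not given in the abstract):

```python
def dark_area(frame, threshold=128):
    """Count pixels darker than `threshold` in one video frame,
    mimicking the gated 12.5-MHz counter's area measurement."""
    return sum(1 for row in frame for px in row if px < threshold)

def area_trace(frames, threshold=128):
    """Per-frame dark-area counts: the software analog of the
    once-per-frame values pushed onto the FIFO."""
    return [dark_area(f, threshold) for f in frames]

trace = area_trace([[[0, 200], [90, 250]], [[0, 0], [0, 250]]])
# → [2, 3]
```

Running two such counters with different thresholds reproduces the instrument's ability to track two gray-shade areas at once.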

  11. Object detection and classification using image moment functions applied to video and imagery analysis

    NASA Astrophysics Data System (ADS)

    Mise, Olegs; Bento, Stephen

    2013-05-01

    This paper proposes an object detection algorithm and a framework based on a combination of the Normalized Central Moment Invariant (NCMI) and the Normalized Geometric Radial Moment (NGRM). The developed framework allows detecting objects with offline pre-loaded signatures and/or using tracker data to create an online object signature representation. The framework has been successfully applied to target detection and has demonstrated its performance on real video and imagery scenes. In order to overcome the implementation constraints of low-powered hardware, the developed framework uses a combination of image moment functions and utilizes a multi-layer neural network. The developed framework has been shown to be robust to false alarms on non-target objects. In addition, an optimization for fast calculation of the image moment descriptors is discussed. This paper presents an overview of the developed framework and demonstrates its performance on real video and imagery scenes.
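    Image moment descriptors of the NCMI family are built from scale-normalized central moments; a sketch of the standard definitions (the paper's exact descriptor set is not reproduced here):

```python
import numpy as np

def central_moment(img, p, q):
    """Central image moment mu_pq of a grayscale image."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def normalized_central_moment(img, p, q):
    """Scale-normalized central moment eta_pq = mu_pq / mu_00^(1+(p+q)/2),
    invariant to translation and scale: the building block of
    NCMI-style object signatures."""
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2.0)
```

A signature vector of several `eta_pq` values (plus radial moments) is what the framework feeds into its multi-layer neural network classifier.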

  12. Imaging Acoustic Phonon Dynamics on the Nanometer-Femtosecond Spatiotemporal Length-Scale with Ultrafast Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Plemmons, Dayne; Flannigan, David

    Coherent collective lattice oscillations known as phonons dictate a broad range of physical observables in condensed matter and act as primary energy carriers across a wide range of material systems. Despite this omnipresence, analysis of phonon dynamics on their ultrashort native spatiotemporal scale - that is, the combined nanometer (nm) spatial and femtosecond (fs) temporal length-scales - has largely remained experimentally inaccessible. Here, we employ ultrafast electron microscopy (UEM) to directly image discrete acoustic phonons in real space with combined nm-fs resolution. By directly probing electron scattering in the image plane (as opposed to the diffraction plane), we retain phase information critical for following the evolution, propagation, scattering, and decay of phonons in relation to morphological features of the specimen (i.e., interfaces, grain boundaries, voids, ripples, etc.). We extract a variety of morphologically specific quantitative information from the UEM videos, including phonon frequencies, phase velocities, and decay times. We expect these direct manifestations of local elastic properties in the vicinity of material defects and interfaces will aid in the understanding and application of phonon-mediated phenomena in nanostructures.

  13. A synchronized particle image velocimetry and infrared thermography technique applied to an acoustic streaming flow

    PubMed Central

    Sou, In Mei; Allen, John S.; Layman, Christopher N.; Ray, Chittaranjan

    2013-01-01

    Subsurface coherent structures and surface temperatures are investigated using simultaneous measurements of particle image velocimetry (PIV) and infrared (IR) thermography. Results are presented for coherent structures from acoustic streaming, and the associated heat transfer, in a rectangular tank with an acoustic horn mounted horizontally at the sidewall. An observed vortex pair develops and propagates along the centerline of the horn. From the PIV velocity field data, distinct kinematic regions are identified with the Lagrangian coherent structure (LCS) method. The implications of this analysis with respect to heat transfer and related sonochemical applications are discussed. PMID:24347810

  15. Measurement of thigmomorphogenesis and gravitropism by non-intrusive computerized video image processing

    NASA Technical Reports Server (NTRS)

    Jaffe, M. J.

    1984-01-01

    A video image processing instrument, DARWIN (Digital Analyser of Resolvable Whole-pictures by Image Numeration), was developed. It was programmed to measure stem or root growth and bending, and coupled to a specially mounted video camera so as to automatically generate growth and bending curves during gravitropism. The growth of the plant is recorded on a video cassette recorder with a specially modified time-lapse function. At the end of the experiment, DARWIN analyses the growth or movement and prints out bending and growth curves. This system was used to measure thigmomorphogenesis in light-grown corn plants. If the plant is rubbed with an applied force load of 0.38 N, it grows faster than the unrubbed control, whereas 1.14 N retards its growth. Image analysis shows that most of the change in the rate of growth occurs in the first hour after rubbing. When DARWIN was used to measure gravitropism in dark-grown oat seedlings, it was found that the top side of the shoot contracts during the first hour of gravitational stimulus, whereas the bottom side begins to elongate after 10 to 15 minutes.

  16. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    PubMed

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited experimental research on the effects of muscular images on adolescent boys' body image, and none specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early-, mid-, and late-adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years), or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. Boys who viewed the muscular clips reported poorer upper-body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings than boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance-schema activation, but boys who viewed the muscular clips did report higher levels of social comparison with the singers. The results suggest that music video clips are a powerful form of media for conveying information about the male ideal body shape, and that negative effects are found in boys as young as 12 years.

  18. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin F.; Summers, Christopher J.

    1993-01-01

    This report covers: (1) invention of a new, ultra-low-noise, low-operating-voltage APD which is expected to offer far better performance than the existing volume-doped APD device; (2) a comprehensive series of experiments on the acoustic and piezoelectric properties of ZnO films sputtered on GaAs, which could decrease the required rf drive power for ACT devices by 15 dB; (3) development of an advanced, hydrodynamic, macroscopic simulator used for evaluating the performance of ACT and CTD devices and aiding in the development of the next generation of devices; (4) experimental development of CTD devices which utilize a p-doped top barrier, demonstrating charge storage capacity and low leakage currents; (5) refinements in materials growth techniques and in situ controls to lower surface defect densities to record levels as well as increase material uniformity and quality.

  20. Modern Techniques in Acoustical Signal and Image Processing

    SciTech Connect

    Candy, J V

    2002-04-04

    Acoustical signal processing problems can lead to complex and intricate techniques for extracting the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy aimed at performing the required processing even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis, or as complex as embedding a full-scale propagation model into the processor. The aim of both approaches is the same: to extract the desired information and reject the extraneous, that is, to develop a signal processing scheme that achieves this goal. In this paper, we briefly discuss this underlying philosophy from a "bottom-up" approach, letting the problem dictate the solution rather than vice versa.
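    The "transformation of the measured data to another domain" strategy can be sketched minimally: recover a weak tone buried in noise by moving to the frequency domain, where the tone concentrates in one bin while the noise spreads across all of them. The signal parameters below (a 60 Hz tone sampled at 1 kHz) are illustrative, not taken from the paper.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

fs = 1000.0                     # sampling rate, Hz (assumed)
t = np.arange(1000) / fs        # one second of data
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * 60.0 * t) + 0.5 * rng.standard_normal(t.size)

f_est = dominant_frequency(noisy, fs)
```

    In the time domain the tone is hard to see; in the frequency domain its bin dominates and the estimate lands on 60 Hz.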

  1. A low-cost, high-resolution, video-rate imaging optical radar

    SciTech Connect

    Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F.; Grantham, J.W.; Monson, T.

    1998-04-01

    Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image-intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low-cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

  2. Video image-based analysis of single human induced pluripotent stem cell derived cardiomyocyte beating dynamics using digital image correlation

    PubMed Central

    2014-01-01

    Background: The functionality of a cardiomyocyte is primarily assessed by analyzing the electrophysiological properties of the cell. The analysis of the beating behavior of single cardiomyocytes, especially ones derived from stem cells, is challenging but well warranted. In this study, a video-based method that is non-invasive and label-free is introduced and applied to the study of single human cardiomyocytes derived from induced pluripotent stem cells. Methods: The beating of dissociated stem cell-derived cardiomyocytes was visualized with a microscope and the motion was video-recorded. Minimum quadratic difference, a digital image correlation method, was used for beating analysis, with the cell divided geometrically into sectors and displacements resolved in radial and tangential directions. The time series of the temporal displacement vector fields of a single cardiomyocyte was computed from the video data. The vector field data were processed to obtain cell-specific contraction-relaxation dynamics signals. Simulated cardiomyocyte beating was used as a reference, and current-clamp recordings of real cardiomyocytes were used to analyze the electrical functionality of the beating cells. Results: Our results demonstrate that our sectorized image correlation method is capable of extracting single-cell beating characteristics from video data of induced pluripotent stem cell-derived cardiomyocytes that have no clear movement axis, and that the method can accurately identify beating phases and time parameters. Conclusion: Our video analysis of the beating motion of single human cardiomyocytes provides a robust, non-invasive, and label-free method to analyze the mechanobiological functionality of cardiomyocytes derived from induced pluripotent stem cells. Thus, our method has potential for the high-throughput analysis of cardiomyocyte functions. PMID:24708714
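    The minimum-quadratic-difference idea behind such digital image correlation can be sketched as an exhaustive search for the integer shift minimizing the mean squared difference between two frames. This toy version (frame size, search range, and the synthetic frames are assumptions, not details from the paper) recovers a known displacement:

```python
import numpy as np

def mqd_displacement(ref, cur, max_shift=4):
    """Minimum-quadratic-difference search: find the integer (dy, dx)
    shift of `cur` relative to `ref` that minimizes the mean squared
    difference over the overlap region."""
    best, best_shift = np.inf, (0, 0)
    h, w = ref.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlap windows such that cur[y, x] is compared to ref[y - dy, x - dx].
            ref_win = ref[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            cur_win = cur[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            cost = np.mean((ref_win - cur_win) ** 2)
            if cost < best:
                best, best_shift = cost, (dy, dx)
    return best_shift

# Synthetic frame pair: the second frame is the first shifted by (2, 3) pixels.
rng = np.random.default_rng(1)
frame0 = rng.random((32, 32))
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))
shift = mqd_displacement(frame0, frame1)
```

    In a PIV-style analysis this search would run per interrogation window (per sector, in the paper's scheme), and the resulting vectors form the displacement field.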

  3. Numerical Simulation of Target Range Estimation Using Ambient Noise Imaging with Acoustic Lens

    NASA Astrophysics Data System (ADS)

    Mori, Kazuyoshi; Ogasawara, Hanako; Nakamura, Toshiaki; Tsuchiya, Takenobu; Endoh, Nobuyuki

    2010-07-01

    In ambient noise imaging (ANI), each pixel of a target image is mapped in either monochrome or pseudocolor to represent the acoustic intensity in each direction. This intensity is obtained by measuring the target object's reflected or scattered wave, with ocean background noise serving as the sound source. When an acoustic lens is used, the ANI system creates a C-mode-like image, where receivers are arranged on a focal plane and each pixel's color corresponds to the intensity of the corresponding receiver output. This method by itself does not provide a target range, because the travel time between a transducer and a target cannot be measured as in an active imaging sonar. In this study, we tried to estimate a target range using the ANI system with an acoustic lens. We conducted a numerical simulation of sound propagation based on the principle of the time-reversal mirror. First, instead of actual ocean measurements for the forward propagation, we calculated the scattering wave from a rigid target object in an acoustic noise field generated by a large number of point sources using the two-dimensional (2D) finite-difference time-domain (FDTD) method. The time series of the scattering wave converged by the lens was then recorded at each receiver. The sound pressure distribution, assuming that the time-reversed wave of the scattering wave was reradiated from each receiver position, was also calculated using the 2D FDTD method in the backward propagation. It was possible to estimate a target range using the ANI system with an acoustic lens, because the maximum of the reradiated sound pressure field was close to the target position.

  5. Phase Time and Envelope Time in Time-Distance Analysis and Acoustic Imaging

    NASA Technical Reports Server (NTRS)

    Chou, Dean-Yi; Duvall, Thomas L.; Sun, Ming-Tsung; Chang, Hsiang-Kuang; Jimenez, Antonio; Rabello-Soares, Maria Cristina; Ai, Guoxiang; Wang, Gwo-Ping; Goode, Philip; Marquette, William; Ehgamberdiev, Shuhrat; Landenkov, Oleg

    1999-01-01

    Time-distance analysis and acoustic imaging are two related techniques for probing the local properties of the solar interior. In this study, we discuss the relation of phase time and envelope time between the two techniques. The location of the envelope peak of the cross-correlation function in time-distance analysis is identified as the travel time of the wave packet formed by modes with the same ω/l. The phase time of the cross-correlation function provides information on the phase change accumulated along the wave path, including the phase change at the boundaries of the mode cavity. The acoustic signals constructed with the technique of acoustic imaging contain both phase and intensity information. The phase of the constructed signals can be studied by computing the cross-correlation function between time series constructed with ingoing and outgoing waves. In this study, we use data taken with the Taiwan Oscillation Network (TON) instrument and the Michelson Doppler Imager (MDI) instrument. The analysis is carried out for the quiet Sun. We use the relation of envelope time versus distance measured in time-distance analyses to construct the acoustic signals in acoustic imaging analyses. The phase time of the cross-correlation function of the constructed ingoing and outgoing time series is twice the difference between the phase time and envelope time in time-distance analyses, as predicted. The envelope peak of the cross-correlation function between constructed ingoing and outgoing time series is located at zero time, as predicted, for one-bounce results at 3 mHz for all four data sets and for two-bounce results at 3 mHz for two TON data sets, but it differs from zero in the other cases. The cause of this deviation of the envelope peak from zero is not known.
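    The distinction between envelope time (travel time of the wave packet) and phase time (alignment of the carrier) can be illustrated on synthetic signals: cross-correlate two Gaussian wave packets, then locate the envelope peak (via an FFT-based Hilbert transform) and the carrier peak of the correlation. All pulse parameters below are illustrative, not solar values.

```python
import numpy as np

def envelope(x):
    """Magnitude of the analytic signal (FFT-based Hilbert transform)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(-0.5, 0.5, 1.0 / fs)
delay = 0.040                                 # true travel time, s
packet = np.exp(-(t / 0.05) ** 2) * np.cos(2 * np.pi * 100.0 * t)
delayed = np.exp(-((t - delay) / 0.05) ** 2) * np.cos(2 * np.pi * 100.0 * (t - delay))

cc = np.correlate(delayed, packet, mode="full")
lags = np.arange(-t.size + 1, t.size) / fs
envelope_time = lags[np.argmax(envelope(cc))]   # wave-packet (group) travel time
phase_time = lags[np.argmax(cc)]                # nearest carrier peak
```

    Here the delay is an integer number of carrier periods, so both times coincide; a dispersive path would shift the carrier peak relative to the envelope peak, which is exactly the phase-versus-envelope difference the abstract discusses.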

  6. Schlieren imaging of the standing wave field in an ultrasonic acoustic levitator

    NASA Astrophysics Data System (ADS)

    Rendon, Pablo Luis; Boullosa, Ricardo R.; Echeverria, Carlos; Porta, David

    2015-11-01

    We consider a model of a single-axis acoustic levitator consisting of two cylinders immersed in air and directed along the same axis. The first cylinder has a flat termination and functions as a sound emitter; the second cylinder, which is simply a reflector, has the side facing the first cylinder cut out by a spherical surface. By making the first cylinder vibrate at ultrasonic frequencies, a standing wave is produced in the air between the cylinders, which makes it possible, by means of the acoustic radiation pressure, to levitate one or several small objects of different shapes, such as spheres or disks. We use schlieren imaging to observe the acoustic field resulting from the levitation of one or several objects, and compare these results to previous numerical approximations of the field obtained using a finite element method. The authors acknowledge financial support from DGAPA-UNAM through project PAPIIT IN109214.

  7. Imaging of transient surface acoustic waves by full-field photorefractive interferometry

    SciTech Connect

    Xiong, Jichuan; Xu, Xiaodong E-mail: christ.glorieux@fys.kuleuven.be; Glorieux, Christ E-mail: christ.glorieux@fys.kuleuven.be; Matsuda, Osamu; Cheng, Liping

    2015-05-15

    A stroboscopic full-field imaging technique based on photorefractive interferometry is presented for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera. The photorefractive buildup of the space-charge field during and after probe laser pulses is simulated numerically. The resulting anisotropic diffraction upon the refractive index grating, and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam, are modeled theoretically. The method is experimentally demonstrated by full-field imaging of the propagation of photoacoustically generated surface acoustic waves (SAWs) with a temporal resolution of nanoseconds. The SAW propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser enables imaging of SAWs with frequencies up to 60 MHz.

  8. High-Performance Motion Estimation for Image Sensors with Video Compression

    PubMed Central

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, avoiding off-chip memory access. On-chip buffers with smart data-access schedules are designed to implement the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed, offering different tradeoffs between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, memory traffic can be reduced by 50% for VC-ME. PMID:26307996
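    For reference, the full-search block-matching baseline that such data-reuse schemes accelerate can be sketched as follows. Block size, search range, and the synthetic frames are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

def full_search_mv(cur_block, ref, top, left, search_range=4):
    """Full-search motion estimation: return the motion vector (dy, dx)
    minimizing the sum of absolute differences (SAD) for one block whose
    position in the current frame is (top, left)."""
    bh, bw = cur_block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + bh, x:x + bw].astype(int)
            sad = np.abs(cur_block.astype(int) - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
# The current block at (19, 14) contains the reference content from (16, 16),
# i.e. a true motion vector of (-3, +2).
cur_block = ref[16:32, 16:32]
mv = full_search_mv(cur_block, ref, top=19, left=14)
```

    Every candidate offset re-reads a block of the reference frame; the overlap between these reads is precisely the data-reuse opportunity the paper's on-chip buffering exploits.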

  10. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex vivo human prostate, kidney, and thyroid specimens, each several centimeters in size, with millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field, and modulation transfer function) through simulation. An advanced MATLAB toolbox was adapted and used to simulate a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.
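    One of the quality metrics mentioned, the modulation transfer function, can be computed as the normalized Fourier magnitude of a line spread function. This sketch uses a hypothetical Gaussian LSF (σ = 0.5 mm, not a value from the paper) and checks the result against the analytic Gaussian MTF:

```python
import numpy as np

def mtf_from_lsf(lsf, dx_mm):
    """MTF = normalized magnitude of the Fourier transform of the LSF."""
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=dx_mm)   # spatial frequency, cycles/mm
    return freqs, mtf / mtf[0]

dx = 0.05                         # sampling pitch, mm (assumed)
x = np.arange(-10, 10, dx)
sigma = 0.5                       # Gaussian LSF width, mm (assumed)
lsf = np.exp(-x**2 / (2 * sigma**2))

freqs, mtf = mtf_from_lsf(lsf, dx)
# Analytic MTF of a Gaussian LSF: exp(-2 * pi^2 * sigma^2 * f^2)
```

    In practice the LSF would come from the simulated or measured point spread function of the lens rather than an analytic Gaussian.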

  11. Acoustic imaging with time reversal methods: From medicine to NDT

    NASA Astrophysics Data System (ADS)

    Fink, Mathias

    2015-03-01

    This talk will present an overview of the research conducted on ultrasonic time-reversal methods applied to biomedical imaging and to non-destructive testing. We will first describe iterative time-reversal techniques that allow focusing ultrasonic waves either on reflectors in tissues (kidney stones, micro-calcifications, contrast agents) or on flaws in solid materials. We will also show that time-reversal focusing does not require the presence of bright reflectors: it can be achieved from the speckle noise alone, generated by random distributions of unresolved scatterers. We will describe the application of this concept to correcting distortions and aberrations in ultrasonic imaging and in NDT. In the second part of the talk, we will describe the concept of time-reversal processors for obtaining ultrafast ultrasonic images with typical frame rates on the order of 10,000 frames/s. This field of ultrafast ultrasonic imaging has many medical applications and can be of great interest in NDT. We will describe some applications in the biomedical domain: quantitative elasticity imaging of tissues, which follows shear wave propagation to improve cancer detection, and ultrafast Doppler imaging, which enables ultrasonic functional imaging.
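    The core time-reversal idea (record, reverse, re-emit; the field refocuses at the source) can be sketched in an idealized 1D delay-only medium. Sound speed, transducer geometry, and pulse shape below are assumptions for illustration, not an implementation of the talk's methods:

```python
import numpy as np

c = 1500.0                     # sound speed in water, m/s (assumed)
fs = 100_000.0                 # sampling rate, Hz
t = np.arange(0, 2e-3, 1 / fs)
# Short tone burst emitted by the source.
pulse = np.sin(2 * np.pi * 20e3 * t) * np.exp(-((t - 2e-4) / 5e-5) ** 2)

source_x = 0.30                                # true source position, m
receivers = np.array([-0.50, 0.90, 1.35])      # transducer positions, m

def propagate(sig, distance):
    """Ideal lossless 1D propagation: a pure time delay of distance/c."""
    delay = int(round(distance / c * fs))
    out = np.zeros_like(sig)
    out[delay:] = sig[:sig.size - delay]
    return out

# Forward step: each transducer records a delayed copy of the pulse.
records = [propagate(pulse, abs(r - source_x)) for r in receivers]

def refocused_energy(x):
    """Field energy at candidate point x after re-emitting the
    time-reversed records from all transducers."""
    field = sum(propagate(rec[::-1], abs(r - x))
                for rec, r in zip(records, receivers))
    return float(np.sum(field ** 2))

xs = np.arange(0.0, 0.8, 0.05)
focus = xs[int(np.argmax([refocused_energy(x) for x in xs]))]
```

    Only at the true source position do all reversed channels arrive in phase and sum coherently, which is why the energy scan peaks there.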

  12. Structural changes and imaging signatures of acoustically sensitive microcapsules under ultrasound.

    PubMed

    Sridhar-Keralapura, Mallika; Thirumalai, Shruthi; Mobed-Miremadi, Maryam

    2013-07-01

    The ultrasound drug delivery field is actively designing new agents to overcome the problems of using microbubbles alone for drug delivery. Microbubbles have a very short circulation time (minutes), low payload, and large size (2-10 μm), none of which is ideal for systemic drug delivery. However, microbubble carriers provide excellent image contrast, and their use for image guidance can be exploited. In this paper, we suggest an alternative approach by developing acoustically sensitive microcapsule reservoirs with future applications in treating large ischemic tumors through intratumoral therapy. We call these agents Acoustically Sensitized Microcapsules (ASMs); they are not intended for the circulation. ASMs are very simple in their formulation, robust, and reproducible. They have been designed to offer high payload (because of their large size), to be acoustically sensitive and reactive (because of the encapsulated Ultrasound Contrast Agents (UCAs)), and to be mechanically robust for future injection/implantation within tumors. We describe three different aspects: (1) the effect of therapeutic ultrasound; (2) mechanical properties; and (3) imaging signatures of these agents. Under therapeutic ultrasound, the formation of a cavitational bubble was seen prior to rupture. The time to rupture was size dependent. Size dependency was also seen when measuring the mechanical properties of these ASMs. Alginate percentage and permeability also affected the Young's modulus estimates. For the study of imaging signatures of these agents, we show six schemes. For example, with harmonic imaging, tissue phantoms and controls did not generate higher harmonic components; only ASM phantoms created a harmonic signal, whose sensitivity increased with applied acoustic pressure. Future work includes developing schemes that combine sonication and imaging to help detect ASMs before, during, and after release of the drug substance.
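    The harmonic-imaging signature described (ASM phantoms generating a second harmonic that linear tissue does not) can be sketched with synthetic echoes: compare the second-harmonic-to-fundamental ratio of a linear echo and a nonlinear one. The frequencies and the 0.3 harmonic amplitude are arbitrary illustration values:

```python
import numpy as np

def harmonic_ratio(sig, fs, f0):
    """Ratio of second-harmonic to fundamental spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1 / fs)
    fund = spectrum[np.argmin(np.abs(freqs - f0))]
    second = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
    return second / fund

fs = 50e6                          # sampling rate, Hz (assumed)
f0 = 2e6                           # insonation frequency, Hz (assumed)
t = np.arange(1000) / fs           # 20 microseconds of signal

tissue_echo = np.sin(2 * np.pi * f0 * t)                               # linear response
asm_echo = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)

r_tissue = harmonic_ratio(tissue_echo, fs, f0)
r_asm = harmonic_ratio(asm_echo, fs, f0)
```

    A detector thresholding this ratio would flag only the ASM-like echo, which is the basis of the harmonic imaging scheme mentioned in the abstract.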

  13. Measurement of acoustic velocity in the stack of a thermoacoustic refrigerator using particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Berson, Arganthaël; Michard, Marc; Blanc-Benon, Philippe

    2008-06-01

    Thermoacoustic refrigeration systems generate cooling power from a high-amplitude acoustic standing wave. There has recently been growing interest in this technology because of its simple and robust architecture and its use of environmentally safe gases. With the prospect of commercialization, it is necessary to enhance the efficiency of thermoacoustic cooling systems, and more particularly of some of their components, such as the heat exchangers. The characterization of the flow field at the end of the stack plates is a crucial step toward understanding and optimizing heat transfer between the stack and the heat exchangers. In this study, a dedicated particle image velocimetry measurement is performed inside a thermoacoustic refrigerator. Acoustic velocity is measured using synchronization and phase averaging. The measurement method is validated inside an empty resonator by successfully comparing experimental data with an acoustic plane-wave model. Velocity is measured inside the oscillating boundary layers, between the plates of the stack, and compared to a linear model. The flow behind the stack is characterized, showing the generation of symmetric pairs of counter-rotating vortices at the ends of the stack plates at low acoustic pressure levels. As the acoustic pressure level increases, detachment of the vortices and symmetry breaking are observed.
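    Phase averaging as used here (tagging each velocity sample with its acoustic phase and averaging within phase bins) can be sketched on synthetic data. The 200 Hz drive frequency, amplitudes, and noise level are illustrative, not the refrigerator's parameters:

```python
import numpy as np

def phase_average(times, values, period, n_bins=16):
    """Average `values` within bins of acoustic phase (times modulo `period`)."""
    phases = (times % period) / period                       # in [0, 1)
    bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    return np.array([values[bins == b].mean() for b in range(n_bins)])

f_ac = 200.0                         # acoustic frequency, Hz (assumed)
period = 1.0 / f_ac
rng = np.random.default_rng(2)
t = rng.uniform(0, 1.0, size=20000)  # random measurement instants, s
# Noisy samples of a 0.1 m/s acoustic velocity oscillation.
u = 0.1 * np.sin(2 * np.pi * f_ac * t) + 0.02 * rng.standard_normal(t.size)

u_phase = phase_average(t, u, period)
```

    Averaging many PIV snapshots per phase bin suppresses the turbulent and measurement noise, leaving the coherent acoustic velocity cycle.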

  14. Synchronized imaging and acoustic analysis of the upper airway in patients with sleep-disordered breathing.

    PubMed

    Chang, Yi-Chung; Huon, Leh-Kiong; Pham, Van-Truong; Chen, Yunn-Jy; Jiang, Sun-Fen; Shih, Tiffany Ting-Fang; Tran, Thi-Thao; Wang, Yung-Hung; Lin, Chen; Tsao, Jenho; Lo, Men-Tzung; Wang, Pa-Chun

    2014-12-01

    Progressive narrowing of the upper airway increases airflow resistance and can produce snoring sounds and apnea/hypopnea events associated with sleep-disordered breathing due to airway collapse. Recent studies have shown that acoustic properties during snoring can be altered with anatomic changes at the site of obstruction. To evaluate the instantaneous association between acoustic features of snoring and the anatomic sites of obstruction, a novel method was developed and applied in nine patients to extract the snoring sounds during sleep while performing dynamic magnetic resonance imaging (MRI). The degree of airway narrowing during the snoring events was then quantified by the collapse index (ratio of airway diameter preceding and during the events) and correlated with the synchronized acoustic features. A total of 201 snoring events (102 pure retropalatal and 99 combined retropalatal and retroglossal events) were recorded, and the collapse index as well as the soft tissue vibration time were significantly different between pure retropalatal (collapse index, 2 ± 11%; vibration time, 0.2 ± 0.3 s) and combined (retropalatal and retroglossal) snores (collapse index, 13 ± 7% [P ≤ 0.0001]; vibration time, 1.2 ± 0.7 s [P ≤ 0.0001]). The synchronized dynamic MRI and acoustic recordings successfully characterized the sites of obstruction and established the dynamic relationship between the anatomic site of obstruction and snoring acoustics.
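    One plausible reading of the collapse index (percent narrowing relative to the diameter preceding the event, consistent with the percentage values quoted in the abstract) can be written down directly; the diameters below are hypothetical:

```python
def collapse_index(d_before_mm, d_during_mm):
    """Percent narrowing of the airway during a snoring event: one
    plausible reading of the collapse index defined in the abstract."""
    return 100.0 * (d_before_mm - d_during_mm) / d_before_mm

# Hypothetical diameters: 8 mm before an event, 7 mm during it.
ci = collapse_index(8.0, 7.0)
```

    Under this reading, the reported 13 ± 7% for combined events corresponds to roughly one-eighth of the airway diameter being lost during the snore.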

  15. Video rate imaging of narrow band THz radiation based on frequency upconversion

    NASA Astrophysics Data System (ADS)

    Tekavec, Patrick F.; Kozlov, Vladimir G.; Mcnee, Ian; Spektor, Igor E.; Lebedev, Sergey P.

    2015-03-01

    We demonstrate video-rate THz imaging by detecting a frequency-upconverted signal with a CMOS camera. A fiber-laser-pumped, doubly resonant optical parametric oscillator generates THz pulses via difference frequency generation in a quasi-phasematched gallium arsenide (QPM-GaAs) crystal located inside the OPO cavity. The output THz pulses are centered at 1.5 THz, with an average power up to 1 mW, a linewidth of <100 GHz, and a peak power of >2 W. By mixing the THz pulses with a portion of the fiber laser pump (1064 nm) in a second QPM-GaAs crystal, distinct sidebands are observed at 1058 nm and 1070 nm, corresponding to sum and difference frequency generation of the pump pulse with the THz pulse. Using a polarizer and a long-pass filter, the strong pump light can be removed, leaving a nearly background-free signal at 1070 nm. For imaging, a Fourier imaging geometry is used, with the object illuminated by the THz beam located one focal length from the GaAs crystal. The spatial Fourier transform is upconverted with a large-diameter pump beam, after which a second lens inverse-transforms the upconverted spatial components, and the image is detected with a CMOS camera. We have obtained video-rate images with a spatial resolution of 1 mm and a field of view ca. 20 mm in diameter without any post-processing of the data.

  16. Real-time three-dimensional Fourier-domain optical coherence tomography video image guided microsurgeries

    NASA Astrophysics Data System (ADS)

    Kang, Jin U.; Huang, Yong; Zhang, Kang; Ibrahim, Zuhaib; Cha, Jaepyeong; Lee, W. P. Andrew; Brandacher, Gerald; Gehlbach, Peter L.

    2012-08-01

    The authors describe the development of an ultrafast three-dimensional (3D) optical coherence tomography (OCT) imaging system that provides real-time intraoperative video images of the surgical site to assist surgeons during microsurgical procedures. The system is based on a full-range, complex-conjugate-free Fourier-domain OCT (FD-OCT) and was built on a CPU-GPU heterogeneous computing architecture capable of video-rate OCT image processing. The system displays at a maximum speed of 10 volumes/s for an image volume size of 160×80×1024 (X×Y×Z) pixels. We have used this system to visualize and guide two prototypical microsurgical maneuvers: microvascular anastomosis of the rat femoral artery and ultramicrovascular isolation of the retinal arterioles of the bovine retina. Our preliminary experiments using 3D-OCT-guided microvascular anastomosis showed optimal visualization of the rat femoral artery (diameter <0.8 mm), instruments, and suture material. Real-time intraoperative guidance helped facilitate precise suture placement due to optimized views of the vessel wall during anastomosis. Using the bovine retina as a model system, we have performed "ultramicrovascular" feasibility studies by guiding handheld surgical micro-instruments to isolate retinal arterioles (diameter ~0.1 mm). Isolation of the microvessels was confirmed by successfully passing a suture beneath the vessel in the 3D imaging environment.

  17. Real-time three-dimensional Fourier-domain optical coherence tomography video image guided microsurgeries

    PubMed Central

    Huang, Yong; Zhang, Kang; Ibrahim, Zuhaib; Cha, Jaepyeong; Lee, W. P. Andrew; Brandacher, Gerald; Gehlbach, Peter L.

    2012-01-01

    The authors describe the development of an ultrafast three-dimensional (3D) optical coherence tomography (OCT) imaging system that provides real-time intraoperative video images of the surgical site to assist surgeons during microsurgical procedures. This system is based on a full-range complex-conjugate-free Fourier-domain OCT (FD-OCT). The system was built on a CPU-GPU heterogeneous computing architecture capable of video OCT image processing. The system displays at a maximum speed of 10 volumes/s for an image volume size of 160×80×1024 (X×Y×Z) pixels. We have used this system to visualize and guide two prototypical microsurgical maneuvers: microvascular anastomosis of the rat femoral artery and ultramicrovascular isolation of the retinal arterioles of the bovine retina. Our preliminary experiments using 3D-OCT-guided microvascular anastomosis showed optimal visualization of the rat femoral artery (diameter <0.8 mm), instruments, and suture material. Real-time intraoperative guidance helped facilitate precise suture placement due to optimized views of the vessel wall during anastomosis. Using the bovine retina as a model system, we have performed “ultramicrovascular” feasibility studies by guiding handheld surgical micro-instruments to isolate retinal arterioles (diameter ∼0.1 mm). Isolation of the microvessels was confirmed by successfully passing a suture beneath the vessel in the 3D imaging environment. PMID:23224164

  18. Application of mathematical modelling methods for acoustic images reconstruction

    NASA Astrophysics Data System (ADS)

    Bolotina, I.; Kazazaeva, A.; Kvasnikov, K.; Kazazaev, A.

    2016-04-01

    The article considers the reconstruction of images by the Synthetic Aperture Focusing Technique (SAFT). The work compares additive and multiplicative methods for processing the signals received from an antenna array. We have shown that the multiplicative method gives better resolution. The study includes the estimation of beam trajectories for antenna arrays using analytical and numerical methods. We have shown that the analytical estimation method decreases the image reconstruction time when a linear antenna array is used.
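
The difference between the two combination rules can be illustrated with a minimal monostatic SAFT sketch in Python/NumPy. The geometry, sampling, and the particular multiplicative rule below are illustrative assumptions, not the authors' exact algorithm: each pixel value is formed from the per-element echo samples at the two-way travel time, either summed (additive) or multiplied (multiplicative).

```python
import numpy as np

def saft(signals, fs, c, elem_x, pixels):
    """Monostatic SAFT over a linear array: additive vs. multiplicative
    combination of the per-element echoes (a sketch; the authors' exact
    trajectory estimation and weighting are not reproduced).

    signals: (n_elem, n_samples) pulse-echo A-scans, one per element
    fs: sampling rate [Hz]; c: wave speed [m/s]
    elem_x: (n_elem,) element x-positions [m]
    pixels: (n_pix, 2) image points (x, z) [m]
    Returns (additive, multiplicative) image values.
    """
    n_elem, n_samp = signals.shape
    add = np.zeros(len(pixels))
    mul = np.zeros(len(pixels))
    for p, (px, pz) in enumerate(pixels):
        # two-way element -> pixel -> element travel time, as a sample index
        r = np.hypot(elem_x - px, pz)
        idx = np.clip((2.0 * r / c * fs).astype(int), 0, n_samp - 1)
        samples = signals[np.arange(n_elem), idx]
        add[p] = samples.sum()             # additive (delay-and-sum)
        mul[p] = np.prod(np.abs(samples))  # one simple multiplicative rule
    return add, mul
```

The multiplicative product sharpens the response because a pixel scores high only when every element's delayed sample is simultaneously large, which is the resolution advantage the abstract reports.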

  19. Methods And Systems For Using Reference Images In Acoustic Image Processing

    DOEpatents

    Moore, Thomas L.; Barter, Robert Henry

    2005-01-04

    A method and system of examining tissue are provided in which a field, including at least a portion of the tissue and one or more registration fiducials, is insonified. Scattered acoustic information, including both transmitted and reflected waves, is received from the field. A representation of the field, including both the tissue and the registration fiducials, is then derived from the received acoustic radiation.

  20. Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Pinton, Gianmarco

    2015-10-01

    Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency-dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that can reduce the physics to systems that have a low computational cost. Here a maximalist approach is taken and the full three-dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations for the nonlinear acoustic wave equation can be used to generate physically realistic two- and three-dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is simulated for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation.
It is shown that even though simulating the full propagation physics is computationally challenging it

  1. Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging

    SciTech Connect

    Pinton, Gianmarco

    2015-10-28

    Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency-dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that can reduce the physics to systems that have a low computational cost. Here a maximalist approach is taken and the full three-dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations for the nonlinear acoustic wave equation can be used to generate physically realistic two- and three-dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is simulated for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation.
It is shown that even though simulating the full propagation physics is computationally challenging it

  2. Parameter-dependence of the acoustic rotation effect of a metamaterial-based field rotator (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Cheng, JianChun; Liang, Bin

    2015-05-01

    The field rotator is a fascinating device capable of rotating a wave front by a certain angle, which can be regarded as a special kind of illusion. We have theoretically designed and experimentally realized an acoustic field rotator by exploiting acoustic metamaterials with extremely anisotropic parameters. Nearly perfect agreement is observed between the numerical simulations and the experimental results. We have also studied the acoustic properties of the rotator and investigated how various structural parameters affect the performance of such devices, including the operating frequency range and the rotation angle, which are of particular significance for applications. Inspection of the operating frequency range shows that the device can work within a considerably broad band as long as the effective medium approximation is valid. The influence of the configuration of the metamaterial unit has also been investigated, illustrating that increasing the anisotropy of the metamaterial enhances the rotation effect, which can be conveniently attained by elongating each rectangle inserted into the units. Furthermore, we have analyzed the underlying physics to gain deeper insight into the rotation mechanism, and discussed the application of such devices to non-plane waves and the potential of extending the scheme to three dimensions. The realization of an acoustic field rotator opens a new avenue for versatile manipulation of acoustic waves, and our findings are of significance to the design and characterization of such devices, which may pave the way for their practical application.

  3. Focused acoustic beam imaging of grain structure and local Young's modulus with Rayleigh and surface skimming longitudinal waves

    SciTech Connect

    Martin, R. W.; Sathish, S.; Blodgett, M. P.

    2013-01-25

    The interaction of a focused acoustic beam with materials generates Rayleigh surface waves (RSW) and surface skimming longitudinal waves (SSLW). Acoustic microscopy has used RSW amplitude and velocity measurements extensively for grain structure analysis. Although the presence of SSLW has been recognized, it is rarely used in acoustic imaging. This paper presents an approach to microstructure imaging and local elastic modulus measurement that combines both RSW and SSLW. Grain structure was imaged acoustically by measuring the amplitudes of the RSW and SSLW signals. The microstructure images obtained on the same region of the samples with RSW and SSLW are compared, and the difference in the observed contrast is discussed in terms of the propagation characteristics of the individual surface waves. The velocities are determined by a two-point defocus method. The RSW and SSLW velocities for the same regions of the sample are combined and presented as an average Young's modulus image.

  4. Synthetic aperture acoustic imaging of canonical targets with a 2-15 kHz linear FM chirp

    NASA Astrophysics Data System (ADS)

    Vignola, Joseph F.; Judge, John A.; Good, Chelsea E.; Bishop, Steven S.; Gugino, Peter M.; Soumekh, Mehrdad

    2011-06-01

    Synthetic aperture image reconstruction applied to outdoor acoustic recordings is presented. Acoustic imaging is an alternative method with several militarily relevant advantages: it is immune to RF jamming, offers superior spatial resolution, is capable of standoff side- and forward-looking scanning, and has relatively low cost, weight, and size compared to 0.5-3 GHz ground penetrating radar technologies. Synthetic aperture acoustic imaging is similar to synthetic aperture radar, but more akin to synthetic aperture sonar owing to the longitudinal (compressive) wave propagation in the surrounding acoustic medium. The system's transceiver is a quasi-monostatic microphone and audio speaker pair mounted on a rail 5 meters in length. The received data are sampled at 80 kHz with a 2-15 kHz linear frequency modulated (LFM) chirp, a pulse repetition frequency (PRF) of 10 Hz, and an inter-pulse period (IPP) of 50 milliseconds. Targets are positioned within the acoustic scene at slant ranges of two to ten meters on grass, dirt, or gravel surfaces, with and without intervening metallic chain-link fencing. Acoustic image reconstruction provides a means for both literal interpretation and quantitative analysis. A rudimentary technique characterizes acoustic scatter at the ground surfaces. Targets within the acoustic scene are first digitally spotlighted and then further processed, providing frequency- and aspect-angle-dependent signature information.

  5. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy.

    PubMed

    Ford, Tim N; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy. PMID:23733023

  6. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy

    NASA Astrophysics Data System (ADS)

    Ford, Tim N.; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.

  8. Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography

    NASA Astrophysics Data System (ADS)

    Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori

    2014-02-01

    In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.

  9. Target-acquisition performance in undersampled infrared imagers: static imagery to motion video.

    PubMed

    Krapels, Keith; Driggers, Ronald G; Teaney, Brian

    2005-11-20

    In this research we show that the target-acquisition performance of an undersampled imager improves with sensor or target motion. We provide an experiment designed to evaluate the improvement in observer performance as a function of target motion rate in the video. We created the target motion by mounting a thermal imager on a precision two-axis gimbal and varying the sensor motion rate from 0.25 to 1 instantaneous field of view per frame. A midwave thermal imager was used to permit short integration times and remove the effects of motion blur. It is shown that the human visual system performs a superresolution reconstruction that mitigates some aliasing and provides a higher (than static imagery) effective resolution. This process appears to be relatively independent of motion velocity. The results suggest that the benefits of superresolution reconstruction techniques as applied to imaging systems with motion may be limited. PMID:16318174
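
The superresolution reconstruction that the study attributes to the human visual system can be mimicked algorithmically by shift-and-add: register each undersampled frame onto a finer grid at its known sub-pixel offset and average. The following is a minimal sketch under those assumptions (known shifts, integer upsampling factor), not a model of the observers' visual processing:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive shift-and-add super-resolution from undersampled frames.

    frames: list of (H, W) low-resolution frames
    shifts: list of (dy, dx) sub-pixel shifts, in low-resolution pixels
    factor: integer upsampling factor
    """
    H, W = frames[0].shape
    hi = np.zeros((H * factor, W * factor))
    count = np.zeros_like(hi)
    ys, xs = np.mgrid[0:H, 0:W]
    for frame, (dy, dx) in zip(frames, shifts):
        # place every low-res sample at its sub-pixel location on the fine grid
        yy = np.clip(np.round((ys + dy) * factor).astype(int), 0, H * factor - 1)
        xx = np.clip(np.round((xs + dx) * factor).astype(int), 0, W * factor - 1)
        np.add.at(hi, (yy, xx), frame)
        np.add.at(count, (yy, xx), 1)
    return hi / np.maximum(count, 1)
```

With frames shifted by half a pixel, the samples interleave on the fine grid and the aliased detail is recovered, which is the effect the experiment measures perceptually.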

  10. Venus in motion: An animated video catalog of Pioneer Venus Orbiter Cloud Photopolarimeter images

    NASA Technical Reports Server (NTRS)

    Limaye, Sanjay S.

    1992-01-01

    Images of Venus acquired by the Pioneer Venus Orbiter Cloud Photopolarimeter (OCPP) during the 1982 opportunity have been utilized to create a short video summary of the data. The raw roll-by-roll images were first navigated using the spacecraft attitude and orbit information along with the CPP instrument pointing information. The limb darkening introduced by the variation of solar illumination geometry and the viewing angle was then modelled and removed. The images were then projected to simulate a view obtained from a fixed perspective, with the observer 10 Venus radii away, above a Venus latitude of 30 degrees south and a longitude of 60 degrees west. A total of 156 images from the 1982 opportunity have been animated at different dwell rates.

  11. Characterising the dynamics of expirated bloodstain pattern formation using high-speed digital video imaging.

    PubMed

    Donaldson, Andrea E; Walker, Nicole K; Lamont, Iain L; Cordiner, Stephen J; Taylor, Michael C

    2011-11-01

    During forensic investigations, it is often important to be able to distinguish between impact spatter patterns (blood from gunshots, explosives, blunt force trauma and/or machinery accidents) and bloodstain patterns generated by expiration (blood from the mouth, nose or lungs). These patterns can be difficult to distinguish on the basis of the size of the bloodstains. In this study, high-speed digital video imaging has been used to investigate the formation of expirated bloodstain patterns generated by breathing, spitting and coughing mechanisms. Bloodstain patterns from all three expiration mechanisms were dominated by the presence of stains less than 0.5 mm in diameter. Video analysis showed that in the process of coughing blood, high-velocity, very small blood droplets were ejected first. These were followed by lower velocity, larger droplets, strands and plumes of liquid held together in part by saliva. The video images showed the formation of bubble rings and beaded stains, traditional markers for classifying expirated patterns. However, the expulsion mechanism, the distance travelled by the blood droplets, and the type of surface the blood was deposited on were all factors determining whether beaded stains were generated.

  12. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2001-01-31

    During this phase of the project, the research team concentrated on the acquisition of acoustic emission (AE) data from the high-porosity rock samples. The initial experiments indicated that the acoustic emission activity from the high-porosity Danian chalk was of very low amplitude. Even though the sample underwent yielding and significant plastic deformation, it did not generate significant AE activity. This was somewhat surprising. These initial results call into question the validity of attempting to locate AE activity in this weak rock type. As a result, the testing program was slightly altered to include measuring the acoustic emission activity of many of the rock types listed in the research program. The preliminary experimental results indicate that AE activity in the sandstones is much higher than in the carbonate rocks (i.e., the chalks and limestones). This observation may be particularly important for planning microseismic imaging of reservoir rocks in the field environment. The preliminary results suggest that microseismic imaging of reservoir rock from acoustic emission activity generated by matrix deformation (during compaction and subsidence) would be extremely difficult to accomplish.

  13. Acoustic reciprocity of spatial coherence in ultrasound imaging.

    PubMed

    Bottenus, Nick; Üstüner, Kutay F

    2015-05-01

    A conventional ultrasound image is formed by transmitting a focused wave into tissue, time-shifting the backscattered echoes received on an array transducer, and summing the resulting signals. The van Cittert-Zernike theorem predicts a particular similarity, or coherence, of these focused signals across the receiving array. Many groups have used an estimate of the coherence to augment or replace the B-mode image in an effort to suppress noise and stationary clutter echo signals, but this measurement requires access to individual receive channel data. Most clinical systems have efficient pipelines for producing focused and summed RF data without any direct way to individually address the receive channels. We describe a method for performing coherence measurements that is more accessible for a wide range of coherence-based imaging. The reciprocity of the transmit and receive apertures in the context of coherence is derived and equivalence of the coherence function is validated experimentally using a research scanner. The proposed method is implemented on a commercial ultrasound system and in vivo short-lag spatial coherence imaging is demonstrated using only summed RF data. The components beyond the acquisition hardware and beamformer necessary to produce a real-time ultrasound coherence imaging system are discussed. PMID:25965679
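
For intuition, the receive-aperture spatial coherence that short-lag spatial coherence imaging averages can be estimated as the mean normalized correlation between element signals at each lag. The sketch below assumes direct access to focused per-channel RF data; the paper's contribution is precisely how to obtain an equivalent measurement without such channel access, which this sketch does not reproduce:

```python
import numpy as np

def spatial_coherence(channels):
    """Mean normalized correlation between receive-element signals at each
    element lag m (the quantity averaged over small m in short-lag spatial
    coherence imaging).

    channels: (n_elem, n_samples) focused per-channel RF data for one beam
    Returns R[m], the coherence at element lag m.
    """
    n = len(channels)
    x = channels - channels.mean(axis=1, keepdims=True)
    R = np.zeros(n)
    for m in range(n):
        num = den = 0.0
        for i in range(n - m):
            a, b = x[i], x[i + m]
            num += float((a * b).sum())
            den += float(np.sqrt((a * a).sum() * (b * b).sum()))
        R[m] = num / den
    return R
```

For diffuse scattering at the focus, the van Cittert-Zernike theorem predicts that R[m] falls off roughly linearly with lag, while incoherent noise and clutter pull it toward zero, which is what coherence-based imaging exploits.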

  14. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns

    PubMed Central

    Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang

    2014-01-01

    Finding mucosal abnormalities (e.g., erythema, blood, ulcers, erosions, and polyps) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total), automated detection of frames with an abnormality can save physicians significant time. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from those without any abnormality, exploiting the fact that most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures by an advanced image texture analysis method. The method uses a “texton histogram” of an image block as its feature. The histogram captures the distribution of different “textons” representing the various textures in an endoscopy image. The textons are representative response vectors obtained by applying a combination of the Leung-Malik (LM) filter bank (i.e., a set of image filters) and a set of local binary patterns to the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
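
The texton-histogram feature can be sketched as follows: each pixel's filter-response vector is assigned to its nearest texton, and the block is summarized by the histogram of assignments. This is a generic illustration of the idea; the paper's actual response vectors come from the LM filter bank combined with local binary patterns, which are not implemented here:

```python
import numpy as np

def texton_histogram(responses, textons):
    """Texton-histogram feature for an image block.

    responses: (H, W, d) per-pixel filter-response vectors
    textons:   (k, d) representative response vectors (e.g. k-means centers)
    Returns a normalized k-bin histogram of nearest-texton assignments.
    """
    H, W, d = responses.shape
    flat = responses.reshape(-1, d)
    # assign every pixel's response vector to its nearest texton
    dist2 = ((flat[:, None, :] - textons[None, :, :]) ** 2).sum(axis=2)
    labels = dist2.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what a classifier compares across blocks to separate abnormal from normal texture.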

  15. Three-dimensional ghost imaging using acoustic transducer

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Guo, Shuxu; Guan, Jian; Cao, Junsheng; Gao, Fengli

    2016-06-01

    We propose a novel three-dimensional (3D) ghost imaging method using an unfocused ultrasonic transducer as the bucket detector, collecting the total photoacoustic signal intensity from spherical surfaces of different radii centered on the transducer. The collected signal is a time sequence corresponding to the optical absorption on these spherical surfaces; the values at the same instant in all the sequences are used as bucket signals to restore the corresponding spherical images, which are then assembled into the 3D reconstruction of the object. Numerical experiments show that this method can effectively accomplish the 3D reconstruction, and that by summing each sequence in the time domain into a single bucket signal it can also realize two-dimensional (2D) ghost imaging. The influence of the number of measurements on the 3D and 2D reconstructions is analyzed with the peak signal-to-noise ratio (PSNR) as the yardstick, and the use of the transducer as a bucket detector is also discussed.
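
The bucket-signal step can be illustrated with conventional correlation ghost imaging: the image is recovered by correlating the fluctuations of the single-pixel bucket signal with the known illumination patterns. The random patterns and simple covariance estimator below are illustrative assumptions; the paper's photoacoustic geometry and time-resolved spherical surfaces are not modeled:

```python
import numpy as np

def ghost_reconstruct(patterns, buckets):
    """Correlation ghost imaging: estimate the image as <Δb · P> over the
    measurement ensemble.

    patterns: (n, H, W) illumination patterns
    buckets:  (n,) total detected intensity for each pattern
    """
    db = buckets - buckets.mean()
    return np.tensordot(db, patterns, axes=(0, 0)) / len(buckets)

# Simulated example: random patterns and a binary object; the unfocused
# transducer in the paper plays the role of this idealized bucket detector.
rng = np.random.default_rng(0)
obj = np.zeros((8, 8))
obj[2:5, 3:6] = 1.0
patterns = rng.random((4000, 8, 8))
buckets = (patterns * obj).sum(axis=(1, 2))
img = ghost_reconstruct(patterns, buckets)
```

In the 3D method described above, one such reconstruction is performed per time sample, yielding one spherical shell of the object per instant.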

  16. An Objective Focussing Measure for Acoustically Obtained Images

    NASA Astrophysics Data System (ADS)

    Czarnecki, Krzysztof; Moszyński, Marek; Rojewski, Mirosław

    In the scientific literature, many image-sharpness parameters have been defined that can be used to evaluate display energy concentration (EC). This paper proposes a new, simple approach to the quantitative evaluation of EC in spectrograms, which are used for the analysis and visualization of sonar signals. The presented global-image EC measure was developed to evaluate EC in an arbitrary direction (or at an arbitrary angle) and along an arbitrary path contained within the displayed area. The proposed measures were used to establish optimum spectrograph parameters, in particular the window type and width, that yield high EC in images. Moreover, the paper defines marginal EC distributions that can be used in sonar signal detection to support the main detector.

  17. Characteristics of luminous structures in the stratosphere above thunderstorms as imaged by low-light video

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.

    1994-01-01

    An experiment was conducted in which an image-intensified, low-light video camera systematically monitored the stratosphere above distant (100-800 km) mesoscale convective systems over the high plains of the central U.S. for 21 nights between 6 July and 27 August 1993. Complex, luminous structures were observed above large thunderstorm clusters on eleven nights, with one storm system (7 July 1993) yielding 248 events in 410 minutes. Their duration ranged from 33 to 283 ms, with an average of 98 ms. The luminous structures, generally not visible to the naked, dark-adapted eye, exhibited on video a wide variety of brightness levels and shapes including streaks, aurora-like curtains, smudges, fountains and jets. The structures were often more than 10 km wide and their upper portions extended to above 50 km msl.

  18. Characteristics of luminous structures in the stratosphere above thunderstorms as imaged by low-light video

    SciTech Connect

    Lyons, W.A. , Inc., Ft. Collins, CO )

    1994-05-15

    An experiment was conducted in which an image-intensified, low-light video camera systematically monitored the stratosphere above distant (100-800 km) mesoscale convective systems over the high plains of the central US for 21 nights between 6 July and 27 August 1993. Complex, luminous structures were observed above large thunderstorm clusters on eleven nights, with one storm system (7 July 1993) yielding 248 events in 410 minutes. Their duration ranged from 33 to 283 ms, with an average of 98 ms. The luminous structures, generally not visible to the naked, dark-adapted eye, exhibited on video a wide variety of brightness levels and shapes including streaks, aurora-like curtains, smudges, fountains and jets. The structures were often more than 10 km wide and their upper portions extended to above 50 km msl. 14 refs., 4 figs.

  19. Effects of image restoration on automatic acquisition of moving objects in thermal video sequences degraded by the atmosphere.

    PubMed

    Haik, Oren; Yitzhaky, Yitzhak

    2007-12-20

    We aim to determine the effect of image restoration (deblurring) on the ability to acquire moving objects detected automatically from long-distance thermal video signals. This is done by first restoring the videos using a blind-deconvolution method developed recently, and then examining its effect on the geometrical features of automatically detected moving objects. Results show that for modern (low-noise and high-resolution) thermal imaging devices, the geometrical features obtained from the restored videos better resemble the true properties of the objects. These results correspond to a previous study, which demonstrated that image restoration can significantly improve the ability of human observers to acquire moving objects from long-range thermal videos.
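
The restoration step can be illustrated with a frequency-domain Wiener filter. Note the hedge: the study uses a recently developed *blind* deconvolution method (PSF unknown); the non-blind sketch below, which assumes a known, centered PSF and noise-to-signal ratio, only shows what deblurring does to a degraded frame:

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Non-blind Wiener restoration with a known, centered PSF.

    blurred: (H, W) degraded image
    psf:     (H, W) point-spread function, centered in the array
    nsr:     assumed noise-to-signal power ratio
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF spectrum (OTF)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + NSR)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

Sharper restored frames concentrate each object's energy into fewer pixels, which is why the geometrical features of detected moving objects better match their true extents after restoration.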

  20. Synthetic streak images (x-t diagrams) from high-speed digital video records

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2013-11-01

    Modern digital video cameras have entirely replaced the older photographic drum and rotating-mirror cameras for recording high-speed physics phenomena. They are superior in almost every regard except, at speeds approaching one million frames/s, sensor segmentation results in severely reduced frame size, especially height. However, if the principal direction of subject motion is arranged to be along the frame length, a simple Matlab code can extract a row of pixels from each frame and stack them to produce a pseudo-streak image or x-t diagram. Such a 2-D image can convey the essence of the large volume of information contained in a high-speed video sequence, and can be the basis for the extraction of quantitative velocity data. Examples include streak shadowgrams of explosions and gunshots, streak schlieren images of supersonic cavity-flow oscillations, and direct streak images of shock-wave motion in polyurea samples struck by gas-gun projectiles, from which the shock Hugoniot curve of the polymer is measured. This approach is especially useful, since commercial streak cameras remain very expensive and rooted in 20th-century technology.
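
The row-extraction described above (the abstract mentions a simple Matlab code) can be sketched in a few lines of Python/NumPy; the function name and interface are illustrative:

```python
import numpy as np

def streak_image(frames, row):
    """Stack one pixel row per frame into a pseudo-streak (x-t) image.

    frames: (n_frames, H, W) high-speed video, motion aligned with the rows
    row:    index of the pixel row to extract from each frame
    Returns an (n_frames, W) image: position across, time running downward.
    """
    return np.stack([frame[row, :] for frame in frames])
```

The slope of a feature in the resulting x-t diagram gives its velocity directly: v = (Δx · pixel size) / (Δt · frame interval), which is how quantitative data such as the shock Hugoniot measurements are extracted.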

  1. VISDTA: A video imaging system for detection, tracking, and assessment: Prototype development and concept demonstration

    SciTech Connect

    Pritchard, D.A.

    1987-05-01

    It has been demonstrated that thermal imagers are an effective surveillance and assessment tool for security applications because: (1) they work day or night due to their sensitivity to thermal signatures; (2) their penetrability through fog, rain, dust, etc., is better than that of human eyes; (3) short- or long-range operation is possible with various optics; and (4) they are strictly passive devices providing visible imagery which is readily interpreted by the operator with little training. Unfortunately, most thermal imagers also require the setup of a tripod, connection of batteries, cables, display, etc. When this is accomplished, the operator must manually move the camera back and forth searching for signs of aggressor activity. VISDTA is designed to provide automatic panning and, in a sense, "watch" the imagery in place of the operator. The idea behind the development of VISDTA is to provide a small, portable, rugged system to automatically scan areas and detect targets by computer processing of images. It would use a thermal imager and possibly an intensified day/night TV camera, a pan/tilt mount, and a computer for system control. If mounted on a dedicated vehicle or on a tower, VISDTA will perform video motion detection functions on incoming video imagery, and automatically scan predefined patterns in search of abnormal conditions which may indicate attempted intrusions into the field-of-regard. In that respect, VISDTA is capable of improving the ability of security forces to maintain security of a given area of interest by augmenting present techniques and reducing operator fatigue.

  2. Near-Field Imaging with Sound: An Acoustic STM Model

    ERIC Educational Resources Information Center

    Euler, Manfred

    2012-01-01

    The invention of scanning tunneling microscopy (STM) 30 years ago opened up a visual window to the nano-world and sparked a range of new methods for investigating and controlling matter and its transformations at the atomic and molecular level. However, an adequate theoretical understanding of the method is demanding; STM images can be…

  3. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
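    The topic-based publish/subscribe routing described for the messaging layer can be sketched in a few lines. This is a generic single-process illustration (class and topic names are hypothetical), not the authors' implementation, which adds dynamic subscription management and inter-process transport:

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe router."""

    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Only subscribers of this exact topic receive the message.
        for cb in self._subs[topic]:
            cb(message)

bus = MessageBus()
received = []
bus.subscribe("frames/raw", received.append)
bus.subscribe("frames/processed", lambda m: received.append(("done", m)))
bus.publish("frames/raw", "frame-0")        # routed to the first subscriber only
bus.publish("frames/processed", "frame-0")  # routed to the second subscriber only
```

Decoupling producers from consumers this way is what lets acquisition, processing and visualization modules be recombined without direct dependencies.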

  4. Screen-imaging guidance using a modified portable video macroscope for middle cerebral artery occlusion.

    PubMed

    Zhu, Xingbao; Luo, Junli; Liu, Yun; Chen, Guolong; Liu, Song; Ruan, Qiangjin; Deng, Xunding; Wang, Dianchun; Fan, Quanshui; Pan, Xinghua

    2012-04-25

    The use of operating microscopes is limited by their focal length. Surgeons using these instruments cannot simultaneously view and access the surgical field and must choose one or the other. The longer focal length (more than 1,000 mm) of an operating telescope permits a position away from the operating field, above the surgeon and out of the field of view, which gives the telescope an advantage over an operating microscope. We developed a telescopic system using screen-imaging guidance and a modified portable video macroscope constructed from a Computar MLH-10 × macro lens, a DFK-21AU04 USB CCD camera and a Dell laptop computer as the monitor screen. This system was used to establish a middle cerebral artery occlusion model in rats. Results showed that the magnification of the modified portable video macroscope was appropriate (5-20 ×) even though the Computar MLH-10 × macro lens was placed 800 mm away from the operating field rather than at the specified working distance of 152.4 mm with a zoom of 1-40 ×. The screen-imaging telescopic technique was clear, life-like, stereoscopic and matched the actual operation. Screen-imaging guidance led to an accurate, smooth, minimally invasive and comparatively easy surgical procedure. The success rate of model establishment, evaluated by neurological function using the modified neurological score system, was 74.07%. There was no significant difference in model establishment time, sensorimotor deficit or infarct volume percentage. Our findings indicate that the telescopic lens is effective in the screen surgical operation mode referred to as "long distance observation and short distance operation" and that screen-imaging guidance using a modified portable video macroscope can be utilized for the establishment of a middle cerebral artery occlusion model and for micro-neurosurgery. PMID:25722675

  5. Observations of Brine Pool Surface Characteristics and Internal Structure Through Remote Acoustic and Structured Light Imaging

    NASA Astrophysics Data System (ADS)

    Smart, C.; Roman, C.; Michel, A.; Wankel, S. D.

    2015-12-01

    Observations and analysis of the surface characteristics and internal structure of deep-sea brine pools are currently limited to discrete in-situ observations. Complementary acoustic and structured-light imaging sensors mounted on a remotely operated vehicle (ROV) have demonstrated the ability to systematically detect variations in the surface characteristics of a brine pool, reveal internal stratification and detect areas of active hydrocarbon seepage. The visual and acoustic sensors presented here, combined with a stereo camera pair, are mounted on the 4000 m rated ROV Hercules (Ocean Exploration Trust). These three independent sensors operate simultaneously from a typical 3 m altitude, resulting in visual and bathymetric maps with sub-centimeter resolution. Applying this imaging technology to 2014 and 2015 brine pool surveys in the Gulf of Mexico revealed acoustic and visual anomalies due to the density changes inherent in the brine. Such distinct changes in acoustic impedance allowed the high-frequency 1350 kHz multibeam sonar to detect multiple interfaces. For instance, distinct acoustic reflections were observed at 3 m and 5.5 m below the vehicle. Subsequent verification using a CTD and lead line indicated that the acoustic return from the brine surface was the signal at 3 m, while a thicker, muddier and more saline interface occurred at 5.5 m; the bottom of the brine pool was not located but is assumed to be deeper than 15 m. The multibeam is also capable of remotely detecting emitted gas bubbles within the brine pool, indicative of active hydrocarbon seeps. Bubbles associated with these seeps were not consistently visible above the brine when using the HD camera on the ROV. Additionally, while imaging the surface of the brine pool, the structured-light sheet laser became diffuse, refracting across the main interface. Analysis of this refraction combined with varying acoustic returns allows for systematic and remote detection of the density, stratification and activity levels within and
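    Picking multiple interfaces (e.g. the brine surface and a deeper muddy layer) out of an echo profile amounts to finding strong local maxima in the return versus range. A toy sketch follows; the synthetic profile, threshold and peak rule are illustrative assumptions, not the survey's actual sonar processing:

```python
import numpy as np

def detect_interfaces(ranges_m, echo, threshold):
    """Return the ranges of strong local maxima in a single-beam echo profile.

    Each distinct acoustic impedance contrast shows up as a separate
    return peak; weak returns below `threshold` are ignored.
    """
    echo = np.asarray(echo, dtype=float)
    hits = []
    for i in range(1, len(echo) - 1):
        if echo[i] >= threshold and echo[i] >= echo[i - 1] and echo[i] > echo[i + 1]:
            hits.append(ranges_m[i])
    return hits

# Synthetic profile: a sharp return at 3 m and a weaker, broader one at 5.5 m.
ranges = np.arange(0.0, 8.0, 0.1)   # range below the vehicle, metres
echo = (np.exp(-((ranges - 3.0) / 0.1) ** 2)
        + 0.6 * np.exp(-((ranges - 5.5) / 0.2) ** 2))
interfaces = detect_interfaces(ranges, echo, threshold=0.3)
```

Real multibeam processing works per beam with calibrated backscatter, but the layered-return picture is the same.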

  6. Finite Difference Time Domain Analysis of Underwater Acoustic Lens System for Ambient Noise Imaging

    NASA Astrophysics Data System (ADS)

    Mori, Kazuyoshi; Miyazaki, Ayano; Ogasawara, Hanako; Yokoyama, Tomoki; Nakamura, Toshiaki

    2006-05-01

    Much attention has been paid to the new idea of detecting objects using ocean ambient noise. This concept is called ambient noise imaging (ANI). In this study, sound fields focused by an acoustic lens system constructed with a single biconcave lens were analyzed using the finite difference time domain (FDTD) method for realizing an ANI system. The size of the lens aperture that would have sufficient resolution—for example, the beam width is 1° at 60 kHz—was roughly determined by comparing the image points and -3 dB areas of sound pressure fields generated by lenses with various apertures. Then, in another FDTD analysis, we successfully used a lens with a determined aperture to detect rigid target objects in an acoustic noise field generated by a large number of point sources.
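    The FDTD method used for the lens analysis advances coupled pressure and particle-velocity fields on a staggered grid. A minimal 1-D acoustic sketch (not the authors' 2-D/3-D lens model; grid sizes and the source are illustrative) shows the update scheme:

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, c=1.0, dx=1.0, dt=0.5, src=100):
    """Leapfrog FDTD update of the 1-D acoustic equations.

    p: pressure at integer grid points, u: particle velocity at half points.
    Courant number c*dt/dx = 0.5 < 1 keeps the explicit scheme stable.
    """
    p = np.zeros(nx)
    u = np.zeros(nx - 1)
    for n in range(nt):
        u -= (dt / dx) * (p[1:] - p[:-1])                   # du/dt = -dp/dx
        p[1:-1] -= c ** 2 * (dt / dx) * (u[1:] - u[:-1])    # dp/dt = -c^2 du/dx
        p[src] += np.exp(-((n - 30) / 8.0) ** 2)            # Gaussian pulse source
    return p

p_final = fdtd_1d()
```

The 2-D lens simulation is the same interleaved update with an extra velocity component and a spatially varying sound speed inside the lens material.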

  7. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate dense materials, such as leather, wool, wood and gyprock, and to transmit over long distances owing to low atmospheric absorption makes them ideal for numerous applications, such as body scanning, building inspection and seeing in degraded visual environments. Current drawbacks of millimeter-wave imaging systems are that they use single detectors or linear arrays that require scanning, or two-dimensional arrays that are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact, lightweight camera, based on a 384 x 288 microbolometer-pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of that work focused on transmission imaging as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera for reflection imaging at millimeter wavelengths. Snapshots of objects taken at video rates show the excellent quality of the images. In addition, a description of the imaging system, which includes the terahertz camera and different millimeter-wave sources, is provided.

  8. Video-image analyses of the cross-stream distribution of smoke in the near wake of a building

    SciTech Connect

    Huber, A.H.; Arya, S.P.S.

    1988-04-01

    In a wind-tunnel study, recorded video images of the top view of smoke dispersion in the wake of a building were analyzed. A continuous source of smoke was emitted at floor level, midway along the leeward side of the building. The technique and usefulness of analyzing video images of smoke are demonstrated in a study of building effects on smoke dispersion. The presentation discusses how the video-image intensity is corrected for background intensity and then normalized to a scale of 0 to 100%. Profiles of the normalized mean image intensity are compared with similar profiles of the mean concentration of a hydrocarbon tracer. The distributions of intensity of cross-stream profiles, analyzed in time and space, are discussed. In addition, time-averaged cross-stream profiles of the mean, standard deviation, and other statistics of image intensity are compared with traditional concentration measurements for symmetric wake flow.
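    The background correction and 0-100 % normalization described above can be sketched directly (array shapes and values below are illustrative, not the study's data):

```python
import numpy as np

def normalize_intensity(image, background):
    """Background-correct a smoke image, then rescale it to 0-100 %."""
    corrected = image.astype(float) - background.astype(float)
    corrected = np.clip(corrected, 0.0, None)   # negative residuals are noise
    peak = corrected.max()
    if peak == 0:
        return corrected                        # no smoke anywhere
    return 100.0 * corrected / peak

# Toy frame: uniform background with one bright smoke pixel.
background = np.full((4, 6), 10.0)
image = background.copy()
image[2, 3] = 60.0
norm = normalize_intensity(image, background)   # 100 % at the smoke pixel
```

Averaging such normalized frames over time gives the mean-intensity profiles that the study compares against tracer concentration measurements.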

  9. The research on binocular stereo video imaging and display system based on low-light CMOS

    NASA Astrophysics Data System (ADS)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

    Low-light night-vision helmets are commonly equipped with binocular viewers using image intensifiers. Such equipment not only provides night-vision capability but also a sense of stereo vision, for better perception and understanding of the visual field. However, since the image intensifier is designed for direct observation, it is difficult to apply modern image processing technology to it. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED microdisplay and an image processing PCB. Stereopsis is achieved through the binocular OLED microdisplay. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive the constraints of binocular stereo display in detail. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED microdisplay. There is sufficient space for function extensions in our system, and the performance of this low-light night-vision helmet can be further enhanced in combination with HDR, image fusion and related technologies.
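    Once matched features yield a disparity, depth follows from the rectified pinhole-stereo relation Z = f·B/d. A minimal sketch, with all parameter values hypothetical (they are not taken from the paper's camera calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo depth from the pinhole relation Z = f * B / d.

    disparity_px: horizontal pixel offset of a matched feature,
    focal_px:     focal length expressed in pixels,
    baseline_m:   separation between the two cameras in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 800 px focal length, 65 mm baseline, 40 px disparity.
z = depth_from_disparity(disparity_px=40.0, focal_px=800.0, baseline_m=0.065)
```

Note the inverse relation: nearer objects produce larger disparities, which is what the helmet exploits when it shifts the displayed content to convey depth.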

  10. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    PubMed Central

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708
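    The proposed CCOOMP algorithm is beyond a short sketch, but plain orthogonal matching pursuit, the greedy sparse-recovery family it extends, can be illustrated. The dictionary and 2-sparse target below are synthetic and purely illustrative:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily grow the support, then
    least-squares refit the selected dictionary atoms at every step."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 40))
A /= np.linalg.norm(A, axis=0)       # unit-norm dictionary columns
x_true = np.zeros(40)
x_true[[5, 17]] = [2.0, -1.5]        # a 2-sparse target scene
y = A @ x_true                       # noise-free measurement
x_hat = omp(A, y, sparsity=2)
```

In CS-MFP the dictionary columns would be replica fields computed from wave propagation theory for candidate target positions; when those columns are highly coherent, plain OMP degrades, which motivates the paper's coherence-excluding variant.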

  11. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing.

    PubMed

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708

  12. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels

    PubMed Central

    2014-01-01

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract and obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of the MRI images. Consequently, articulatory parameters are effectively measured as tongue movement is observed, and the specific shape and position of the tongue are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand some form of visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. When the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments showed a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production. PMID:25060583

  13. Acoustical standards in engineering acoustics

    NASA Astrophysics Data System (ADS)

    Burkhard, Mahlon D.

    2001-05-01

    The Engineering Acoustics Technical Committee is concerned with the evolution and improvement of acoustical techniques and apparatus, and with the promotion of new applications of acoustics. As cited in the Membership Directory and Handbook (2002), its interest areas include transducers and arrays; underwater acoustic systems; acoustical instrumentation and monitoring; applied sonics, promotion of useful effects, information gathering and transmission; audio engineering; acoustic holography and acoustic imaging; acoustic signal processing (equipment and techniques); and ultrasound and infrasound. Evident connections between engineering and standards include the needs for calibration, consistent terminology, uniform presentation of data, reference levels, and design targets for product development. Thus, for the acoustical engineer, standards are a tool for practice, for communication, and for comparison of one's efforts with those of others. The development of many standards depends on knowledge of the way products are put together for the marketplace, and acoustical engineers provide important input to the development of standards. Acoustical engineers and members of the Engineering Acoustics arm of the Society both benefit from and contribute to the acoustical standards of the Acoustical Society.

  14. An image reconstruction for Capella with the Steward Observatory/AFGL intensified video speckle interferometry system

    NASA Astrophysics Data System (ADS)

    Cocke, W. J.; Hege, E. K.; Hubbard, E. N.; Strittmatter, P. A.; Worden, S. P.

    Since their invention in 1970, speckle interferometric techniques have evolved from simple optical processing of photographic images to high-speed digital processing of quantum-limited video data. Basic speckle interferometric techniques are discussed, taking into account the implementation of two distinct data-recording/data-processing modes. A description of image reconstruction techniques is also provided. Two methods for image phase retrieval have been implemented: a phase-unwrapping method developed by Cocke (1980) and the phase-accumulation method of Knox and Thompson (1974). On February 3, 1981, analogue-mode speckle interferograms of Capella and the unresolved star Gamma Ori were obtained and reduced with both the phase-unwrapping and the Knox-Thompson methods.

  15. ΤND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    PubMed

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND, for the provision of a second, more objective opinion to the radiologists by exploiting image evidence.

  16. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    DOEpatents

    Saveliev, Alexei V.; Zelepouga, Serguei A.; Rue, David M.

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier, wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.

  17. Frequency-space prediction filtering for acoustic clutter and random noise attenuation in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Shin, Junseob; Huang, Lianjie

    2016-04-01

    Frequency-space prediction filtering (FXPF), also known as FX deconvolution, is a technique originally developed for random noise attenuation in seismic imaging. FXPF attempts to reduce random noise in seismic data by modeling only real signals that appear as linear or quasilinear events in the aperture domain. In medical ultrasound imaging, channel radio frequency (RF) signals from the main lobe appear as horizontal events after receive delays are applied while acoustic clutter signals from off-axis scatterers and electronic noise do not. Therefore, FXPF is suitable for preserving only the main-lobe signals and attenuating the unwanted contributions from clutter and random noise in medical ultrasound imaging. We adapt FXPF to ultrasound imaging, and evaluate its performance using simulated data sets from a point target and an anechoic cyst. Our simulation results show that using only 5 iterations of FXPF achieves contrast-to-noise ratio (CNR) improvements of 67 % in a simulated noise-free anechoic cyst and 228 % in a simulated anechoic cyst contaminated with random noise of 15 dB signal-to-noise ratio (SNR). Our findings suggest that ultrasound imaging with FXPF attenuates contributions from both acoustic clutter and random noise and therefore, FXPF has great potential to improve ultrasound image contrast for better visualization of important anatomical structures and detection of diseased conditions.
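    A heavily simplified, single-pass stand-in for f-x prediction filtering can be sketched: at each temporal frequency, a short prediction filter is fit across channels by least squares, and only the predicted (laterally coherent) part is kept. This illustrates the principle only; it is not the authors' iterated FXPF:

```python
import numpy as np

def fx_prediction_filter(data, order=3):
    """Crude f-x prediction filter over a channels x time array.

    At each temporal frequency, channels beyond `order` are replaced by
    their least-squares forward prediction from the preceding channels,
    so laterally predictable (coherent) energy is kept and the
    unpredictable residual is attenuated.
    """
    nch, nt = data.shape
    D = np.fft.rfft(data, axis=1)          # time -> frequency, per channel
    out = D.copy()
    for k in range(D.shape[1]):
        s = D[:, k]
        # Forward-prediction system: s[i] ~ a . (s[i-1], ..., s[i-order]).
        A = np.array([s[i - order:i][::-1] for i in range(order, nch)])
        b = s[order:]
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        out[order:, k] = A @ a             # least-squares projection of b
    return np.fft.irfft(out, n=nt, axis=1)

# A laterally flat (coherent) event passes essentially unchanged...
t = np.arange(64)
wavelet = np.exp(-((t - 20.0) / 4.0) ** 2)
flat = np.tile(wavelet, (16, 1))
filtered_flat = fx_prediction_filter(flat)

# ...while incoherent random noise can only lose energy in the
# predicted channels (the prediction is an orthogonal projection).
rng = np.random.default_rng(2)
noise = rng.standard_normal((16, 64))
filtered_noise = fx_prediction_filter(noise)
```

The flat event corresponds to the delayed main-lobe signal, which appears horizontal across channels; clutter and noise break that lateral predictability and are suppressed.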

  18. Novel methods for acoustic and elastic wave-based subsurface imaging

    NASA Astrophysics Data System (ADS)

    Heidari, Amir Homayoun

    Novel, accurate and computationally efficient methods for wave-based subsurface imaging in acoustic and elastic media are developed. The methods are based on Arbitrarily Wide-Angle Wave Equations (AWWE), which are highly accurate space-domain one-way wave equations formulated in terms of displacement components. The main contributions of this research are as follows. (I) Acoustic-AWWE imaging: a new time-domain migration technique that is highly accurate for imaging steep dips in heterogeneous media. Similar in form to the conventional 15° equation, the acoustic AWWE is implemented using an efficient double-marching explicit finite-difference scheme. Its accuracy and efficiency are studied both analytically and through numerical experiments. The method achieves highly accurate images at only a few times the computational cost of conventional low-order methods. (II) A new class of highly accurate absorbing boundary conditions (ABCs) for modeling and imaging with high-order one-way wave equations and parabolic equations. These ABCs are developed using special imaginary-length finite elements. They effectively absorb the incident wave front and generate artifact-free images with as few as three absorbing layers, and are essential tools for imaging in truncated domains and in underwater acoustics. (III) Elastic-AWWE imaging: the first high-order space-domain displacement-based elastic imaging method, developed in this research. The method, which is applicable to complex elastic media, is implemented using a unique downward-continuation technique. At each depth step, a half-space is attached to the physical layer to simulate one-way propagation. The half-space is effectively approximated using special imaginary-length finite elements. The method is eventually implemented in the frequency-space domain using a finite-difference method. Numerical instabilities due to improper mapping of complex wave modes are suppressed by rotating the AWWE parameters in complex

  19. Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image.

    PubMed

    Songfan Yang; Bhanu, B

    2012-08-01

    Existing video-based facial expression recognition techniques analyze the geometry-based and appearance-based information in every frame as well as explore the temporal relation among frames. In contrast, we present a new image-based representation and an associated reference image, called the emotion avatar image (EAI) and the avatar reference, respectively. This representation leverages out-of-plane head rotation. It is not only robust to outliers but also provides a method to aggregate dynamic information from expressions of various lengths. The approach to facial expression analysis consists of the following steps: 1) face detection; 2) face registration of video frames with the avatar reference to form the EAI representation; 3) computation of features from EAIs using both local binary patterns and local phase quantization; and 4) classification of the feature as one of the emotion types by using a linear support vector machine classifier. Our system is tested on the Facial Expression Recognition and Analysis Challenge (FERA2011) data, i.e., the Geneva Multimodal Emotion Portrayal-Facial Expression Recognition and Analysis Challenge (GEMEP-FERA) data set. The experimental results demonstrate that the information captured in an EAI for a facial expression is a very strong cue for emotion inference. Moreover, our method suppresses person-specific information for emotion and performs well on unseen data.
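    Step 3 uses local binary patterns; the basic 3x3 LBP feature can be sketched as follows. This is the generic textbook operator, not necessarily the exact variant the authors use:

```python
import numpy as np

def lbp_codes(image):
    """Basic 3x3 local binary patterns: threshold each pixel's 8 neighbours
    against the centre and pack the comparison bits into a code in [0, 255]."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(image):
    """256-bin normalized histogram of LBP codes: the texture feature vector."""
    hist = np.bincount(lbp_codes(image).ravel(), minlength=256)
    return hist / hist.sum()
```

Concatenating such histograms over a grid of image blocks gives the appearance feature that is then fed to the linear SVM.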

  20. Geometric-structure-based directional filtering for error concealment in image/video transmission

    NASA Astrophysics Data System (ADS)

    Zeng, Wenjun; Liu, Bede

    1995-12-01

    The time-varying nature of wireless channels can cause severe error bursts or dropouts, so it is important to recover lost data in coded images in interactive video communication over wireless networks. A good spatial interpolation strategy is considered essential for replenishing missing blocks in still images and video frames. This paper proposes a novel spatial directional interpolation scheme that makes use of the local geometric information extracted from the surrounding blocks. Specifically, statistics of the local geometric structure are modeled as a bimodal distribution. The two nearest surrounding layers of pixels are converted into a binary pattern to reveal the local geometric structure. A measure of directional consistency is employed to resolve the ambiguity of possible connections of the transition points on the inner layer. The transition lines can be specified to within one-pixel accuracy, unlike previous directional filtering schemes, which usually filter along only one direction chosen from a finite candidate set. The new approach produces results that are superior to those of other approaches in terms of both peak signal-to-noise ratio (PSNR) and visual quality. The computation is also much reduced. It is observed that local structures such as edges, streaks and corners are well preserved in the reconstructed image.

  1. Monitoring an eruption fissure in 3D: video recording, particle image velocimetry and dynamics

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2015-04-01

    The processes during an eruption are complex, and several parameters are measured to understand them better. One of these parameters is the velocity of particles and patterns, such as ash and emitted magma, and of the volcano itself. The resulting velocity field provides insights into the dynamics of a vent. Here we test our algorithm for 3-dimensional velocity fields on videos of the second fissure eruption of Bárdarbunga in 2014, where we acquired videos of lava fountains on the main fissure with two high-speed cameras at small angles between the cameras. Additionally, we test the algorithm on videos of the geyser Strokkur, where we had three cameras and larger angles between the cameras. The velocity is calculated by a correlation in Fourier space between contiguous images. Since we only have the velocity field of the surface, smaller angles result in a better resolution of the velocity field in the near field. Larger angles can also be useful for general movements, e.g., to obtain the direction, height and velocity of eruption clouds. In summary, 3D velocimetry can be used for several applications and with different setups, depending on the application.
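    Correlation in Fourier space between contiguous frames is essentially phase correlation. A minimal integer-shift sketch with synthetic frames (the authors' PIV pipeline works on interrogation windows and adds sub-pixel refinement, which is omitted here):

```python
import numpy as np

def frame_shift(curr, prev):
    """Integer pixel displacement of curr relative to prev, from the peak
    of the phase-only cross-correlation computed in Fourier space."""
    R = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    R /= np.maximum(np.abs(R), 1e-12)          # keep the phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the far half of the array back to negative shifts.
    if dy > curr.shape[0] // 2:
        dy -= curr.shape[0]
    if dx > curr.shape[1] // 2:
        dx -= curr.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(3, -5), axis=(0, 1))  # pattern moved 3 px down, 5 px left
shift = frame_shift(curr, prev)
```

Dividing the recovered pixel shift by the frame interval and applying the spatial calibration yields the velocity field; doing this per camera and triangulating gives the 3-D field.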

  2. Rocket engine plume diagnostics using video digitization and image processing - Analysis of start-up

    NASA Technical Reports Server (NTRS)

    Disimile, P. J.; Shoe, B.; Dhawan, A. P.

    1991-01-01

    Video digitization techniques have been developed to analyze the exhaust plume of the Space Shuttle Main Engine. Temporal averaging and a frame-by-frame analysis provide data used to evaluate the capabilities of image processing techniques as measurement tools. These capabilities include determining the time required for the Mach disk to reach a fully developed state. Other results show that the Mach disk tracks the nozzle over short time intervals and that dominant frequencies exist for the nozzle and Mach disk movement.

  3. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.

  4. In situ calibration of an infrared imaging video bolometer in the Large Helical Device

    SciTech Connect

    Mukai, K. Peterson, B. J.; Pandya, S. N.; Sano, R.

    2014-11-15

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.

  5. Imaging morphodynamics of human blood cells in vivo with video-rate third harmonic generation microscopy

    PubMed Central

    Chen, Chien-Kuo; Liu, Tzu-Ming

    2012-01-01

    With a video-rate third harmonic generation (THG) microscopy system, we imaged the microcirculation beneath human skin without labeling. Not only the speed of circulation but also the morpho-hydrodynamics of blood cells can be analyzed. Lacking nuclei, red blood cells (RBCs) show a typical parachute-like, hollow-core morphology under THG microscopy. Quite different from RBCs, round, granule-rich blood cells with strong THG contrast appear in circulation every now and then. The corresponding volume densities in blood, evaluated from their frequency of appearance and the velocity of circulation, fall within the physiological range of human white blood cell counts. PMID:23162724

  7. Acoustic property reconstruction of a pygmy sperm whale (Kogia breviceps) forehead based on computed tomography imaging.

    PubMed

    Song, Zhongchang; Xu, Xiao; Dong, Jianchen; Xing, Luru; Zhang, Meng; Liu, Xuecheng; Zhang, Yu; Li, Songhai; Berggren, Per

    2015-11-01

    Computed tomography (CT) imaging and experimental sound measurements were used to reconstruct the acoustic properties (density, velocity, and impedance) of the forehead tissues of a deceased pygmy sperm whale (Kogia breviceps). The forehead was segmented along the body axis and sectioned into cross-sectional slices, which were further cut into sample pieces for measurements. Hounsfield units (HUs) of the corresponding measured pieces were obtained from CT scans, and regression analyses were conducted to investigate the linear relationships between the tissues' HUs and velocity, and HUs and density. The distributions of the acoustic properties of the head at axial, coronal, and sagittal cross sections were reconstructed, revealing that the nasal passage system was asymmetric and the cornucopia-shaped spermaceti organ was in the right nasal passage, surrounded by tissues and air sacs. A distinct dense theca was discovered in the posterior-dorsal area of the melon, which was characterized by low velocity in the inner core and high velocity in the outer region. Statistical analyses revealed significant differences in density, velocity, and acoustic impedance between all four structures, melon, spermaceti organ, muscle, and connective tissue (p < 0.001). The obtained acoustic properties of the forehead tissues provide important information for understanding the species' bioacoustic characteristics.
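The regression step described above fits a linear model of a tissue property against Hounsfield units. A minimal sketch with synthetic calibration data (the coefficients and noise level below are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

# Synthetic calibration data: assume tissue density rises roughly
# linearly with Hounsfield units, plus measurement noise.
rng = np.random.default_rng(0)
hu = np.linspace(-100, 100, 30)                               # Hounsfield units
density = 1.03 + 9.0e-4 * hu + rng.normal(0, 0.003, hu.size)  # g/cm^3

# Fit density = a*HU + b by ordinary least squares, analogous to the
# paper's regressions of density (and sound velocity) on HU.
a, b = np.polyfit(hu, density, 1)
predict = lambda h: a * h + b
print(f"density(HU=0) ~ {predict(0.0):.3f} g/cm^3")
```

Once the two regressions (HU vs. density, HU vs. velocity) are fitted, every voxel of the CT volume can be mapped to acoustic properties, which is how the cross-sectional property maps are reconstructed.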

  9. Military jet noise source imaging using multisource statistically optimized near-field acoustical holography.

    PubMed

    Wall, Alan T; Gee, Kent L; Neilsen, Tracianne B; McKinley, Richard L; James, Michael M

    2016-04-01

    The identification of acoustic sources is critical to targeted noise reduction efforts for jets on high-performance tactical aircraft. This paper describes the imaging of acoustic sources from a tactical jet using near-field acoustical holography techniques. The measurement consists of a series of scans over the hologram with a dense microphone array. Partial field decomposition methods are performed to generate coherent holograms. Numerical extrapolation of data beyond the measurement aperture mitigates artifacts near the aperture edges. A multisource equivalent wave model is used that includes the effects of the ground reflection on the measurement. Multisource statistically optimized near-field acoustical holography (M-SONAH) is used to reconstruct apparent source distributions between 20 and 1250 Hz at four engine powers. It is shown that M-SONAH produces accurate field reconstructions for both inward and outward propagation in the region spanned by the physical hologram measurement. Reconstructions across the set of engine powers and frequencies suggest that directivity depends mainly on estimated source location; sources farther downstream radiate at a higher angle relative to the inlet axis. At some frequencies and engine powers, reconstructed fields exhibit multiple radiation lobes originating from overlapped source regions, which is a phenomenon relatively recently reported for full-scale jets. PMID:27106340
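The core operation in near-field acoustical holography is propagating a measured pressure plane toward or away from the source. The sketch below is the textbook planar angular-spectrum kernel with a hard evanescent-wave cutoff, not the multisource SONAH variant the paper uses (SONAH-type methods regularize the evanescent components rather than discarding them):

```python
import numpy as np

def propagate(p, dx, dz, k):
    """Planar NAH: move a measured pressure plane p(x, y) by dz with the
    angular-spectrum method, discarding evanescent components."""
    ny, nx = p.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    filt = (kz2 > 0).astype(float)          # keep propagating waves only
    return np.fft.ifft2(np.fft.fft2(p) * np.exp(1j * kz * dz) * filt)

# Consistency check: a propagating plane wave pushed 0.5 m outward and
# then back is recovered on the hologram plane.
k = 2 * np.pi * 500 / 343.0                 # 500 Hz in air
n = np.arange(64)
p0 = np.tile(np.exp(2j * np.pi * n / 64), (64, 1))
p_back = propagate(propagate(p0, 0.05, 0.5, k), 0.05, -0.5, k)
print(np.max(np.abs(p_back - p0)))
```

The forward/backward round trip is the same "inward and outward propagation" consistency the abstract reports for M-SONAH, here in its simplest planar form.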

  10. Comparative Performance Of A Standard And High Line Rate Video Imaging System In A Cardiac Catheterization Laboratory

    NASA Astrophysics Data System (ADS)

    Rossi, Raymond P.; Ahrens, Charles; Groves, Bertron M.

    1985-09-01

    The performance of a new high line rate (1023) video imaging system (VHR) installed in the cardiac catheterization laboratory at the University of Colorado Health Sciences Center is compared to the previously installed standard line rate (525) video imaging system (pre-VHR). Comparative performance was assessed both quantitatively, using a standardized evaluation protocol, and qualitatively, based on analysis of data collected during the observation of clinical procedures for which the cardiologists were asked to rank the quality of the fluoroscopic image. The results of this comparative study are presented and suggest that the performance of the high line rate system is significantly improved over the standard line rate system.

  11. A comparison of traffic estimates of nocturnal flying animals using radar, thermal imaging, and acoustic recording.

    PubMed

    Horton, Kyle G; Shriver, W Gregory; Buler, Jeffrey J

    2015-03-01

    There are several remote-sensing tools readily available for the study of nocturnally flying animals (e.g., migrating birds), each possessing unique measurement biases. We used three tools (weather surveillance radar, thermal infrared camera, and acoustic recorder) to measure temporal and spatial patterns of nocturnal traffic estimates of flying animals during the spring and fall of 2011 and 2012 in Lewes, Delaware, USA. Our objective was to compare measures among different technologies to better understand their animal detection biases. For radar and thermal imaging, the greatest observed traffic rate tended to occur at, or shortly after, evening twilight, whereas for the acoustic recorder, peak bird flight-calling activity was observed just prior to morning twilight. Comparing traffic rates during the night for all seasons, we found that mean nightly correlations between acoustics and the other two tools were weakly correlated (thermal infrared camera and acoustics, r = 0.004 ± 0.04 SE, n = 100 nights; radar and acoustics, r = 0.14 ± 0.04 SE, n = 101 nights), but highly variable on an individual nightly basis (range = -0.84 to 0.92, range = -0.73 to 0.94). The mean nightly correlations between traffic rates estimated by radar and by thermal infrared camera during the night were more strongly positively correlated (r = 0.39 ± 0.04 SE, n = 125 nights), but also were highly variable for individual nights (range = -0.76 to 0.98). Through comparison with radar data among numerous height intervals, we determined that flying animal height above the ground influenced thermal imaging positively and flight call detections negatively. Moreover, thermal imaging detections decreased with the presence of cloud cover and increased with mean ground flight speed of animals, whereas acoustic detections showed no relationship with cloud cover presence but did decrease with increased flight speed. We found sampling methods to be positively correlated when comparing mean nightly
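The nightly comparison reported above boils down to a Pearson correlation between paired traffic-rate series from two sensors. A minimal sketch with made-up hourly counts (shaped like the radar vs. thermal-camera comparison, but not the study's data):

```python
import numpy as np

# Hourly traffic indices for one night from two sensors (illustrative).
radar   = np.array([120, 340, 610, 580, 450, 300, 190, 80], float)
thermal = np.array([ 15,  48,  90,  75,  60,  35,  22, 10], float)

# Pearson r for the night; the study averages such nightly r values
# across seasons (e.g., r = 0.39 +/- 0.04 SE over 125 nights for
# radar vs. thermal imaging).
r = np.corrcoef(radar, thermal)[0, 1]
print(f"nightly r = {r:.2f}")
```

Averaging these per-night coefficients, rather than pooling all hours, is what produces the mean-and-standard-error summaries quoted in the abstract.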

  13. Resonant acoustic nonlinearity for defect-selective imaging and NDT

    NASA Astrophysics Data System (ADS)

    Solodov, Igor

    2015-10-01

    The bottleneck problem of nonlinear NDT is the low efficiency of conversion from the fundamental frequency to nonlinear frequency components. In this paper, it is proposed to use a combination of mechanical resonance and nonlinearity of defects to enhance the input-output conversion. The concept of the defect as a nonlinear oscillator brings about new dynamic and frequency scenarios characteristic of parametric oscillations. The modes observed in experiment include sub- and superharmonic resonances with anomalously efficient generation of the higher harmonics and subharmonics. A modified version of the superharmonic resonance (combination frequency resonance) is used to enhance the efficiency of the frequency mixing mode of nonlinear NDT. All the resonant nonlinear modes are strongly localized in the defect area, which provides the basis for high-contrast, highly sensitive defect- and frequency-selective imaging.
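The conversion to new frequency components that the abstract discusses can be illustrated with the simplest possible nonlinearity: passing a pure tone through a quadratic response (a crude stand-in for a clapping crack, not the resonant defect model of the paper) puts energy at the second harmonic:

```python
import numpy as np

fs, f0 = 50_000.0, 1_000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * f0 * t)
y = x + 0.2 * x**2                       # weakly nonlinear response

spec = np.abs(np.fft.rfft(y * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin0 = np.argmin(np.abs(freqs - f0))     # fundamental
bin1 = np.argmin(np.abs(freqs - 2 * f0)) # second harmonic
print(spec[bin1] > 0.01 * spec[bin0])    # harmonic clearly present
```

The paper's point is that driving the defect at one of its resonances makes this input-output conversion far more efficient than the weak off-resonance case sketched here.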

  14. Development of passive submillimeter-wave video imaging systems for security applications

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Brömel, Anika; Anders, Solveig; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2012-10-01

    Passive submillimeter-wave imaging is a concept that has been in the focus of interest as a promising technology for security applications for a number of years. It utilizes the unique optical properties of submillimeter waves and promises an alternative to millimeter-wave and X-ray backscattering portals for personal security screening in particular. Possible application scenarios demand sensitive, fast, and flexible high-quality imaging techniques. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept and a continuous development since 2007. The cameras utilize arrays of superconducting transition-edge sensors (TES), i.e., cryogenic microbolometers, as radiation detectors. The TES are operated at temperatures below 1 K, cooled by a closed-cycle cooling system, and coupled to superconducting readout electronics. By this means, background limited photometry (BLIP) mode is achieved, providing the maximum possible signal-to-noise ratio. At video rates, this leads to a pixel NETD well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 3-10 m, a field of view of up to 2 m in height and a diffraction-limited spatial resolution on the order of 1-2 cm is provided. Opto-mechanical scanning systems are part of the optical setup and are capable of frame rates of up to 25 frames per second. Both spiraliform and linear scanning schemes have been developed. Several electronic and software components are used for system control, signal amplification, and data processing. Our objective is the design of an application-ready and user-friendly imaging system. For application in real world security screening scenarios, it can be extended using image processing and
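The quoted 1-2 cm resolution follows from the Rayleigh diffraction criterion at 350 GHz. The sketch below assumes a 0.5 m optics aperture, a value not quoted in the abstract:

```python
def rayleigh_resolution(freq_hz, aperture_m, distance_m):
    """Diffraction-limited spot size at the object plane:
    delta ~ 1.22 * lambda * L / D (Rayleigh criterion)."""
    c = 2.998e8
    lam = c / freq_hz
    return 1.22 * lam * distance_m / aperture_m

# At 350 GHz (lambda ~ 0.86 mm) with an assumed 0.5 m aperture, the
# stand-off range of 3-10 m gives spots from sub-cm up to ~2 cm.
for L in (3.0, 10.0):
    print(f"{L:4.1f} m -> {100 * rayleigh_resolution(350e9, 0.5, L):.1f} cm")
```

This is why passive submillimeter cameras need comparatively large free-form mirror optics: halving the aperture doubles the spot size at every stand-off distance.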

  15. Performance characterization of image and video analysis systems at Siemens Corporate Research

    NASA Astrophysics Data System (ADS)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation, and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms to faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have focused on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  16. Security SVGA image sensor with on-chip video data authentication and cryptographic circuit

    NASA Astrophysics Data System (ADS)

    Stifter, P.; Eberhardt, K.; Erni, A.; Hofmann, K.

    2005-10-01

    Security applications of sensors in a networking environment create a strong demand for sensor authentication and secure data transmission, owing to the possibility of man-in-the-middle and address-spoofing attacks. A secure sensor system should therefore fulfil the three standard requirements of cryptography, namely data integrity, authentication, and non-repudiation. This paper presents a unique sensor development by AIM, the so-called SecVGA, which is a high-performance, monochrome (B/W) CMOS active pixel image sensor. The device is capable of capturing still and motion images with a resolution of 800x600 active pixels and converting the image into a digital data stream. The distinguishing feature of this development in comparison to standard imaging sensors is the on-chip cryptographic engine which provides the sensor authentication, based on a one-way challenge/response protocol. The implemented protocol results in the exchange of a session key which secures the following video data transmission. This is achieved by calculating a cryptographic checksum derived from a stateful hash value of the complete image frame. Every sensor contains an EEPROM memory cell for the non-volatile storage of a unique identifier. The imager is programmable via a two-wire I2C-compatible interface which controls the integration time, the active window size of the pixel array, the frame rate, and various operating modes, including the authentication procedure.
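The shape of such a scheme (challenge/response against a provisioned secret, session-key derivation, then a per-frame cryptographic checksum) can be sketched with standard primitives. Everything below (the key names, the message layout, the use of HMAC-SHA-256) is an assumption for illustration, not the SecVGA protocol itself:

```python
import hashlib, hmac, os

SHARED_SECRET = b"factory-provisioned-key"   # e.g. stored in sensor EEPROM

def sensor_response(challenge: bytes) -> bytes:
    """One-way response: keyed hash over the host's fresh challenge."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def derive_session_key(challenge: bytes, response: bytes) -> bytes:
    return hashlib.sha256(challenge + response).digest()

def frame_checksum(session_key: bytes, frame: bytes) -> bytes:
    """Cryptographic checksum over a complete image frame."""
    return hmac.new(session_key, frame, hashlib.sha256).digest()

# Host side: issue a fresh challenge, verify the response, derive the key.
challenge = os.urandom(16)
response = sensor_response(challenge)
assert hmac.compare_digest(response, sensor_response(challenge))
key = derive_session_key(challenge, response)

frame = bytes(800 * 600)                     # dummy 800x600 B/W frame
tag = frame_checksum(key, frame)
print(len(tag))                              # 32-byte SHA-256 tag
```

A verifier holding the same session key recomputes the checksum per frame; any tampering with the pixel data changes the tag, which covers the data-integrity requirement named in the abstract.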

  17. Overview of image processing tools to extract physical information from JET videos

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
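A cheap stand-in for the motion-estimation step mentioned above is phase correlation, which recovers the dominant translation between two frames from their cross-power spectrum. This is a toy global-shift estimator, not the optical-flow machinery used on the JET cameras, which must cope with deforming, reshaping objects:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation of frame a relative to b."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                       # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = a.shape
    return (int(dy - ny) if dy > ny // 2 else int(dy),
            int(dx - nx) if dx > nx // 2 else int(dx))

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
moved = np.roll(frame, (3, -5), axis=(0, 1))     # scene moved 3 down, 5 left
print(phase_correlation_shift(moved, frame))
```

Because the estimator works in the Fourier domain, the same idea extends naturally to the MPEG compressed-domain approximation the abstract mentions, where motion vectors are already available per block.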

  18. Development of single-channel stereoscopic video imaging modality for real-time retinal imaging

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo

    2016-03-01

    Stereoscopic retinal images can effectively assist doctors. Most stereo imaging surgical microscopes are based on dual optical channels and use dual cameras, in which the left and right cameras capture the corresponding left- and right-eye views. This study developed a single-channel stereoscopic retinal imaging modality based on a transparent rotating deflector (TRD). Two different viewing angles are generated by imaging through the TRD, which is mounted on a motor synchronized with a camera and placed in a single optical channel. Because the objective lens of the imaging modality generates a stereo image of an object at its focal point, and given the structure of the eye, the optical setup is compatible with retinal imaging when the cornea and eye lens are engaged with the objective lens.

  19. Microstructure Imaging Using Frequency Spectrum Spatially Resolved Acoustic Spectroscopy (f-SRAS)

    NASA Astrophysics Data System (ADS)

    Sharples, S. D.; Li, W.; Clark, M.; Somekh, M. G.

    2010-02-01

    Material microstructure can have a profound effect on the mechanical properties of a component, such as strength and resistance to creep and fatigue. SRAS—spatially resolved acoustic spectroscopy—is a laser ultrasonic technique which can image microstructure using highly localized surface acoustic wave (SAW) velocity as a contrast mechanism, as this is sensitive to crystallographic orientation. The technique is noncontact, nondestructive, rapid, can be used on large components, and is highly tolerant of acoustic aberrations. Previously, the SRAS technique has been demonstrated using a fixed frequency excitation laser and a variable grating period (k-vector) to determine the most efficiently generated SAWs, and hence the velocity. Here, we demonstrate an implementation which uses a fixed grating period with a broadband laser excitation source. The velocity is determined by analyzing the measured frequency spectrum. Experimental results using this "frequency spectrum SRAS" (f-SRAS) method are presented. Images of microstructure on an industrially relevant material are compared to those obtained using the previous SRAS method ("k-SRAS"); excellent agreement is observed. Moreover, f-SRAS is much simpler and potentially much more rapid than k-SRAS, as the velocity can be determined at each sample point in one single laser shot, rather than scanning the grating period.
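The f-SRAS measurement principle is that a fixed grating period Λ generates SAWs most efficiently at f = v/Λ, so the peak of the measured frequency spectrum gives the local velocity directly. A sketch on a synthetic signal (the period, velocity, and sampling rate are illustrative assumptions):

```python
import numpy as np

period = 24e-6                    # fringe spacing, m (assumed)
v_true = 2900.0                   # local SAW velocity, m/s (assumed)
fs = 1e9                          # sampling rate, Hz
t = np.arange(4096) / fs
sig = np.sin(2 * np.pi * (v_true / period) * t) * np.hanning(t.size)

# One laser shot -> one spectrum -> one velocity: v = f_peak * period.
spec = np.abs(np.fft.rfft(sig))
f_peak = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec)]
print(f"v ~ {f_peak * period:.0f} m/s")
```

Scanning this single-shot estimate over the sample surface, and mapping velocity to grey level, is what produces the microstructure image, since velocity varies with crystallographic orientation from grain to grain.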

  20. Symmetry analysis for nonlinear time reversal methods applied to nonlinear acoustic imaging

    NASA Astrophysics Data System (ADS)

    Dos Santos, Serge; Chaline, Jennifer

    2015-10-01

    Using symmetry invariance, nonlinear Time Reversal (TR), and reciprocity properties, the classical NEWS methods are supplemented and improved by new excitations having the intrinsic property of enlarging the frequency analysis bandwidth and time-domain scales, now with both medical acoustics and electromagnetic applications. The analysis of invariant quantities is a well-known tool which is often used in nonlinear acoustics in order to simplify complex equations. Based on a fundamental physical principle known as symmetry analysis, this approach consists in finding judicious variables, intrinsically scale-dependent, that are able to describe all stages of behaviour on the same theoretical foundation. Based on previously published results in the nonlinear acoustics area, some practical implementations are proposed as a new way to define TR-NEWS based methods applied to NDT and medical bubble-based non-destructive imaging. This paper shows how symmetry analysis can help us to define new methodologies and new experimental set-ups involving modern signal processing tools. Examples of practical realizations are proposed in the context of biomedical non-destructive imaging using Ultrasound Contrast Agents (UCAs), where symmetry and invariance properties allow us to define a microscopic scale-invariant experimental set-up describing the intrinsic symmetries of the microscopic complex system.

  1. Eigenfunction analysis of stochastic backscatter for characterization of acoustic aberration in medical ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Varslot, Trond; Krogstad, Harald; Mo, Eirik; Angelsen, Bjørn A.

    2004-06-01

    Presented here is a characterization of aberration in medical ultrasound imaging. The characterization is optimal in the sense of maximizing the expected energy in a modified beamformer output of the received acoustic backscatter. Aberration correction based on this characterization takes the form of an aberration correction filter. The situation considered is frequently found in applications when imaging organs through a body wall: aberration is introduced in a layer close to the transducer, and acoustic backscatter from a scattering region behind the body wall is measured at the transducer surface. The scattering region consists of scatterers randomly distributed with very short correlation length compared to the acoustic wavelength of the transmit pulse. The scatterer distribution is therefore assumed to be δ correlated. This paper shows how maximizing the expected energy in a modified beamformer output signal naturally leads to eigenfunctions of a Fredholm integral operator, where the associated kernel function is a spatial correlation function of the received stochastic signal. Aberration characterization and aberration correction are presented for simulated data constructed to mimic aberration introduced by the abdominal wall. The results compare well with what is obtainable using data from a simulated point source.
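The eigenfunction idea above can be illustrated with a toy model: if the backscatter from delta-correlated scatterers reduces to a common random amplitude times a fixed per-element aberration, the dominant eigenvector of the spatial covariance of the received signals recovers that aberration. This is a noise-free rank-1 sketch, not the paper's Fredholm-operator formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_elem, n_real = 16, 4000
phase = rng.uniform(-1.0, 1.0, n_elem)           # body-wall phase screen
a = np.exp(1j * phase)                           # per-element aberration

s = rng.normal(size=(n_real, 1)) + 1j * rng.normal(size=(n_real, 1))
x = s * a[None, :]                               # received realizations
R = x.T @ x.conj() / n_real                      # spatial covariance matrix

w, v = np.linalg.eigh(R)
est = v[:, -1]                                   # dominant eigenvector
est = est * np.exp(1j * (phase[0] - np.angle(est[0])))  # fix global phase
err = np.max(np.abs(np.angle(est * np.conj(a))))
print(f"max phase error ~ {err:.2e} rad")
```

The conjugate of the recovered eigenvector is then usable as an aberration correction filter applied to the receive channels, which is the correction form described in the abstract.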

  2. Bond-selective photoacoustic imaging by converting molecular vibration into acoustic waves

    PubMed Central

    Hui, Jie; Li, Rui; Phillips, Evan H.; Goergen, Craig J.; Sturek, Michael; Cheng, Ji-Xin

    2016-01-01

    The quantized vibration of chemical bonds provides a way of detecting specific molecules in a complex tissue environment. Unlike pure optical methods, for which imaging depth is limited to a few hundred micrometers by significant optical scattering, photoacoustic detection of vibrational absorption breaks through the optical diffusion limit by taking advantage of diffused photons and weak acoustic scattering. Key features of this method include both high scalability of imaging depth from a few millimeters to a few centimeters and chemical bond selectivity as a novel contrast mechanism for photoacoustic imaging. Its biomedical applications span detection of white matter loss and regeneration, assessment of breast tumor margins, and diagnosis of vulnerable atherosclerotic plaques. This review provides an overview of the recent advances made in vibration-based photoacoustic imaging and various biomedical applications enabled by this new technology. PMID:27069873

  4. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the sixth quarter of this research project the research team developed a method and the experimental procedures for acquiring the data needed for ultrasonic tomography of rock core samples under triaxial stress conditions as outlined in Task 10. Traditional triaxial compression experiments, where compressional and shear wave velocities are measured, provide little or no information about the internal spatial distribution of mechanical damage within the sample. The velocities measured platen-to-platen or sensor-to-sensor reflect an averaging of all the velocities occurring along that particular raypath across the boundaries of the rock. The research team is attempting to develop and refine a laboratory equivalent of seismic tomography for use on rock samples deformed under triaxial stress conditions. Seismic tomography, utilized for example in crosswell tomography, allows an imaging of the velocities within a discrete zone within the rock. Ultrasonic or acoustic tomography is essentially the extension of that field technology applied to rock samples deforming in the laboratory at high pressures. This report outlines the technical steps and procedures for developing this technology for use on weak, soft chalk samples. Laboratory tests indicate that the chalk samples exhibit major changes in compressional and shear wave velocities during compaction. Since chalk is the rock type responsible for the severe subsidence and compaction in the North Sea it was selected for the first efforts at tomographic imaging of soft rocks. Field evidence from the North Sea suggests that compaction, which has resulted in over 30 feet of subsidence to date, is heterogeneously distributed within the reservoir. The research team will attempt to image this very process in chalk samples. The initial tomographic studies (Scott et al., 1994a,b; 1998) were accomplished on well cemented, competent rocks such as Berea sandstone.
The extension of the technology to weaker samples is
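The inversion underlying travel-time tomography is linear: the travel time along ray i is t_i = Σ_j L[i,j]·s[j], where L holds the path length of ray i in cell j and s is the slowness (reciprocal velocity) of each cell. A toy least-squares inversion, purely illustrative of the technique and not the authors' processing chain (the geometry below is a random path matrix, not real ray tracing):

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_rays = 25, 120
L = rng.uniform(0.0, 1.0, (n_rays, n_cells))         # path lengths per cell
s_true = 1.0 / rng.uniform(2000.0, 4000.0, n_cells)  # slowness, s/m
t = L @ s_true                                       # noiseless travel times

# With more well-conditioned crossing rays than cells, ordinary least
# squares recovers the slowness (hence velocity) of every cell.
s_est, *_ = np.linalg.lstsq(L, t, rcond=None)
print(np.max(np.abs(s_est - s_true) / s_true))       # near-exact recovery
```

Mapping the recovered slowness back to velocity per cell gives the spatial image of compaction damage that platen-to-platen averages cannot provide.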

  5. High-density optical discs for audio, video, and image applications

    NASA Astrophysics Data System (ADS)

    Gan, Fuxi; Hou, Lisong

    2003-04-01

    Great progress in optical storage has taken place in the last decade. The development of optical discs is always towards higher and higher storage density and data transfer rate in order to meet the ever-increasing requirements of applications in audio, video and image areas. It has been proved a logical and effective approach to employ laser light of shorter wavelength and lenses of higher numerical aperture for increasing storage density, as is shown by the evolution of optical disc from CD family to DVD family. At present, research and development of high density DVD (HD-DVD), blu-ray disc and advanced storage magneto-optical (AS-MO) disc are carried out very extensively. Meanwhile, miniaturization of disc size and use of multiplication techniques to increase the storage density and capacity have already given rise to new formats such as iD Photo disc and Data Play disc as well as multi-layer discs. Digital holographic storage (DHS) disc is also one of the research and development subjects of many companies and research institutions. Some new concept optical storage such as fluorescent multilayer disc (FMD) is also under intensive development. All these have greatly promoted applications of optical discs in audio, video and image devices.
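The wavelength/numerical-aperture argument above is a first-order scaling law: the focused spot diameter is roughly λ/NA, so areal density scales as (NA/λ)². A sketch using the standard generation parameters (real capacity gains also involve tighter track pitch and better channel coding, so the ratios are indicative only):

```python
# Standard generation parameters: (wavelength in m, numerical aperture).
formats = {
    "CD":      (780e-9, 0.45),
    "DVD":     (650e-9, 0.60),
    "Blu-ray": (405e-9, 0.85),
}

def relative_density(lam, na):
    """Areal density relative to CD, via the (NA / lambda)^2 scaling."""
    lam0, na0 = formats["CD"]
    return (na / lam) ** 2 / (na0 / lam0) ** 2

for name, (lam, na) in formats.items():
    print(f"{name:8s} spot ~ {1e9 * lam / na:4.0f} nm, "
          f"density x{relative_density(lam, na):.1f} vs CD")
```

The scaling shows why shorter lasers and higher-NA lenses dominated each generation change: DVD gains roughly 2.5x over CD and Blu-ray over an order of magnitude from the optics alone.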

  6. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey

    PubMed Central

    Dennis, T; Start, R D; Cross, S S

    2005-01-01

    Aims: To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. Methods: A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. Results: There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. Conclusions: There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings. PMID:15735155

  7. Guidance for horizontal image translation (HIT) of high definition stereoscopic video production

    NASA Astrophysics Data System (ADS)

    Broberg, David K.

    2011-03-01

    Horizontal image translation (HIT) is an electronic process for shifting the left-eye and right-eye images horizontally as a way to alter the stereoscopic characteristics and alignment of 3D content after signals have been captured by stereoscopic cameras. When used cautiously and with full awareness of the impact on other interrelated aspects of the stereography, HIT is a valuable tool in the post-production process as a means to modify stereoscopic content for more comfortable viewing. Most commonly it is used to alter the zero parallax setting (ZPS), to compensate for stereo window violations, or to compensate for excessive positive or negative parallax in the source material. As more and more cinematic 3D content migrates to television distribution channels, the use of this tool will likely expand. Without proper attention to certain guidelines, the use of HIT can actually harm the 3D viewing experience. This paper provides guidance on its most effective use and describes some of the interrelationships and trade-offs. The paper recommends the adoption of the cinematic 2K video format as a 3D source master format for high definition television distribution of stereoscopic 3D video programming.
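Mechanically, HIT is an opposed horizontal crop/shift of the two eye views: it adds a constant offset to every point's screen disparity, which is what moves the zero parallax setting. A minimal numpy sketch (illustrative, not code from the paper):

```python
import numpy as np

def hit(left, right, shift):
    """Horizontal image translation: crop `shift` columns from opposing
    sides of the two views. When the cropped windows are displayed
    aligned, every point's screen disparity (right-eye x minus left-eye x)
    changes by +shift pixels, shifting the zero parallax setting."""
    w = left.shape[1] - abs(shift)
    if shift >= 0:
        return left[:, shift:shift + w], right[:, :w]
    return left[:, :w], right[:, -shift:-shift + w]

# A point at column 10 in the left eye and 14 in the right eye: disparity +4.
left = np.zeros((1, 20)); right = np.zeros((1, 20))
left[0, 10] = 1; right[0, 14] = 1
l2, r2 = hit(left, right, 1)
print(int(np.argmax(r2) - np.argmax(l2)))  # disparity 4 becomes 4 + 1 = 5
```

The cropping is also why HIT interacts with the stereo window: columns lost at the frame edges narrow the usable image, so excessive translation trades parallax comfort for field of view.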

  8. Acoustic Image Models for Obstacle Avoidance with Forward-Looking Sonar

    NASA Astrophysics Data System (ADS)

    Masek, T.; Kölsch, M.

    Long-range forward-looking sonars (FLS) have recently been deployed in autonomous unmanned vehicles (AUVs). We present models for various features in acoustic images, with the goal of using this sensor for altitude maintenance, obstacle detection and obstacle avoidance. First, we model the backscatter and FLS noise as pixel-based, spatially varying intensity distributions. Experiments show that these models predict noise with an accuracy of over 98%. Next, the presence of acoustic noise from two other sources, including a modem, is reliably detected with a template-based filter and a threshold learned from training data. Lastly, the ocean floor's location and orientation are estimated with a gradient-descent method using a site-independent template, yielding sufficiently accurate results in 95% of the frames. Temporal information is expected to further improve the performance.
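The pixel-based, spatially varying noise model can be approximated, in its simplest form, by per-pixel background statistics learned from noise-only frames; pixels far outside that distribution become obstacle candidates. A hedged sketch on synthetic data (the paper's actual distributions are richer than a per-pixel Gaussian):

```python
import numpy as np

rng = np.random.default_rng(0)

# Learn a per-pixel background model from noise-only sonar frames.
# Synthetic data; a per-pixel Gaussian is a simplification of the
# spatially varying intensity distributions the paper models.
train = rng.normal(loc=10.0, scale=2.0, size=(100, 32, 32))  # 100 frames
mu, sigma = train.mean(axis=0), train.std(axis=0)

# A new frame containing a bright "obstacle" patch.
frame = rng.normal(10.0, 2.0, size=(32, 32))
frame[12:16, 12:16] += 25.0

# Flag pixels that are very unlikely under the learned noise model.
detection = frame > mu + 4.0 * sigma
print(bool(detection[12:16, 12:16].all()), int(detection.sum()))
```

A 4-sigma threshold keeps the per-pixel false-alarm rate tiny while the obstacle patch, sitting far above the learned background, is flagged in full.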

  9. Finite element modeling of atomic force microscopy cantilever dynamics during video rate imaging

    SciTech Connect

    Howard-Knight, J. P.; Hobbs, J. K.

    2011-04-01

    A dynamic finite element model has been constructed to simulate the behavior of low-spring-constant atomic force microscope (AFM) cantilevers used for imaging at high speed without active feedback, as in VideoAFM. The model is tested against experimental data collected at 20 frames/s and good agreement is found. The complex dynamics of the cantilever, consisting of traveling waves coming from the tip-sample interaction, reflecting off the cantilever-substrate junction, and interfering with new waves created at the tip, are revealed. The construction of the image from this resulting nonequilibrium cantilever deflection is also examined. Transient tip-sample forces are found to reach values up to 260 nN on a calibration grid sample, and the maximum forces do not always correspond to the position of the steepest features, as a result of energy stored in the cantilever.

  10. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the seventh quarter of the project the research team analyzed some of the acoustic velocity data and rock deformation data. The goal is to create a series of ''deformation-velocity maps'' that outline the types of rock deformation mechanisms occurring at high pressures and associate them with specific compressional or shear wave velocity signatures. During this quarter, we began to analyze both the acoustical and deformational properties of the various rock types. Some of the preliminary velocity data from the Danian chalk will be presented in this report. This rock type was selected for the initial efforts as it will be used in the tomographic imaging study outlined in Task 10. It is one of the more important rock types in the study, as the Danian chalk is thought to represent an excellent analog to the Ekofisk chalk that has caused so many problems in the North Sea. Some of the preliminary acoustic velocity data obtained during this phase of the project indicate that during pore collapse and compaction of this chalk, the acoustic velocities can change by as much as 200 m/s. Theoretically, this significant velocity change should be detectable in repeated successive 3-D seismic images. In addition, research continues with an analysis of the unconsolidated sand samples at high confining pressures obtained in Task 9. The analysis of the results indicates that sands with a 10% volume of fines can undergo liquefaction at lower stress conditions than sand samples without added fines. This liquefaction and/or sand flow is similar to the ''shallow water'' flows observed during drilling in the offshore Gulf of Mexico.

  11. Acoustic imaging of the Mediterranean water outflowing through the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Biescas Gorriz, Berta; Carniel, Sandro; Sallarès, Valentí; Rodriguez Ranero, Cesar

    2016-04-01

    Berta Biescas (1), Sandro Carniel (2), Valentí Sallarès (3) and Cesar R. Ranero (3): (1) Istituto di Scienze Marine, CNR, Bologna, Italy; (2) Istituto di Scienze Marine, CNR, Venice, Italy; (3) Institut de Ciències del Mar, CSIC, Barcelona, Spain. Acoustic reflectivity acquired with multichannel seismic reflection (MCS) systems allows the thermohaline structure of the ocean to be detected and explored with vertical and lateral resolutions on the order of 10 m, covering hundreds of kilometers in the lateral dimension and the full-depth water column. In this work we present a 2D MCS profile that crosses the Strait of Gibraltar, from the Alboran Sea to the inner Gulf of Cadiz (NE Atlantic Ocean). The MCS data were acquired during the Topomed-Gassis cruise (European Science Foundation TopoEurope), carried out on board the Spanish R/V Sarmiento de Gamboa in October 2011. The strong thermohaline contrast between the Mediterranean and Atlantic waters characterizes this area and allows us to visualize, with unprecedented resolution, the acoustic reflectivity associated with the dense Mediterranean water outflowing over the prominent slope of the Strait of Gibraltar. During the first kilometers, the dense flow descends attached to the continental slope until it reaches its buoyancy depth at 700 m. It then detaches from the sea floor and continues flowing towards the Atlantic Ocean, occupying the 700-1500 m depth layer and developing clear staircase layers. The reflectivity images display near-seabed reflections that could well correspond to turbidity layers. XBT data acquired coincident in time and space with the MCS data will help in the interpretation and analysis of the acoustic data.
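The reflectivity imaged by MCS systems arises from acoustic impedance contrasts between water masses. At normal incidence the reflection coefficient is R = (Z2 − Z1)/(Z2 + Z1) with impedance Z = ρc. A sketch with illustrative values (not the cruise data) shows why thermohaline reflections are weak compared with seabed reflections:

```python
def reflection_coefficient(rho1, c1, rho2, c2):
    """Normal-incidence acoustic reflection coefficient between two layers,
    R = (Z2 - Z1) / (Z2 + Z1), with acoustic impedance Z = rho * c."""
    z1, z2 = rho1 * c1, rho2 * c2
    return (z2 - z1) / (z2 + z1)

# Illustrative values only: Atlantic water over the warmer, saltier
# Mediterranean outflow (density in kg/m^3, sound speed in m/s).
R = reflection_coefficient(1027.0, 1500.0, 1029.0, 1512.0)
print(f"R ~ {R:.1e}")  # thermohaline contrasts give very weak reflections
```

Reflection coefficients of order 10⁻³ or smaller are typical of thermohaline finestructure, which is why long streamers and stacking are needed to image the water column at all.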

  12. Toward high-sensitivity and high-resolution submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Thorwirth, Günter; Anders, Solveig; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Meyer, Hans-Georg; Schubert, Marco; Starkloff, Michael

    2011-11-01

    Against a background of newly emerged security threats, the well-established idea of utilizing submillimeter-wave radiation for personal security screening applications has recently evolved into a promising technology. Possible application scenarios demand sensitive, fast, flexible and high-quality imaging techniques. At present, best results are obtained by passive imaging using cryogenic microbolometers as radiation detectors. Building upon the concept of a passive submillimeter-wave stand-off video camera introduced previously, we present the evolution of this concept into a practical, application-ready imaging device. This has been achieved using a variety of measures such as optimizing the detector parameters, improving the scanning mechanism, increasing the sampling speed, and enhancing the image generation software. The camera concept is based on Cassegrain-type mirror optics, an optomechanical scanner, an array of 20 superconducting transition-edge sensors operated at a temperature of 450 to 650 mK, and a closed-cycle cryogen-free cooling system. The main figures of merit of the system are: a frequency band of 350 ± 40 GHz, an object distance of 7 to 10 m, a circular field of view of 1.05 m diameter, a spatial resolution at the image center of 2 cm (at 8.5 m distance), a noise equivalent temperature difference of 0.1 to 0.4 K, and a maximum frame rate of 10 Hz.
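The quoted resolution and stand-off distance together imply the camera's effective aperture through the Rayleigh criterion, θ ≈ 1.22 λ/D. A back-of-envelope check (the aperture value is inferred here, not stated in the abstract):

```python
# Rayleigh criterion: resolvable spot dx ~ 1.22 * lam * L / D, so the
# effective aperture implied by the quoted figures is D ~ 1.22 * lam * L / dx.
c = 299_792_458.0          # m/s, speed of light
f = 350e9                  # Hz, band centre from the abstract
lam = c / f                # ~0.86 mm wavelength

L = 8.5                    # m, object distance quoted for the resolution figure
dx = 0.02                  # m, quoted spatial resolution at the image centre

D = 1.22 * lam * L / dx
print(f"wavelength ~ {lam * 1e3:.2f} mm, implied aperture D ~ {D:.2f} m")
```

An implied aperture of roughly half a metre is consistent with a Cassegrain mirror system of practical size, illustrating why submillimeter wavelengths, unlike optical ones, force this trade-off between aperture and resolution.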

  13. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    NASA Astrophysics Data System (ADS)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

    Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation, and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large-aperture optics, which increases the size and weight of such systems beyond what many applications can support. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.

  14. Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time

    SciTech Connect

    Jarvis, Lesley A.; Zhang, Rongxiao; Gladstone, David J.; Jiang, Shudong; Hitchcock, Whitney; Friedman, Oscar D.; Glaser, Adam K.; Jermyn, Michael; Pogue, Brian W.

    2014-07-01

    Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator, with the intensifier set to ×100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation, with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated that correlate with the superficial dose received by the patient and, potentially, the location of the resulting skin reactions. Major blood vessels are visible in the images, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose, enabling real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected, resulting in improved safety and quality of radiation therapy.

  15. Intracardiac Acoustic Radiation Force Impulse Imaging: A Novel Imaging Method for Intraprocedural Evaluation of Radiofrequency Ablation Lesions

    PubMed Central

    Eyerly, Stephanie A.; Bahnson, Tristram D.; Koontz, Jason I.; Bradway, David P.; Dumont, Douglas M.; Trahey, Gregg E.; Wolf, Patrick D.

    2012-01-01

    Background Arrhythmia recurrence after cardiac radiofrequency ablation (RFA) for atrial fibrillation (AF) has been linked to conduction through discontinuous lesion lines. Intraprocedural visualization and corrective ablation of lesion line discontinuities could decrease post-procedure AF recurrence. Intracardiac acoustic radiation force impulse (ARFI) imaging is a new imaging technique that visualizes RFA lesions by mapping the relative elasticity contrast between compliant, unablated myocardium and stiff, RFA-treated myocardium. Objective To determine whether intraprocedural ARFI images can identify RFA-treated myocardium in vivo. Methods In eight canines, an electroanatomical mapping (EAM)-guided intracardiac echo (ICE) catheter was used to acquire 2D ARFI images along right atrial ablation lines before and after RFA. ARFI images were acquired during diastole with the myocardium positioned at the ARFI focus (1.5 cm) and parallel to the ICE transducer for maximal and uniform energy delivery to the tissue. Three reviewers categorized each ARFI image as depicting no lesion, a non-contiguous lesion, or a contiguous lesion. For comparison, three separate reviewers confirmed RFA lesion presence and contiguity based on functional conduction block at the imaging plane location on EAM activation maps. Results Ten percent of ARFI images were discarded due to motion artifacts. Reviewers of the ARFI images detected RFA-treated sites with high sensitivity (95.7%) and specificity (91.5%). Reviewer identification of contiguous lesions had 75.3% specificity and 47.1% sensitivity. Conclusions Intracardiac ARFI imaging was successful in identifying endocardial RFA treatment when specific imaging conditions were maintained. Further advances in ARFI imaging technology would facilitate a wider range of imaging opportunities for clinical lesion evaluation. PMID:22772134

  16. Automatic detection of motion blur in intravital video microscopy image sequences via directional statistics of log-Gabor energy maps.

    PubMed

    Ferrari, Ricardo J; Pinto, Carlos H Villa; da Silva, Bruno C Gregório; Bernardes, Danielle; Carvalho-Tavares, Juliana

    2015-02-01

    Intravital microscopy is an important experimental tool for the study of cellular and molecular mechanisms of the leukocyte-endothelial interactions in the microcirculation of various tissues and in different inflammatory conditions of in vivo specimens. However, due to the limited control over the conditions of the image acquisition, motion blur and artifacts, resulting mainly from the heartbeat and respiratory movements of the in vivo specimen, will very often be present. This problem can significantly undermine the results of either visual or computerized analysis of the acquired video images. Since only a fraction of the total number of images is usually corrupted by severe motion blur, it is necessary to have a procedure to automatically identify such images in the video for either further restoration or removal. This paper proposes a new technique for the detection of motion blur in intravital video microscopy based on directional statistics of local energy maps computed using a bank of 2D log-Gabor filters. Quantitative assessment using both artificially corrupted images and real microscopy data was conducted to test the effectiveness of the proposed method. Results showed an area under the receiver operating characteristic curve (AUC) of 0.95 (95% CI 0.93-0.97) when tested on 329 video images visually ranked by four observers.
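The reported AUC is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen blurred frame receives a higher blur score than a randomly chosen sharp one. A small self-contained sketch with synthetic scores (not the paper's data):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Synthetic blur scores: blurred frames tend to score higher than sharp ones.
blurred = [0.9, 0.8, 0.75, 0.6]
sharp = [0.7, 0.4, 0.3, 0.2]
print(auc(blurred, sharp))  # → 0.9375
```

An AUC of 1.0 would mean the score perfectly separates blurred from sharp frames regardless of the threshold chosen; 0.5 would mean the score is uninformative.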

  17. Using numerical models and volume rendering to interpret acoustic imaging of hydrothermal flow

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Bennett, K.; Takle, J.; Rona, P. A.; Silver, D.

    2009-12-01

    Our acoustic imaging system will be installed onto the Neptune Canada observatory at the Main Endeavour Field, Juan de Fuca Ridge, which is a Ridge 2000 Integrated Study Site. Thereafter, 16-30 Gb of acoustic imaging data will be collected daily. We are developing a numerical model of merging plumes that will be used to guide expectations, and volume rendering software that transforms volumetric acoustic data into photo-like images. Hydrothermal flow is modeled as a combination of merged point sources which can be configured in any geometry. The model stipulates the dissipation or dilution of the flow and uses potential fields and complex analysis to combine the entrainment fields produced by each source. The strengths of this model are (a) the ability to handle a variety of scales, especially the small scale, as the potential fields can be specified with an effectively infinite boundary condition, (b) the ability to handle line, circle and areal source configurations, and (c) the ability to handle both high temperature focused flow and low temperature diffuse flow. This model predicts the vertical and horizontal velocities and the spatial distribution of effluent from combined sources of variable strength in a steady ambient velocity field. To verify the accuracy of the model’s results, we compare the model predictions of plume centerlines for the merging of two relatively strong point sources with the acoustic imaging data collected at Clam Acres, Southwest Vent Field, EPR 21°N in 1990. The two chimneys are 3.5 m apart and the plumes emanating from their tops merge approximately 18 mab. The model is able to predict the height of merging and the bending of the centerlines. Merging is implicitly observed at Grotto Vent, Main Endeavour Field, in our VIP 2000 data from July 2000: although there are at least 5 vigorous black smokers only a single plume is discernible in the acoustic imaging data. Furthermore, the observed Doppler velocity data increases with height

  18. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
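The "standard discrete cosine transform (DCT) coding approach" mentioned here transforms image blocks, keeps a small set of significant coefficients, and inverts. A minimal orthonormal 8×8 DCT-II sketch (illustrative; real coders quantize and entropy-code rather than hard-threshold):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis: C[k, i] = a_k * cos(pi*(2i+1)*k / (2n))."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)  # DC row scaling makes the basis orthonormal
    return C

C = dct_matrix()
rng = np.random.default_rng(1)
block = rng.normal(size=(8, 8))      # stand-in for one 8x8 image block

coeffs = C @ block @ C.T             # 2-D DCT of the block
# Crude "low rate" step: keep only the 16 largest-magnitude coefficients.
thresh = np.sort(np.abs(coeffs).ravel())[-16]
sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
recon = C.T @ sparse @ C             # inverse 2-D DCT

print(round(float(np.sum(sparse**2) / np.sum(coeffs**2)), 2))  # energy kept
```

Because the basis is orthonormal, the full transform is perfectly invertible; the rate reduction comes entirely from discarding (or coarsely quantizing) the small coefficients.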

  19. Ultrasound-Stimulated Acoustic Emission in Thermal Image-Guided HIFU Therapy: A Phantom Study

    SciTech Connect

    Jiang, C. P.; Lin, W. T.; Chen, W. S.

    2006-05-08

    Magnetic resonance imaging (MRI) is a promising monitoring tool for non-invasive, real-time thermal guidance of high intensity focused ultrasound (HIFU) during thermal ablation surgery. However, this approach has two main drawbacks: 1) the majority of components must be redesigned to be MR compatible in order to avoid affecting the MR images, and 2) the cost of operating MRI facilities is high. Alternatively, the ultrasound-stimulated acoustic emission (USAE) method has been applied to detecting thermal variations in tissues. An optically transparent phantom, made from polyacrylamide and containing a thermally sensitive indicator protein (bovine serum albumin), was prepared for observing the HIFU-induced denaturation. A thermocouple was set up to validate the temperature distribution. Experimental results show that the thermal image can be captured clearly under stationary conditions.

  20. Sensing the delivery and endocytosis of nanoparticles using magneto-photo-acoustic imaging

    PubMed Central

    Qu, M.; Mehrmohammadi, M.; Emelianov, S.Y.

    2015-01-01

    Many biomedical applications necessitate targeted intracellular delivery of nanomaterial to specific cells. Therefore, a non-invasive and reliable imaging tool is required to detect both the delivery and the cellular endocytosis of the nanoparticles. Herein, we demonstrate that magneto-photo-acoustic (MPA) imaging can be used to monitor the delivery and to identify the endocytosis of magnetic and optically absorbing nanoparticles. The relationship between the photoacoustic (PA) and magneto-motive ultrasound (MMUS) signals from the in vitro samples was analyzed to identify the delivery and endocytosis of nanoparticles. The results indicated that during the delivery of nanoparticles to the vicinity of the cells, the PA and MMUS signals are almost linearly proportional. However, accumulation of nanoparticles within the cells leads to a nonlinear MMUS-PA relationship, due to non-linear MMUS signal amplification. Therefore, through longitudinal MPA imaging, it is possible to monitor the delivery of nanoparticles and identify the endocytosis of the nanoparticles by living cells. PMID:26640773

  1. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model, resulting in improved image decompression. These studies are summarized, and the technical papers are included in the appendices.

  2. High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques

    PubMed Central

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras(1). The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in(1-5)). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame(6,7). Binary defocusing DFP methods can achieve even greater speeds(8). Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis(9), facial animation(10), cardiac mechanics studies(11), and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674

  3. High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques.

    PubMed

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras(1). The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in(1-5)). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame(6,7). Binary defocusing DFP methods can achieve even greater speeds(8). Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis(9), facial animation(10), cardiac mechanics studies(11), and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674
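The depth computation these abstracts describe starts from the wrapped phase of the three distorted fringe images. With phase shifts of −120°, 0° and +120°, the standard three-step phase-shifting formula recovers it; a minimal sketch:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with -120/0/+120 degree shifts:
       I_k = I' + I'' * cos(phi + delta_k)
    reduces to phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthesize three fringe images for a known phase map (here a 1-D ramp
# kept inside (-pi, pi) so no unwrapping step is needed).
phi_true = np.linspace(-3.0, 3.0, 7)
ip, ipp = 0.5, 0.4                        # background and modulation amplitude
i1 = ip + ipp * np.cos(phi_true - 2 * np.pi / 3)
i2 = ip + ipp * np.cos(phi_true)
i3 = ip + ipp * np.cos(phi_true + 2 * np.pi / 3)

print(bool(np.allclose(three_step_phase(i1, i2, i3), phi_true)))  # → True
```

In a full DFP pipeline this wrapped phase is then unwrapped and converted to depth through the calibrated camera-projector triangulation geometry, which is the part the cited system implements at 30 Hz.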

  4. Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging.

    PubMed

    Newman, Justin A; Zhang, Shijie; Sullivan, Shane Z; Dow, Ximeng Y; Becker, Michael; Sheedlo, Michael J; Stepanov, Sergey; Carlsen, Mark S; Everly, R Michael; Das, Chittaranjan; Fischetti, Robert F; Simpson, Garth J

    2016-07-01

    Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence and bright field by laser transmittance were all acquired simultaneously with perfect image registry at up to video-rate (15 frames s(-1)). A simple change in the incident wavelength enabled simultaneous imaging by two-photon-excited ultraviolet fluorescence, one-photon-excited visible fluorescence and laser transmittance. Development of an analytical model for the signal-to-noise enhancement afforded by synchronous digitization suggests a 15.6-fold improvement over previous photon-counting techniques. This improvement in turn allowed acquisition on nearly an order of magnitude more pixels than the preceding generation of instrumentation and reductions of well over an order of magnitude in image acquisition times. These improvements have allowed detection of protein crystals on the order of 1 µm in thickness under cryogenic conditions in the beamline. These capabilities are well suited to support serial crystallography of crystals approaching 1 µm or less in dimension. PMID:27359145

  5. The concept of cyclic sound intensity and its application to acoustical imaging

    NASA Astrophysics Data System (ADS)

    Lafon, B.; Antoni, J.; Sidahmed, M.; Polac, L.

    2011-04-01

    This paper demonstrates how to take advantage of the cyclostationarity property of engine signals to define a new acoustical quantity, the cyclic sound intensity, which displays the instantaneous flux of acoustical energy in the angle-frequency domain during an average engine cycle. This quantity is attractive in that it possesses the ability of being instantaneous and averaged at the same time, thus reconciling two conflicting properties into a rigorous and unambiguous framework. Cyclic sound intensity is a rich concept with several original ramifications. Among other things, it returns a unique decomposition into instantaneous active and reactive parts. Associated with acoustical imaging techniques, it allows the construction of sound radiation movies that evolve within the engine cycle, in which each frame is a sound intensity map calculated at a specific time - or crankshaft angle - in the engine cycle. This enables the accurate localisation of sources in space, in frequency and in time (crankshaft angle). Furthermore, associated with cyclic Wiener filtering, this methodology makes it possible to decompose the overall radiated sound into several noise source contributions whose cyclic sound intensities can then be analysed independently.
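The "instantaneous yet averaged" character of cyclic quantities rests on cycle-synchronous averaging: folding the record into engine cycles indexed by crank angle and averaging across cycles, which keeps the angle-dependent (cyclostationary) part while attenuating stationary noise by roughly 1/√N. A minimal sketch with a synthetic signal (not engine data, and much simpler than the paper's intensity estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

n_per_cycle, n_cycles = 360, 200          # one sample per crank-angle degree
angle = np.arange(n_per_cycle) * np.pi / 180.0
cycle = np.sin(2 * angle) + 0.3 * np.sin(5 * angle)  # deterministic cycle part

# Long record: the same cycle repeated, buried in stationary noise.
x = np.tile(cycle, n_cycles) + rng.normal(0.0, 1.0, n_per_cycle * n_cycles)

# Cycle-synchronous average: fold into cycles and average across them,
# attenuating the noise by ~1/sqrt(n_cycles) ~ 0.07 here.
avg = x.reshape(n_cycles, n_per_cycle).mean(axis=0)

print(float(np.abs(avg - cycle).std()))   # residual noise level, ~0.07
```

The cyclic sound intensity applies the same cycle-folding idea to the instantaneous intensity, which is why each crank-angle frame of the resulting movie is simultaneously "instantaneous" within the cycle and averaged over many cycles.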

  6. Acoustic Property Reconstruction of a Neonate Yangtze Finless Porpoise's (Neophocaena asiaeorientalis) Head Based on CT Imaging

    PubMed Central

    Wei, Chong; Wang, Zhitao; Song, Zhongchang; Wang, Kexiong; Wang, Ding; Au, Whitlow W. L.; Zhang, Yu

    2015-01-01

    The reconstruction of the acoustic properties of a neonate finless porpoise’s head was performed using X-ray computed tomography (CT). The head of the deceased neonate porpoise was also segmented across the body axis and cut into slices. The averaged sound velocity and density were measured, and the Hounsfield units (HU) of the corresponding slices were obtained from computed tomography scanning. A regression analysis was employed to show the linear relationships between the Hounsfield unit and both sound velocity and density of samples. Furthermore, the CT imaging data were used to compare the HU value, sound velocity, density and acoustic characteristic impedance of the main tissues in the porpoise’s head. The results showed that the linear relationships between HU and both sound velocity and density were qualitatively consistent with previous studies on Indo-Pacific humpback dolphins and Cuvier’s beaked whales. However, there was no significant increase of the sound velocity and acoustic impedance from the inner core to the outer layer in this neonate finless porpoise’s melon. PMID:25856588
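The regression step described above amounts to an ordinary least-squares line relating HU to the measured acoustic property. A minimal sketch with made-up calibration numbers (illustrative only, not the paper's data):

```python
import numpy as np

# Hypothetical calibration pairs (NOT the paper's measurements): mean
# Hounsfield unit of a tissue slice vs. its measured sound velocity (m/s).
hu = np.array([-60.0, -20.0, 10.0, 40.0, 80.0])
velocity = np.array([1420.0, 1460.0, 1490.0, 1520.0, 1560.0])

# Least-squares fit of the linear relation velocity = a * HU + b, the kind
# of regression the abstract describes for both velocity and density.
a, b = np.polyfit(hu, velocity, 1)

def velocity_from_hu(h):
    """Predict sound velocity (m/s) from a CT Hounsfield unit."""
    return a * h + b
```

The same fit, applied voxel by voxel, is what turns a CT volume into a sound-velocity (or density) map of the head.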

  8. Contribution of the supraglottic larynx to the vocal product: imaging and acoustic analysis

    NASA Astrophysics Data System (ADS)

    Gracco, L. Carol

    1996-04-01

    Horizontal supraglottic laryngectomy is a surgical procedure to remove a mass lesion located in the region of the pharynx superior to the true vocal folds. In contrast to full or partial laryngectomy, patients who undergo horizontal supraglottic laryngectomy often present with little or no involvement of the true vocal folds. This population provides an opportunity to examine the acoustic consequences of altering the pharynx while sparing the laryngeal sound source. Acoustic and magnetic resonance imaging (MRI) data were acquired in a group of four patients before and after supraglottic laryngectomy. Acoustic measures included the identification of vocal tract resonances and the fundamental frequency of vocal fold vibration. 3D reconstructions of the pharyngeal portion of each subject's vocal tract were made from MRIs taken during phonation, and volume measures were obtained. These measures reveal a variable, but often dramatic, difference in the surgically altered area of the pharynx and changes in the formant frequencies of the vowel /i/ post-surgically. In some cases the presence of the tumor created a deviation from the expected formant values pre-operatively, with post-operative values approaching normal. Patients who also underwent radiation treatment post-surgically tended to have greater constriction in the pharyngeal area of the vocal tract.
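Why altering pharyngeal geometry shifts formants can be seen from the elementary uniform-tube model of the vocal tract (a textbook illustration, not the paper's analysis): the resonances depend directly on tract dimensions.

```python
# Quarter-wavelength resonances of a uniform-tube vocal-tract model: an
# elementary illustration (not the paper's analysis) of why a change in
# vocal-tract geometry shifts formant frequencies.
C = 350.0  # approximate speed of sound in warm, humid air (m/s)

def tube_formants(length_m, n=3):
    """First n resonances of a uniform tube closed at the glottis and
    open at the lips: f_k = (2k - 1) * C / (4 * L)."""
    return [(2 * k - 1) * C / (4.0 * length_m) for k in range(1, n + 1)]

formants = tube_formants(0.175)  # a nominal 17.5 cm vocal tract
```

Shortening or constricting part of the tract perturbs these values, which is why the post-surgical MRI volumes and the measured formant shifts can be compared directly.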

  9. [Quantification and improvement of speech transmission performance using headphones in acoustic stimulated functional magnetic resonance imaging].

    PubMed

    Yamamura, Ken ichiro; Takatsu, Yasuo; Miyati, Tosiaki; Kimura, Tetsuya

    2014-10-01

    Functional magnetic resonance imaging (fMRI) has made a major contribution to the understanding of higher brain function, but fMRI with auditory stimulation, used in the planning of brain tumor surgery, is often inaccurate because acoustic scanner noise may prevent the trial sounds from being correctly transmitted to the subjects. This prompted us to devise a method of quantifying sound transmission ability from the accuracy rate of 67 syllables, classified into three types. We evaluated this with and without acoustic noise during imaging. We also improved the structure of the headphones and compared their sound transmission ability with that of the conventional headphones attached to an MRI device (a GE Signa HDxt 3.0 T). The 95 percent upper confidence limit (UCL) was used as the threshold for the accuracy rate of hearing for both headphone models. There was a statistically significant difference between the conventional model and the improved model during imaging (p < 0.01), with the accuracy rate of the improved model 16 percent higher. Twenty-nine and 22 syllables were accurate at the 95% UCL in the improved model and the conventional model, respectively. This study showed the evaluation system to be useful for correctly identifying syllables during fMRI.
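The accuracy-rate comparison can be sketched as a binomial proportion with a one-sided upper confidence limit. The abstract does not give the exact UCL formula used, so the normal approximation below is an illustrative choice; the 29/67 and 22/67 counts are taken from the abstract.

```python
import math

def accuracy_with_ucl(correct, total, z=1.645):
    """Accuracy rate and a one-sided 95% upper confidence limit using the
    normal approximation to the binomial. The paper's exact UCL method is
    not specified in the abstract, so this formula is illustrative."""
    p = correct / total
    se = math.sqrt(p * (1.0 - p) / total)
    return p, min(1.0, p + z * se)

# Counts from the abstract: 29 of 67 syllables correct with the improved
# headphones, 22 of 67 with the conventional model.
p_improved, ucl_improved = accuracy_with_ucl(29, 67)
p_conv, ucl_conv = accuracy_with_ucl(22, 67)
```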

  10. Imaging of transient surface acoustic waves by full-field photorefractive interferometry.

    PubMed

    Xiong, Jichuan; Xu, Xiaodong; Glorieux, Christ; Matsuda, Osamu; Cheng, Liping

    2015-05-01

    A stroboscopic full-field imaging technique based on photorefractive interferometry for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera is presented. The photorefractive buildup of the space charge field during and after probe laser pulses is simulated numerically. The resulting anisotropic diffraction from the refractive index grating, and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam, are modeled theoretically. The method is demonstrated experimentally by full-field imaging of the propagation of photoacoustically generated surface acoustic waves (SAWs) with a temporal resolution of nanoseconds. The surface acoustic wave propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser yields the capability of imaging SAWs with frequencies up to 60 MHz. PMID:26026514
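The quoted resolution figures follow from simple arithmetic; the bandwidth estimate below assumes the usable frequency limit scales roughly as 1/(2 × pulse duration), which is our assumption rather than the paper's stated formula:

```python
# Back-of-envelope check of the figures quoted in the abstract. The
# 1 / (2 * pulse duration) bandwidth scaling is an assumption made here
# for illustration, not the paper's stated formula.
width_m = 23e-3            # imaged width on the aluminum plate
pixels = 696               # CCD pixels spanning that width
pixel_res_m = width_m / pixels        # about 33 micrometres per pixel

pulse_s = 8e-9             # probe laser pulse duration
f_max_hz = 1.0 / (2.0 * pulse_s)      # 62.5 MHz, the order of the quoted 60 MHz
```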

  11. Imaging of Acoustically Coupled Oscillations Due to Flow Past a Shallow Cavity: Effect of Cavity Length Scale

    SciTech Connect

    P. Oshkai; M. Geveci; D. Rockwell; M. Pollack

    2002-12-12

    Flow-acoustic interactions due to fully turbulent inflow past a shallow axisymmetric cavity mounted in a pipe are investigated using a technique of high-image-density particle image velocimetry in conjunction with unsteady pressure measurements. This imaging leads to patterns of velocity, vorticity, streamline topology, and hydrodynamic contributions to the acoustic power integral. Global instantaneous images, as well as time-averaged images, are evaluated to provide insight into the flow physics during tone generation. Emphasis is on the manner in which the streamwise length scale of the cavity alters the major features of the flow structure. These image-based approaches allow identification of regions of the unsteady shear layer that contribute to the instantaneous hydrodynamic component of the acoustic power, which is necessary to maintain a flow tone. In addition, combined image analysis and pressure measurements allow categorization of the instantaneous flow patterns that are associated with types of time traces and spectra of the fluctuating pressure. In contrast to consideration based solely on pressure spectra, it is demonstrated that locked-on tones may actually exhibit intermittent, non-phase-locked images, apparently due to low damping of the acoustic resonator. Locked-on flow tones (without modulation or intermittency), locked-on flow tones with modulation, and non-locked-on oscillations with short-term, highly coherent fluctuations are defined and represented by selected cases. Depending on which of these regimes occurs, the time-averaged Q (quality)-factor and the dimensionless peak pressure are substantially altered.

  12. Investigating the emotional response to room acoustics: A functional magnetic resonance imaging study.

    PubMed

    Lawless, M S; Vigeant, M C

    2015-10-01

    While previous research has demonstrated the powerful influence of pleasant and unpleasant music on emotions, the present study utilizes functional magnetic resonance imaging (fMRI) to assess the positive and negative emotional responses as demonstrated in the brain when listening to music convolved with varying room acoustic conditions. During fMRI scans, subjects rated auralizations created in a simulated concert hall with varying reverberation times. The analysis detected activations in the dorsal striatum, a region associated with anticipation of reward, for two individuals for the highest rated stimulus, though no activations were found for regions associated with negative emotions in any subject. PMID:26520354

  14. Quantitative Analysis Of Sperm Motion Kinematics From Real-Time Video-Edge Images

    NASA Astrophysics Data System (ADS)

    Davis, Russell O.; Katz, David F.

    1988-02-01

    A new model of sperm swimming kinematics, which uses signal processing methods and multivariate statistical techniques to identify individual cell-motion parameters and unique cell populations, is presented. Swimming paths of individual cells are obtained using real-time, video-edge digitization. Raw paths are adaptively filtered to identify average paths, and measurements of space-time oscillations about average paths are made. Time-dependent frequency information is extracted from spatial variations about average paths using harmonic analysis. Raw-path and average-path measures such as curvature, curve length, and straight-line length, and measures of oscillations about average paths such as time-dependent amplitude and frequency variations, are used in a multivariate, cluster analysis to identify unique cell populations. The entire process, including digitization of sperm video images, is computer-automated. Preliminary results indicate that this method of tracking, digitization, and kinematic analysis accurately identifies unique cell subpopulations, including: the relative numbers of cells in each subpopulation, how subpopulations differ, and the extent and significance of such differences. With appropriate work, this approach may be useful for clinical discrimination between normal and abnormal semen specimens.
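The feature-extraction stage can be sketched as follows. The moving-average smoother and this particular feature set are illustrative simplifications of the adaptive filtering and harmonic analysis the authors describe:

```python
import numpy as np

def path_features(x, y, dt, window=15):
    """Kinematic features for one digitized sperm track (illustrative
    simplification of the pipeline described above).
    x, y : raw path coordinates; dt : frame interval in seconds."""
    kernel = np.ones(window) / window
    ax = np.convolve(x, kernel, mode="valid")      # average path, x
    ay = np.convolve(y, kernel, mode="valid")      # average path, y
    off = (window - 1) // 2                        # align raw to smoothed
    rx = x[off:off + len(ax)] - ax
    ry = y[off:off + len(ay)] - ay
    # Curvilinear vs. straight-line path lengths.
    curve_len = np.hypot(np.diff(x), np.diff(y)).sum()
    straight_len = np.hypot(x[-1] - x[0], y[-1] - y[0])
    # Dominant beat frequency from the lateral (y) oscillation about the
    # average path; assumes the track travels mostly along x.
    spec = np.abs(np.fft.rfft(ry - ry.mean()))
    freqs = np.fft.rfftfreq(len(ry), dt)
    return {
        "amp": np.hypot(rx, ry).mean(),            # mean excursion
        "vcl": curve_len,
        "vsl": straight_len,
        "linearity": straight_len / curve_len,
        "beat_hz": freqs[spec.argmax()],
    }

# Synthetic track: steady drift along x with a 5 Hz beat-like wiggle.
t = np.arange(0.0, 2.0, 0.01)                      # 100 frames/s for 2 s
feats = path_features(20.0 * t, np.sin(2.0 * np.pi * 5.0 * t), dt=0.01)
```

Feature vectors like this one, computed per track, are what a cluster analysis (e.g. k-means) would group into subpopulations.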

  15. Temperature-dependent differences in the nonlinear acoustic behavior of ultrasound contrast agents revealed by high-speed imaging and bulk acoustics.

    PubMed

    Mulvana, Helen; Stride, Eleanor; Tang, Mengxing; Hajnal, Jo V; Eckersley, Robert

    2011-09-01

    Previous work by the authors has established that increasing the temperature of the suspending liquid from 20°C to body temperature has a significant impact on the bulk acoustic properties and stability of an ultrasound contrast agent suspension (SonoVue, Bracco Suisse SA, Manno, Lugano, Switzerland). In this paper the influence of temperature on the nonlinear behavior of microbubbles is investigated, because this is one of the most important parameters in the context of diagnostic imaging. High-speed imaging showed that raising the temperature significantly influences the dynamic behavior of individual microbubbles. At body temperature, microbubbles exhibit greater radial excursion and oscillate less spherically, with a greater incidence of jetting and gas expulsion, and therefore collapse, than they do at room temperature. Bulk acoustics revealed an associated increase in the harmonic content of the scattered signals. These findings emphasize the importance of conducting laboratory studies at body temperature if the results are to be interpreted for in vivo applications.

  16. Three-dimensional imaging applications in Earth Sciences using video data acquired from an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    McLeod, Tara

    For three dimensional (3D) aerial images, unmanned aerial vehicles (UAVs) are cheaper to operate and easier to fly than the typical manned craft mounted with a laser scanner. This project explores the feasibility of using 2D video images acquired with a UAV and transforming them into 3D point clouds. The Aeryon Scout -- a quad-copter micro UAV -- flew two missions: the first at York University Keele campus and the second at the Canadian Wollastonite Mine Property. Neptec's ViDAR software was used to extract 3D information from the 2D video using structure from motion. The resulting point clouds were sparsely populated, yet captured vegetation well. They were used successfully to measure fracture orientation in rock walls. Any improvement in the video resolution would cascade through the processing and improve the overall results.
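At the heart of turning 2D video frames into a 3D point cloud is triangulation of matched image points from two camera poses. The sketch below shows only that step, with synthetic data and known poses; a real structure-from-motion pipeline (as in the ViDAR processing) estimates the poses from the video itself.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point seen in two video
    frames with known 3x4 projection matrices P1, P2 and measured pixel
    coordinates x1, x2. Only the triangulation step of a structure-from-
    motion pipeline; poses are normally estimated from the video."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                # inhomogeneous 3D point

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Synthetic check: two camera stations one metre apart re-observe a point.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```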

  17. Advances in EEG: home video telemetry, high frequency oscillations and electrical source imaging.

    PubMed

    Patel, Anjla C; Thornton, Rachel C; Mitchell, Tejal N; Michell, Andrew W

    2016-10-01

    Over the last two decades, technological advances in electroencephalography (EEG) have allowed us to extend its clinical utility for the evaluation of patients with epilepsy. This article reviews three main areas in which substantial advances have been made in the diagnosis and pre-surgical planning of patients with epilepsy. Firstly, the development of small portable video-EEG systems have allowed some patients to record their attacks at home, thereby improving diagnosis, with consequent substantial healthcare and economic implications. Secondly, in specialist centres carrying out epilepsy surgery, there has been considerable interest in whether bursts of very high frequency EEG activity can help to determine the regions of the brain likely to be generating the seizures. Identification of these discharges, initially only recorded from intracranial electrodes, may thus allow better surgical planning and improve surgical outcomes. Finally we discuss the contribution of electrical source imaging in the pre-surgical evaluation of patients with focal epilepsy, and its prospects for the future.

  18. Assembly of a Multi-channel Video System to Simultaneously Record Cerebral Emboli with Cerebral Imaging

    PubMed Central

    Stoner-Duncan, Benjamin; Kim, Sae Jin; Mergeche, Joanna L.; Anastasian, Zirka H.; Heyer, Eric J.

    2011-01-01

    Stroke remains a significant risk of carotid revascularization for atherosclerotic disease. Emboli generated at the time of treatment either using endarterectomy or stent-angioplasty may progress with blood flow and lodge in brain arteries. Recently, the use of protection devices to trap emboli created at the time of revascularization has helped to establish a role for stent-supported angioplasty compared with endarterectomy. Several devices have been developed to reduce or detect emboli that may be dislodged during carotid artery stenting (CAS) to treat carotid artery stenosis. A significant challenge in assessing the efficacy of these devices is precisely determining when emboli are dislodged in real-time. To address this challenge, we devised a method of simultaneously recording fluoroscopic images, transcranial Doppler (TCD) data, vital signs, and digital video of the patient/physician. This method permits accurate causative analysis and allows procedural events to be precisely correlated to embolic events in real-time. PMID:21441834

  19. [Sexuality and the human body: the subject's view through video images].

    PubMed

    Vargas, E; Siqueira, V H

    1999-11-01

    This study analyzes images of the body linked to sexual and reproductive behavior found in the communication processes mediated by so-called educational videos. In the relationship between subject and technology, the paper is intended to characterize the discourses and the view or perspective currently shaping health education practices. Focusing on the potential in the relationship between the enunciator and subjects represented in the text and the interaction between health professionals and messages, the study attempts to characterize the discourses and questions providing the basis for a given view of the body and sexuality. The study was conducted in the years 1996-1997 and focused on health professionals from the public health system. The results show a concept of sexuality that tends to generalize the meaning ascribed to sexual experience, ignoring the various ways by which different culturally defined groups attribute meaning to the body.

  20. Acoustic structure quantification by using ultrasound Nakagami imaging for assessing liver fibrosis.

    PubMed

    Tsui, Po-Hsiang; Ho, Ming-Chih; Tai, Dar-In; Lin, Ying-Hsiu; Wang, Chiao-Yin; Ma, Hsiang-Yang

    2016-01-01

    Acoustic structure quantification (ASQ) is a recently developed technique widely used for detecting liver fibrosis. Ultrasound Nakagami parametric imaging based on the Nakagami distribution has been widely used to model echo amplitude distribution for tissue characterization. We explored the feasibility of using ultrasound Nakagami imaging as a model-based ASQ technique for assessing liver fibrosis. Standard ultrasound examinations were performed on 19 healthy volunteers and 91 patients with chronic hepatitis B and C (n = 110). Liver biopsy and ultrasound Nakagami imaging analysis were conducted to compare the METAVIR score and Nakagami parameter. The diagnostic value of ultrasound Nakagami imaging was evaluated using receiver operating characteristic (ROC) curves. The Nakagami parameter obtained through ultrasound Nakagami imaging decreased with an increase in the METAVIR score (p < 0.0001), representing an increase in the extent of pre-Rayleigh statistics for echo amplitude distribution. The area under the ROC curve (AUROC) was 0.88 for the diagnosis of any degree of fibrosis (≥F1), whereas it was 0.84, 0.69, and 0.67 for ≥F2, ≥F3, and ≥F4, respectively. Ultrasound Nakagami imaging is a model-based ASQ technique that can be beneficial for the clinical diagnosis of early liver fibrosis. PMID:27605260
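The Nakagami m parameter the study relies on can be estimated from the echo envelope by the standard inverse-normalized-variance (moment) method; a minimal sketch:

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based estimate of the Nakagami m parameter from an
    echo-envelope sample: m = E[R^2]^2 / Var(R^2). Values near 1
    indicate Rayleigh (fully developed) speckle; m < 1 indicates the
    pre-Rayleigh statistics the abstract associates with fibrosis."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()

# Sanity check: a Rayleigh-distributed envelope should give m close to 1.
rng = np.random.default_rng(1)
rayleigh = np.hypot(rng.normal(size=200_000), rng.normal(size=200_000))
m_est = nakagami_m(rayleigh)
```

Applied in a sliding window over the envelope image, this estimator yields the Nakagami parametric maps described above.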

  2. Acoustic quasi-holographic images of scattering by vertical cylinders from one-dimensional bistatic scans.

    PubMed

    Baik, Kyungmin; Dudley, Christopher; Marston, Philip L

    2011-12-01

    When synthetic aperture sonar (SAS) is used to image elastic targets in water, subtle features can be present in the images associated with the dynamical response of the target being viewed. In an effort to improve the understanding of such responses, as well as to explore alternative image processing methods, a laboratory-based system was developed in which targets were illuminated by a transient acoustic source, and bistatic responses were recorded by scanning a hydrophone along a rail system. Images were constructed using a relatively conventional bistatic SAS algorithm and were compared with images based on supersonic holography. The holographic method is a simplification of one previously used to view the time evolution of a target's response [Hefner and Marston, ARLO 2, 55-60 (2001)]. In the holographic method, the space-time evolution of the scattering was used to construct a two-dimensional image with cross range and time as coordinates. Various features for vertically hung cylindrical targets were interpreted using high frequency ray theory. This includes contributions from guided surface elastic waves, as well as transmitted-wave features and specular reflection.
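The image-formation step can be illustrated with delay-and-sum backprojection of the bistatic records from the hydrophone rail onto an image grid. This is a minimal stand-in for the "relatively conventional bistatic SAS algorithm" the abstract mentions (no windowing, no interpolation, an idealized impulse scatterer); the geometry and sample rate below are invented for the example.

```python
import numpy as np

c = 1500.0                         # sound speed in water, m/s
fs = 100_000.0                     # record sample rate, Hz
src = np.array([0.0, -1.0])        # fixed transient source
rail_x = np.linspace(-0.5, 0.5, 41)          # hydrophone scan positions
target = np.array([0.1, 1.0])                # point scatterer

# Synthetic bistatic records: one impulse per hydrophone at the
# source -> target -> hydrophone travel time (idealized scattering).
n = 512
records = np.zeros((rail_x.size, n))
for i, hx in enumerate(rail_x):
    d = np.hypot(*(target - src)) + np.hypot(target[0] - hx, target[1] + 1.0)
    records[i, int(round(d / c * fs))] = 1.0

# Delay-and-sum backprojection over an image grid.
xs = np.linspace(-0.3, 0.3, 61)
ys = np.linspace(0.7, 1.3, 61)
gx, gy = np.meshgrid(xs, ys)
d_src = np.hypot(gx - src[0], gy - src[1])
image = np.zeros_like(gx)
for i, hx in enumerate(rail_x):
    d_h = np.hypot(gx - hx, gy + 1.0)
    idx = np.rint((d_src + d_h) / c * fs).astype(int)
    image += records[i, idx]            # sum energy along iso-delay ellipses

iy, ix = np.unravel_index(image.argmax(), image.shape)
peak = (xs[ix], ys[iy])
```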

  3. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954
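Determining X, Y, Z from a pair of oriented images amounts to intersecting the two image rays. The sketch below uses a generic least-squares ray intersection for illustration; the paper's contribution is a simplified closed form for the 90° tilt case, which this generic version does not reproduce.

```python
import numpy as np

def intersect_rays(centers, directions):
    """Least-squares intersection of image rays: the X, Y, Z point that a
    pair of collinearity equations jointly determine. Each ray is a
    camera projection centre plus a direction through the measured image
    point. Generic formulation for illustration only."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# One near-vertical ray (aerial-style image) and one horizontal ray
# (terrestrial-style image) fixing a facade point, as in the mixed
# orientation the abstract describes. Coordinates are invented.
point = np.array([10.0, 5.0, 20.0])
c_air = np.array([10.0, 5.0, 120.0])     # UAV camera above the point
c_ground = np.array([60.0, 5.0, 20.0])   # camera facing the facade
x_hat = intersect_rays([c_air, c_ground],
                       [point - c_air, point - c_ground])
```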

  5. Content-based storage and retrieval scheme for image and video databases

    NASA Astrophysics Data System (ADS)

    Herodotou, Nicos; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    1998-01-01

    In this paper, a technique is presented to locate and track the facial areas in image and video databases. The extracted facial regions are used to obtain a number of features that are suitable for content-based storage and retrieval. The proposed face localization method consists of essentially two components: i) a color processing unit, and ii) a shape and color analysis module. The color processing component utilizes the distribution of skin tones in the HSV color space to obtain an initial set of candidate regions or objects. The shape and color analysis module is then used to correctly identify the facial regions when falsely detected objects are present. A number of features such as hair color, skin tone, and face location and size are subsequently determined from the extracted facial areas. The hair and skin colors provide useful descriptions related to the human characteristics, while the face location and size can reveal information about the activity within the scene and the type of image. These features can be effectively combined with others and employed in user queries to retrieve particular facial images.
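The color processing stage reduces to thresholding in HSV space; a minimal sketch, where the hue, saturation and value bounds are illustrative guesses rather than the thresholds used in the paper:

```python
import colorsys
import numpy as np

def skin_mask(rgb):
    """Boolean mask of candidate skin-tone pixels from HSV thresholds.
    rgb : H x W x 3 float array with values in [0, 1]. The bounds below
    are illustrative, not the paper's thresholds."""
    hsv = np.empty_like(rgb)
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Skin tones cluster at low (reddish) hues with moderate saturation.
    return ((h < 0.14) | (h > 0.95)) & (s > 0.15) & (s < 0.8) & (v > 0.2)

# Tiny synthetic image: one skin-like pixel, one blue pixel, two black.
img = np.zeros((2, 2, 3))
img[0, 0] = [0.9, 0.6, 0.5]   # skin-like
img[0, 1] = [0.2, 0.3, 0.8]   # blue
mask = skin_mask(img)
```

Connected regions of the mask would then be passed to the shape analysis module to reject false candidates.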

  7. Multiwavelength Fluorescence Otoscope for Video-Rate Chemical Imaging of Middle Ear Pathology

    PubMed Central

    2015-01-01

    A common motif in otolaryngology is the lack of certainty regarding diagnosis for middle ear conditions, resulting in many patients being overtreated under the worst-case assumption. Although pneumatic otoscopy and adjunctive tests offer additional information, white light otoscopy has been the main tool for diagnosis of external auditory canal and middle ear pathologies for over a century. In middle ear pathologies, the lack of high-resolution structural and/or molecular imaging is particularly glaring, leading to a complicated and erratic decision analysis. Here, we propose a novel multiwavelength fluorescence-based video-rate imaging strategy that combines readily available optical elements and software components to create a novel otoscopic device. This modified otoscope enables low-cost, detailed and objective diagnosis of common middle ear pathological conditions. Using the detection of congenital cholesteatoma as a specific example, we demonstrate the feasibility of fluorescence imaging to differentiate this proliferative lesion from uninvolved middle ear tissue based on the characteristic autofluorescence signals. Availability of real-time, wide-field chemical information should enable more complete removal of cholesteatoma, allowing for better hearing preservation and substantially reducing the well-documented risks, costs and psychological effects of repeated surgical procedures. PMID:25226556

  8. Video Image Analysis of Turbulent Buoyant Jets Using a Novel Laboratory Apparatus

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Colgan, R. E.; Ferencevych, P. G.

    2012-12-01

    Turbulent buoyant jets play an important role in the transport of heat and mass in a variety of environmental settings on Earth. Naturally occurring examples include the discharges from high-temperature seafloor hydrothermal vents and from some types of subaerial volcanic eruptions. Anthropogenic examples include flows from industrial smokestacks and the flow from the damaged well after the Deepwater Horizon oil leak of 2010. Motivated by a desire to find non-invasive methods for measuring the volumetric flow rates of turbulent buoyant jets, we have constructed a laboratory apparatus that can generate these types of flows with easily adjustable nozzle velocities and fluid densities. The jet fluid comprises a variable mixture of nitrogen and carbon dioxide gas, which can be injected at any angle with respect to the vertical into the quiescent surrounding air. To make the flow visible we seed the jet fluid with a water fog generated by an array of piezoelectric diaphragms oscillating at ultrasonic frequencies. The system can generate jets that have initial densities ranging from approximately 2-48% greater than the ambient air. We obtain independent estimates of the volumetric flow rates using well-calibrated rotameters, and collect video image sequences for analysis at frame rates up to 120 frames per second using a machine vision camera. We are using this apparatus to investigate several outstanding problems related to the physics of these flows and their analysis using video imagery. First, we are working to better constrain several theoretical parameters that describe the trajectory of these flows when their initial velocities are not parallel to the buoyancy force. The ultimate goal of this effort is to develop well-calibrated methods for establishing volumetric flow rates using trajectory analysis. Second, we are working to refine optical plume velocimetry (OPV), a non-invasive technique for estimating flow rates using temporal cross-correlation of image
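The cross-correlation idea behind optical plume velocimetry can be sketched with two brightness time series taken from image rows a known height apart. This is an illustrative reduction, not the published OPV algorithm; the height separation and frame rate below are invented (the 120 frames/s matches the camera mentioned above).

```python
import numpy as np

def rise_velocity(lower, upper, dz, dt):
    """Estimate plume rise speed from brightness time series recorded at
    two image rows a height dz apart, via the lag that maximizes their
    cross-correlation (illustrative reduction of the OPV idea)."""
    a = lower - lower.mean()
    b = upper - upper.mean()
    corr = np.correlate(b, a, mode="full")
    lag = int(corr.argmax()) - (len(a) - 1)   # frames by which upper lags
    return dz / (lag * dt) if lag else float("inf")

# Synthetic check: the same turbulent brightness pattern reaches the
# upper row 12 frames after the lower row, at 120 frames/s.
rng = np.random.default_rng(2)
s = rng.normal(size=500)
v_est = rise_velocity(s[12:], s[:-12], dz=0.06, dt=1.0 / 120.0)
```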

  9. Computer detection of the rapid diffusion of fluorescent membrane fusion markers in images observed with video microscopy.

    PubMed Central

    Niles, W D; Li, Q; Cohen, F S

    1992-01-01

    We have developed an algorithm for automated detection of the dynamic pattern characterizing flashes of fluorescence in video images of membrane fusion. The algorithm detects the spatially localized, transient increases and decreases in brightness that result from the dequenching of fluorescent dye in phospholipid vesicles or lipid-enveloped virions fusing with a planar membrane. The flash is identified in video images by its nonzero time derivative and the symmetry of its spatial profile. Differentiation is implemented by forward and backward subtractions of video frames. The algorithm groups spatially connected pixels brighter than a user-specified threshold into distinct objects in forward- and backward-differentiated images. Objects are classified as either flashes or noise particles by comparing the symmetries of matched forward and backward difference profiles and then by tracking each profile in successive difference images. The number of flashes identified depends on the brightness threshold, the size of the convolution kernel used to filter the image, and the time difference between the subtracted video frames. When these parameters are changed so that the algorithm identifies an increasing percentage of the flashes recognized by eye, an increasing number of noise objects are mistakenly identified as flashes. These mistaken flashes can be eliminated by a human observer. The algorithm considerably shortens the time needed to analyze video data. Tested extensively with phospholipid vesicle and virion fusion with planar membranes, our implementation of the algorithm accurately determined the rate of fusion of influenza virions labeled with the lipophilic dye octadecylrhodamine (R18). PMID:1420909
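    The core of the detection scheme, forward/backward frame differencing followed by thresholded connected-component grouping, can be sketched as follows (a simplified stand-in, assuming scipy is available; the threshold, the omission of the smoothing kernel, and the synthetic movie are illustrative, not the published parameters):

```python
import numpy as np
from scipy import ndimage

def detect_flash_candidates(frames, t, dt=1, threshold=10.0):
    """Count flash-like blobs in frame t of a (T, H, W) luminance movie.

    A fusion flash brightens and then fades at a fixed location, so it must
    appear both in the forward difference (frame t minus frame t-dt) and in
    the backward difference (frame t minus frame t+dt)."""
    fwd = frames[t] - frames[t - dt]                 # intensity that rose
    bwd = frames[t] - frames[t + dt]                 # intensity that fell
    fwd_lbl, _ = ndimage.label(fwd > threshold)
    bwd_lbl, _ = ndimage.label(bwd > threshold)
    overlap = (fwd_lbl > 0) & (bwd_lbl > 0)          # matched blobs only
    _, n = ndimage.label(overlap)
    return n

# synthetic movie: one 3x3 flash that turns on and off at frame 2
frames = np.zeros((5, 32, 32))
frames[2, 10:13, 10:13] = 50.0
n = detect_flash_candidates(frames, t=2)
```

    A monotonically brightening or fading object appears in only one of the two difference images, so requiring a match in both is what rejects drifting particles while keeping transient flashes.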

  10. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma.

    PubMed

    Sano, Ryuichi; Peterson, Byron J; Teranishi, Masaru; Iwama, Naofumi; Kobayashi, Masahiro; Mukai, Kiyofumi; Pandya, Shwetang N

    2016-05-01

    A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using Tikhonov regularization with the criterion of minimum generalized cross validation, which is a standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is suitable for 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.
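    Tikhonov regularization with the minimum-GCV criterion, as used for the inversion above, can be sketched for a generic ill-posed problem (a toy Gaussian blur matrix stands in for the bolometer projection geometry; all names and sizes are illustrative):

```python
import numpy as np

def tikhonov_gcv(A, b, lams):
    """Tikhonov-regularized least squares with the regularization weight
    chosen by minimum generalized cross validation (GCV), via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = len(b)
    best_gcv, best_lam, best_x = np.inf, None, None
    for lam in lams:
        f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)                  # regularized solution
        resid = np.sum(((1 - f) * beta) ** 2) + max(b @ b - beta @ beta, 0.0)
        gcv = m * resid / (m - np.sum(f)) ** 2     # GCV score
        if gcv < best_gcv:
            best_gcv, best_lam, best_x = gcv, lam, x
    return best_lam, best_x

# toy ill-posed problem: smooth blur operator plus noisy data
rng = np.random.default_rng(1)
n = 40
i = np.arange(n)
A = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)  # smooth, ill-conditioned
x_true = np.sin(2 * np.pi * i / n)
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam, x = tikhonov_gcv(A, b, np.logspace(-6, 1, 50))
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    GCV picks the damping level from the data alone, which is what makes the criterion attractive for routine inversions of large measurement arrays.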

  11. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    PubMed

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To support the analysis, interpretation and evaluation of microscopic images used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is roughly four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedentedly high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make them suitable for self-study.

  12. OPTIMISATION OF OCCUPATIONAL RADIATION PROTECTION IN IMAGE-GUIDED INTERVENTIONS: EXPLORING VIDEO RECORDINGS AS A TOOL IN THE PROCESS.

    PubMed

    Almén, Anja; Sandblom, Viktor; Rystedt, Hans; von Wrangel, Alexa; Ivarsson, Jonas; Båth, Magnus; Lundh, Charlotta

    2016-06-01

    The overall purpose of this work was to explore how video recordings can contribute to the process of optimising occupational radiation protection in image-guided interventions. Video-recorded material from two image-guided interventions was produced and used to investigate to what extent it is conceivable to observe and assess dose-affecting actions in video recordings. Using the recorded material, it was to some extent possible to connect the choice of imaging techniques to the medical events during the procedure and, to a lesser extent, to connect these technical and medical issues to the occupational exposure. It was possible to identify a relationship between the occupational exposure level of staff and their positioning and use of shielding. However, detailed dose-rate values could not be read from the recordings, nor was it possible to identify changes in occupational exposure level resulting from adjustments of exposure settings. In conclusion, the use of video recordings is a promising tool to identify dose-affecting instances, allowing for a deeper knowledge of the interdependency between the management of the medical procedure, the applied imaging technology and the occupational exposure level. However, to obtain full information about the dose-affecting actions, the equipment used and the recording settings have to be thoroughly planned. PMID:27056142

  13. Transactions and Answer Judging in Multimedia Instruction: A Way to Transact with Features Appearing in Video and Graphic Images.

    ERIC Educational Resources Information Center

    Casey, Carl

    1992-01-01

    Discussion of transactions in computer-based instruction for ill-structured and visual domains focuses on two transactions developed for meteorology training that provide the capability to interact with video and graphic images at a very detailed level. Potential applications for the transactions are suggested, and early evaluation reports are…

  14. Stress-Induced Fracturing of Reservoir Rocks: Acoustic Monitoring and μCT Image Analysis

    NASA Astrophysics Data System (ADS)

    Pradhan, Srutarshi; Stroisz, Anna M.; Fjær, Erling; Stenebråten, Jørn F.; Lund, Hans K.; Sønstebø, Eyvind F.

    2015-11-01

    Stress-induced fracturing in reservoir rocks is an important issue for the petroleum industry. While productivity can be enhanced by a controlled fracturing operation, fracturing can also trigger borehole instability problems by reactivating existing fractures/faults in a reservoir. Safe fracturing can, however, improve the quality of operations during CO2 storage, geothermal installation and gas production in reservoir rocks. Therefore, understanding the fracturing behavior of different types of reservoir rocks is essential for planning field operations for these activities. In our study, stress-induced fracturing of rock samples has been monitored by acoustic emission (AE) and post-experiment computer tomography (CT) scans. We have used hollow cylinder cores of sandstones and chalks, which are representative reservoir rocks. The fracture-triggering stress has been measured for different rocks and compared with theoretical estimates. The population of AE events shows the locations of the main fracture arms, in good agreement with post-test CT image analysis, and the fracture patterns inside the samples are visualized through 3D image reconstructions. The amplitudes and energies of acoustic events clearly indicate initiation and propagation of the main fractures. Time evolution of the radial strain measured in the fracturing tests will later be compared to model predictions of fracture size.

  15. A novel imaging technique based on the spatial coherence of backscattered waves: demonstration in the presence of acoustical clutter

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Pinton, Gianmarco F.; Lediju, Muyinatu; Trahey, Gregg E.

    2011-03-01

    In the last 20 years, the number of suboptimal and inadequate ultrasound exams has increased. This trend has been linked to the increasing population of overweight and obese individuals. The primary causes of image degradation in these individuals are often attributed to phase aberration and clutter. Phase aberration degrades image quality by distorting the transmitted and received pressure waves, while clutter degrades image quality by introducing incoherent acoustical interference into the received pressure wavefront. Although significant research efforts have pursued the correction of image degradation due to phase aberration, few efforts have characterized or corrected image degradation due to clutter. We have developed a novel imaging technique that is capable of differentiating ultrasonic signals corrupted by acoustical interference. The technique, named short-lag spatial coherence (SLSC) imaging, is based on the spatial coherence of the received ultrasonic wavefront at small spatial distances across the transducer aperture. We demonstrate comparative B-mode and SLSC images using full-wave simulations that include the effects of clutter and show that SLSC imaging generates contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) that are significantly better than B-mode imaging under noise-free conditions. In the presence of noise, SLSC imaging significantly outperforms conventional B-mode imaging in all image quality metrics. We demonstrate the use of SLSC imaging in vivo and compare B-mode and SLSC images of human thyroid and liver.
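    The SLSC pixel value described above is, in essence, the normalized spatial correlation of per-channel receive signals averaged over element pairs and summed over a few short lags. A hedged numpy sketch (synthetic channel data; not the authors' beamforming code):

```python
import numpy as np

def slsc_value(channel_data, max_lag):
    """SLSC pixel value: normalized correlation between receive channels,
    averaged over element pairs and summed over lags 1..max_lag.

    channel_data: (N_elements, N_samples) signals for one pixel's kernel."""
    x = channel_data - channel_data.mean(axis=1, keepdims=True)
    total = 0.0
    for m in range(1, max_lag + 1):
        num = np.sum(x[:-m] * x[m:], axis=1)
        den = np.sqrt(np.sum(x[:-m] ** 2, axis=1) * np.sum(x[m:] ** 2, axis=1))
        total += np.mean(num / den)
    return total

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 128)
echo = np.sin(2 * np.pi * 8 * t)
coherent = np.tile(echo, (16, 1)) + 0.1 * rng.standard_normal((16, 128))
incoherent = rng.standard_normal((16, 128))    # clutter-like, uncorrelated
hi = slsc_value(coherent, max_lag=4)           # near max_lag for real echoes
lo = slsc_value(incoherent, max_lag=4)         # near zero for clutter
```

    A true backscattered wavefront stays coherent across neighboring elements while incoherent clutter does not, which is why the short-lag sum separates signal from acoustical interference.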

  16. Acoustic wavefield and Mach wave radiation of flashing arcs in strombolian explosion measured by image luminance

    NASA Astrophysics Data System (ADS)

    Genco, Riccardo; Ripepe, Maurizio; Marchetti, Emanuele; Bonadonna, Costanza; Biass, Sebastien

    2014-10-01

    Explosive activity often generates visible flashing arcs in the volcanic plume, considered to be evidence of shock-front propagation induced by supersonic dynamics. High-speed image processing is used to visualize the pressure wavefield associated with flashing arcs observed in strombolian explosions. Image luminance is converted into a virtual acoustic signal compatible with the signal recorded by a pressure transducer. Luminance variations move as a spherical front at a velocity of 344.7 m/s. Flashing arcs travel at the speed of sound as little as 14 m above the vent and are therefore not necessarily evidence of supersonic explosive dynamics. However, seconds later, the velocity of small fragments increases, and the spherical acousto-luminance wavefront becomes planar, recalling the Mach wave radiation generated by large-scale turbulence in high-speed jets. This planar wavefront forms a Mach angle of 55° with the explosive jet axis, suggesting explosive dynamics moving at a Mach number Mo = 1.22.
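    The quoted numbers are mutually consistent under the standard Mach-wave relation sin(mu) = 1/M, as this small check shows (the 344.7 m/s sound speed and the 55° angle are taken from the abstract):

```python
import math

# A disturbance convecting at Mach number M radiates plane Mach waves
# at angle mu to its trajectory, with sin(mu) = 1/M.
mu = math.radians(55.0)        # measured angle between wavefront and jet axis
M = 1.0 / math.sin(mu)         # inferred convective Mach number, ~1.22
speed = M * 344.7              # implied supersonic speed in m/s
```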

  17. Radon transform imaging: low-cost video compressive imaging at extreme resolutions

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Wang, Jian; Gupta, Mohit

    2016-05-01

    Most compressive imaging architectures rely on programmable light-modulators to obtain coded linear measurements of a signal. As a consequence, the properties of the light modulator place fundamental limits on the cost, performance, practicality, and capabilities of the compressive camera. For example, the spatial resolution of the single pixel camera is limited to that of its light modulator, which is seldom greater than 4 megapixels. In this paper, we describe a novel approach to compressive imaging that avoids the use of a spatial light modulator. In its place, we use novel cylindrical optics and a rotation gantry to directly sample the Radon transform of the image focused on the sensor plane. We show that the reconstruction problem is identical to sparse tomographic recovery and we can leverage the vast literature in compressive magnetic resonance imaging (MRI) to good effect. The proposed design has many important advantages over existing compressive cameras. First, we can achieve a resolution of N × N pixels using a sensor with N photodetectors; hence, with commercially available SWIR line-detectors with 10k pixels, we can potentially achieve spatial resolutions of 100 megapixels, a capability that is unprecedented. Second, our design scales more gracefully across wavebands of light since we only require sensors and optics that are optimized for the wavelengths of interest; in contrast, spatial light modulators like DMDs require expensive coatings to be effective in non-visible wavebands. Third, we can exploit properties of line-detectors including electronic shutters and pixels with large aspect ratios to optimize light throughput. On the flip side, a drawback of our approach is the need for moving components in the imaging architecture.
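    The measurement model, line integrals of the focused image at a sequence of gantry angles, can be simulated directly (an illustrative stand-in using scipy image rotation, not the paper's optics; a real pipeline would then hand the sinogram to a sparse tomographic solver):

```python
import numpy as np
from scipy import ndimage

def radon_measurements(img, angles_deg):
    """Simulated line-detector readouts: rotate the scene and integrate
    along one axis, one readout per gantry angle (a sinogram)."""
    sinogram = []
    for a in angles_deg:
        rot = ndimage.rotate(img, a, reshape=False, order=1)
        sinogram.append(rot.sum(axis=0))   # what an N-pixel line detector sees
    return np.array(sinogram)

# a 64x64 scene measured at 90 angles with a 64-pixel line detector
img = np.zeros((64, 64))
img[20:40, 25:35] = 1.0                    # bright rectangular target
sino = radon_measurements(img, np.linspace(0.0, 178.0, 90))
```

    Each readout uses only N photodetectors, yet K rotation angles give K·N linear measurements of the N × N scene, which is the basis of the resolution claim in the abstract.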

  18. Imaging the position-dependent 3D force on microbeads subjected to acoustic radiation forces and streaming.

    PubMed

    Lamprecht, Andreas; Lakämper, Stefan; Baasch, Thierry; Schaap, Iwan A T; Dual, Jurg

    2016-07-01

    Acoustic particle manipulation in microfluidic channels is becoming a powerful tool in microfluidics to control micrometer sized objects in medical, chemical and biological applications. By creating a standing acoustic wave in the channel, the resulting pressure field can be employed to trap or sort particles. To design efficient and reproducible devices, it is important to characterize the pressure field throughout the volume of the microfluidic device. Here, we used an optically trapped particle as probe to measure the forces in all three dimensions. By moving the probe through the volume of the channel, we imaged spatial variations in the pressure field. In the direction of the standing wave this revealed a periodic energy landscape for 2 μm beads, resulting in an effective stiffness of 2.6 nN m(-1) for the acoustic trap. We found that multiple fabricated devices showed consistent pressure fields. Surprisingly, forces perpendicular to the direction of the standing wave reached values of up to 20% of the main-axis-values. To separate the direct acoustic force from secondary effects, we performed experiments with different bead sizes, which attributed some of the perpendicular forces to acoustic streaming. This method to image acoustically generated forces in 3D can be used to either minimize perpendicular forces or to employ them for specific applications in novel acoustofluidic designs. PMID:27302661


  20. Acoustic radiation- and streaming-induced microparticle velocities determined by microparticle image velocimetry in an ultrasound symmetry plane.

    PubMed

    Barnkob, Rune; Augustsson, Per; Laurell, Thomas; Bruus, Henrik

    2012-11-01

    We present microparticle image velocimetry measurements of suspended microparticles of diameters from 0.6 to 10 μm undergoing acoustophoresis in an ultrasound symmetry plane in a microchannel. The motion of the smallest particles is dominated by the Stokes drag from the induced acoustic streaming flow, while the motion of the largest particles is dominated by the acoustic radiation force. For all particle sizes we predict theoretically how much of the particle velocity is due to radiation and streaming, respectively. These predictions include corrections for particle-wall interactions and ultrasonic thermoviscous effects and match our measurements within the experimental uncertainty. Finally, we predict theoretically and confirm experimentally that the ratio between the acoustic radiation- and streaming-induced particle velocities is proportional to the actuation frequency, the acoustic contrast factor, and the square of the particle size, while it is inversely proportional to the kinematic viscosity.
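    The final scaling result can be written down directly; the sketch below encodes only the stated proportionality (the dimensionless prefactor C is left unspecified and set to 1, so absolute values are meaningless and only ratios carry information):

```python
def velocity_ratio(f, phi, a, nu, C=1.0):
    """Ratio of radiation- to streaming-induced particle speed.
    Per the abstract it scales as f * phi * a**2 / nu, with f the actuation
    frequency, phi the acoustic contrast factor, a the particle radius and
    nu the kinematic viscosity. C is an unknown dimensionless prefactor."""
    return C * f * phi * a**2 / nu

# doubling the particle size quadruples the ratio, doubling the frequency
# doubles it, and doubling the viscosity halves it
base = velocity_ratio(f=2e6, phi=0.17, a=1e-6, nu=1e-6)
```

    This scaling explains the observation in the abstract that the smallest particles are streaming-dominated while the largest are radiation-dominated.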

  1. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  2. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  3. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1981-06-10

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

  4. Acoustic radiation force impulse and supersonic shear imaging versus transient elastography for liver fibrosis assessment.

    PubMed

    Sporea, Ioan; Bota, Simona; Jurchis, Ana; Sirli, Roxana; Grădinaru-Tascău, Oana; Popescu, Alina; Ratiu, Iulia; Szilaski, Milana

    2013-11-01

    Our study compared three elastographic methods--transient elastography (TE), acoustic radiation force impulse (ARFI) imaging and supersonic shear imaging (SSI)--with respect to the feasibility of their use in liver fibrosis evaluation. We also compared the performance of ARFI imaging and SSI, with TE as the reference method. The study included 332 patients, with or without hepatopathies, in which liver stiffness was evaluated using TE, ARFI and SSI. Reliable measurements were defined as a median value of 10 (TE, ARFI imaging) or 5 (SSI) liver stiffness measurements with a success rate ≥60% and an interquartile range interval <30%. A significantly higher percentage of reliable measurements were obtained using ARFI than by using TE and SSI: 92.1% versus 72.2% (p < 0.0001) and 92.1% versus 71.3% (p < 0.0001). Higher body mass index and older age were significantly associated with inability to obtain reliable measurements of liver stiffness using TE and SSI. In 55.4% of patients, reliable liver stiffness measurements were obtained using all three elastographic methods, and ARFI imaging and TE were similarly accurate in diagnosing significant fibrosis and cirrhosis, with TE as the reference method.
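    The reliability criterion quoted above (success rate ≥60%, interquartile range <30%) is easy to encode; a small sketch with invented example values, interpreting the IQR threshold as relative to the median:

```python
import numpy as np

def reliable(values, n_attempts, min_success=0.60, max_iqr_frac=0.30):
    """Reliability rule from the abstract: at least 60% of the attempted
    acquisitions succeed, and the interquartile range of the successful
    measurements is below 30% of their median."""
    if len(values) < min_success * n_attempts:
        return False
    q1, med, q3 = np.percentile(np.asarray(values, dtype=float), [25, 50, 75])
    return bool((q3 - q1) < max_iqr_frac * med)

tight = [7.1, 7.3, 7.0, 7.2, 7.4, 7.1, 7.2, 7.3, 7.0]    # kPa, invented
spread = [2.0, 9.0, 3.0, 15.0, 6.0, 12.0, 4.0, 11.0, 8.0]
ok = reliable(tight, n_attempts=10)       # 9/10 successes, narrow IQR
bad = reliable(spread, n_attempts=10)     # wide IQR fails the rule
few = reliable([7.0] * 5, n_attempts=10)  # only 50% success rate
```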

  5. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography

    NASA Astrophysics Data System (ADS)

    Ma, Jianguo; Martin, K. Heath; Li, Yang; Dayton, Paul A.; Shung, K. Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-05-01

    Imaging of the coronary vasa vasorum may enable assessment of vulnerable plaque development in the diagnosis of atherosclerotic disease. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for the design of intravascular acoustic angiography transducers.

  6. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography

    PubMed Central

    Ma, Jianguo; Martin, K. Heath; Li, Yang; Dayton, Paul A.; Shung, K. Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-01-01

    Imaging of the coronary vasa vasorum may enable assessment of vulnerable plaque development in the diagnosis of atherosclerotic disease. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with the low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for design of intravascular acoustic angiography transducers. PMID:25856384

  7. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography.

    PubMed

    Ma, Jianguo; Martin, K Heath; Li, Yang; Dayton, Paul A; Shung, K Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-05-01

    Imaging of the coronary vasa vasorum may enable assessment of vulnerable plaque development in the diagnosis of atherosclerotic disease. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for the design of intravascular acoustic angiography transducers. PMID:25856384

  8. Green's Function Retrieval and Marchenko Imaging in a Dissipative Acoustic Medium

    NASA Astrophysics Data System (ADS)

    Slob, Evert

    2016-04-01

    Single-sided Marchenko equations for Green's function construction and imaging relate the measured reflection response of a lossless heterogeneous medium to an acoustic wave field inside this medium. I derive two sets of single-sided Marchenko equations for the same purpose, each in a heterogeneous medium, with one medium being dissipative and the other a corresponding medium with negative dissipation. Double-sided scattering data of the dissipative medium are required as input to compute the surface reflection response in the corresponding medium with negative dissipation. I show that each set of single-sided Marchenko equations leads to Green's functions with a virtual receiver inside the medium: one exists inside the dissipative medium and one in the medium with negative dissipation. This forms the basis of imaging inside a dissipative heterogeneous medium. I relate the Green's functions to the reflection response inside each medium, from which the image can be constructed. I illustrate the method with a one-dimensional example that shows the image quality. The method has a potentially wide range of imaging applications where the material under test is accessible from two sides.

  9. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    PubMed Central

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-01-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development. PMID:26509413

  10. Super-image mosaic of infant retinal fundus: selection and registration of the best-quality frames from videos.

    PubMed

    Poletti, Enea; Benedetti, Giulio; Ruggeri, Alfredo

    2013-01-01

    Wide-field retinal fundus cameras are commercially available devices that allow acquisition of videos of a wide area of the infant eye, which is of clinical interest in screening for ROP (Retinopathy of Prematurity). Many frames of the video are often altered by defects such as artifacts, interlacing and defocus, which make the search for and selection of good frames to analyze critical and time consuming. We developed a computerized system that automatically selects the best still frames from the video and builds a mosaic from these images, allowing clinicians to examine a single large, best-quality image. The best frames are identified using several image quality parameters that measure sharpness and steadiness, and then registered to obtain a single mosaic image. A custom blending procedure is then applied in order to provide a final image with homogeneous luminosity and contrast, devoid of the dark areas typically present in the outer regions of single frames. The best-frame selection module showed a PPV of 0.92, while the visual inspection of resulting mosaics confirmed the remarkable capability of the proposed system to provide higher quality images. PMID:24111077
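    A minimal stand-in for the best-frame selection step: rank frames by a simple sharpness score (here the variance of a discrete Laplacian; the actual system also scores steadiness and may use different metrics):

```python
import numpy as np

def sharpness(frame):
    """Focus measure: variance of a 5-point discrete Laplacian
    (sharper frames give larger values)."""
    f = frame.astype(float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()

def best_frames(frames, k):
    """Indices (sorted) of the k sharpest frames of a (T, H, W) video."""
    scores = [sharpness(fr) for fr in frames]
    return sorted(np.argsort(scores)[::-1][:k].tolist())

# synthetic video: a crisp checkerboard between two defocused frames
sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, 1, (0, 1))) / 4       # 2x2 box blur flattens it
video = np.stack([blurred, sharp, blurred])
idx = best_frames(video, k=1)
```

    The selected frames would then be registered and blended into the mosaic.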

  11. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  12. Towards high-sensitivity and high-resolution submillimeter-wave video imaging

    NASA Astrophysics Data System (ADS)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Anders, Solveig; Zakosarenko, Viatcheslav; Schubert, Marco; Krause, Torsten; Krüger, André; Schulz, Marco; Meyer, Hans-Georg

    2011-05-01

    Against a background of newly emerged security threats, the well-established idea of utilizing submillimeter-wave radiation for personal security screening has recently evolved into a promising technology. Possible application scenarios demand sensitive, fast, flexible, and high-quality imaging techniques. At present, the best results are obtained by passive imaging using cryogenic microbolometers as radiation detectors. Building upon the concept of a passive submillimeter-wave stand-off video camera introduced previously, we present the evolution of this concept into a practical, application-ready imaging device. This has been achieved through a variety of measures, such as optimizing the detector parameters, improving the scanning mechanism, increasing the sampling speed, and enhancing the camera software. The image generation algorithm has been improved, and an automatic sensor calibration technique has been implemented that takes advantage of redundancy in the sensor data. The concept is based on a Cassegrain-type mirror optics, an opto-mechanical scanner providing spiraliform scanning traces, and an array of 20 superconducting transition-edge sensors (TES) operated at a temperature of 450-650 mK. The TES are cooled by a closed-cycle cooling system and read out by superconducting quantum interference devices (SQUIDs). The frequency band of operation centers around 350 GHz. The camera can operate at an object distance of 7-10 m. At 9 m distance it covers a field of view of 110 cm diameter, achieves a spatial resolution of 2 cm, and a pixel NETD (noise equivalent temperature difference) of 0.1-0.4 K. The maximum frame rate is 10 frames per second.
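    The stated figures are mutually consistent with a standard diffraction estimate. A minimal back-of-the-envelope check using the Rayleigh criterion (the camera's actual aperture diameter is not given in the abstract, so the value computed below is only what the stated resolution would imply):

```python
# Diffraction-limited resolution check for a 350 GHz imager (illustrative).
c = 3.0e8              # speed of light, m/s
f = 350e9              # centre frequency, Hz
wavelength = c / f     # ~0.86 mm

L = 9.0                # object distance, m
resolution = 0.02      # stated spatial resolution, m

# Rayleigh criterion: spot size ~ 1.22 * wavelength * L / D,
# so the aperture D implied by the stated figures is:
D = 1.22 * wavelength * L / resolution   # ~0.47 m
```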

  13. A Review on Video/Image Authentication and Tamper Detection Techniques

    NASA Astrophysics Data System (ADS)

    Parmar, Zarna; Upadhyay, Saurabh

    2013-02-01

    With innovations and developments in sophisticated video editing technology and the widespread availability of video information and services in our society, it is becoming increasingly important to assure the trustworthiness of video information. Therefore, in surveillance, medical, and various other fields, video content must be protected against attempts to manipulate it. Such malicious alterations could affect the decisions based on these videos. Many techniques have been proposed in the literature that assure the authenticity of video information, each in its own way. In this paper we present a brief survey of video authentication techniques with their classification. These techniques are generally classified into the following categories: digital signature based techniques, watermark based techniques, and other authentication techniques.
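    As a toy illustration of the digital-signature class of techniques, the sketch below (not from any surveyed paper) chains per-frame SHA-256 digests so that dropping, reordering, or altering a frame invalidates every later digest; `key` is a hypothetical shared secret:

```python
import hashlib

def sign_frames(frames, key=b"secret"):
    """Chain per-frame digests: each digest covers the frame bytes and the
    previous digest, so tampering propagates to all subsequent digests."""
    digest = hashlib.sha256(key).digest()
    chain = []
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
        chain.append(digest)
    return chain

frames = [b"frame0", b"frame1", b"frame2"]
chain = sign_frames(frames)
# Tampering with any frame changes every later digest.
tampered = sign_frames([b"frame0", b"frameX", b"frame2"])
assert chain[0] == tampered[0] and chain[1] != tampered[1]
```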

  14. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Musharraf Zaman, Ph.D.; Younane Abousleiman, Ph.D.

    2001-04-01

    The oil and gas industry has encountered significant problems in the production of oil and gas from weak rocks (such as chalks and limestones) and from unconsolidated sand formations. Problems include subsidence, compaction, sand production, and catastrophic shallow water sand flows during deep water drilling. Together these cost the petroleum industry hundreds of millions of dollars annually. The goal of this first quarterly report is to document progress on the project to provide data on the acoustic imaging and mechanical properties of soft rock and marine sediments. The project is intended to determine the geophysical rock properties (acoustic velocities) of weak, poorly cemented rocks and unconsolidated sands. In some cases these weak formations can create problems for reservoir engineers. For example, it cost Phillips Petroleum $1 billion to repair offshore production facilities damaged during the unexpected subsidence and compaction of the Ekofisk Field in the North Sea (Sulak 1991). Another example is the problem of shallow water flows (SWF) occurring in sands just below the seafloor, encountered during deep water drilling operations. In these cases the unconsolidated sands flow uncontrollably up the annulus of the borehole, resulting in loss of the drill casing. The $150 million loss of the Ursa development project in the U.S. Gulf Coast resulted from an uncontrolled SWF (Furlow 1998a,b; 1999a,b). The first three tasks outlined in the work plan are: (1) obtain rock samples, (2) construct new acoustic platens, and (3) calibrate and test the equipment. These have been completed as scheduled. Rock Mechanics Institute researchers at the University of Oklahoma have obtained eight different types of samples for the experimental program. These include: (a) Danian Chalk, (b) Cordoba Cream Limestone, (c) Indiana Limestone, (d) Ekofisk Chalk, (e) Oil Creek Sandstone, (f) unconsolidated Oil Creek sand, and (g) unconsolidated Brazos River sand.
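    The acoustic velocities the project aims to measure relate to the mechanical moduli through the standard isotropic-elasticity formulas; the moduli below are illustrative placeholders, not measured values from the project:

```python
from math import sqrt

def velocities(K, G, rho):
    """P- and S-wave velocities (m/s) from bulk modulus K, shear modulus G
    (both Pa) and density rho (kg/m^3):
        Vp = sqrt((K + 4G/3) / rho),  Vs = sqrt(G / rho)."""
    vp = sqrt((K + 4.0 * G / 3.0) / rho)
    vs = sqrt(G / rho)
    return vp, vs

# Illustrative (not measured) values for a weak, poorly cemented chalk:
vp, vs = velocities(K=10e9, G=5e9, rho=2200.0)
```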

  15. Variable ultrasound trigger delay for improved magnetic resonance acoustic radiation force imaging

    NASA Astrophysics Data System (ADS)

    Mougenot, Charles; Waspe, Adam; Looi, Thomas; Drake, James M.

    2016-01-01

    Magnetic resonance acoustic radiation force imaging (MR-ARFI) allows the quantification of microscopic displacements induced by ultrasound pulses, which are proportional to the local acoustic intensity. This study describes a new method to acquire MR-ARFI maps, which reduces the measurement noise in the quantification of displacement and improves its robustness in the presence of motion. Two MR-ARFI sequences were compared in this study. The first sequence, ‘variable MSG’, involves switching the polarity of the motion sensitive gradient (MSG) between odd and even image frames. The second sequence, ‘static MSG’, involves a variable ultrasound trigger delay to sonicate during the first or second MSG for odd and even image frames, respectively. As previously published, the data acquired with a variable MSG required reference data, acquired prior to any sonication, to process displacement maps. In contrast, data acquired with a static MSG were converted to displacement maps without using reference data acquired prior to the sonication. Displacement maps acquired with both sequences were compared by performing sonications under three different conditions: in a polyacrylamide phantom, in the leg muscle of a freely breathing pig, and in the leg muscle of a pig under apnea. The comparison of images acquired at even and odd image frames indicates that the sequence with a static MSG provides a significantly better steady state (p < 0.001 based on a Student’s t-test) than the images acquired with a variable MSG. In addition, no reference data prior to sonication were required to process displacement maps for data acquired with a static MSG. The absence of reference data prior to sonication provided a 41% reduction in the spatial distribution of noise (p < 0.001 based on a Student’s t-test) and reduced the sensitivity to motion for displacements acquired with a static MSG. No significant differences were expected and
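    For context on how such sequences convert MSG phase into microscopic displacement, here is a minimal sketch of the standard phase-to-displacement relation for a spin displaced during one gradient lobe; all parameter values are illustrative, not from this study:

```python
from math import pi

# Phase accrued by a spin displaced by d during a motion-sensitizing
# gradient (MSG) lobe of amplitude G and duration tau: phi = gamma * G * tau * d.
gamma = 2 * pi * 42.58e6     # proton gyromagnetic ratio, rad/s/T
G = 40e-3                    # MSG amplitude, T/m (illustrative)
tau = 3e-3                   # MSG duration, s (illustrative)
phi = 0.032                  # measured phase difference, rad (illustrative)

d = phi / (gamma * G * tau)  # displacement, m (~1 micron here)
```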

  16. A Marker-less Monitoring System for Movement Analysis of Infants Using Video Images

    NASA Astrophysics Data System (ADS)

    Shima, Keisuke; Osawa, Yuko; Bu, Nan; Tsuji, Tokuo; Tsuji, Toshio; Ishii, Idaku; Matsuda, Hiroshi; Orito, Kensuke; Ikeda, Tomoaki; Noda, Shunichi

    This paper proposes a marker-less motion measurement and analysis system for infants. The system calculates eight types of evaluation indices related to an infant's movement, such as “amount of body motion” and “activity of body”, from binary images that are extracted from video images using background differencing and frame differencing. Thus, medical doctors can intuitively understand the movements of infants without long-term observation, which may help support their diagnoses and detect disabilities and diseases at an early stage. The distinctive feature of this system is that the movements of infants can be measured without using any motion-capture markers, so it is expected that the natural and inherent tendencies of infants can be analyzed and evaluated. In this paper, the evaluation indices and features of movements of full-term infants (FTIs) and low birth weight infants (LBWIs) are compared using the developed prototype. We found that the amount of body motion and the symmetry of upper and lower body movements of LBWIs were lower than those of FTIs. The difference between the movements of FTIs and LBWIs can be evaluated using the proposed system.
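    A minimal sketch of a frame-difference style "amount of body motion" index as described above; the actual eight indices and their definitions are the authors', and this threshold-and-average version is an assumption:

```python
import numpy as np

def motion_index(frames, threshold=0.1):
    """'Amount of body motion': mean fraction of pixels that change
    significantly between consecutive frames (frame-difference binarisation)."""
    diffs = [np.abs(a.astype(float) - b.astype(float)) > threshold
             for a, b in zip(frames, frames[1:])]
    return float(np.mean([d.mean() for d in diffs]))

still = [np.zeros((8, 8))] * 3
moving = [np.zeros((8, 8)), np.ones((8, 8)), np.zeros((8, 8))]
assert motion_index(moving) > motion_index(still)
```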

  17. Adapted waveform analysis: a tool for audio, image, and video enhancement

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.; Woog, Lionel J.

    1997-02-01

    Adapted waveform analysis refers to a collection of FFT-like adapted transform algorithms. Given a signal, these methods provide specially matched collections of templates (orthonormal bases) enabling efficient extraction of structural components. As a result, various operations such as denoising, suppression of undesirable background, sharpening, and enhancement can be achieved efficiently. Perhaps the closest well-known example of such a coding method is provided by musical notation, where each segment of music is represented by a musical score made up of notes (templates) characterized by their duration, pitch, location, and amplitude; our method corresponds to transcribing the music in as few notes as possible. Since noise and static are difficult to describe efficiently, we obtain as a byproduct a denoised version of the sound. This transcription into a score can be developed into a mathematical musical orchestration, as described below. The extension to images and video is straightforward: we describe the image by collections of oscillatory patterns (paint brush strokes) of various sizes, locations, and amplitudes, using a variety of orthogonal bases.
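    In the spirit of "as few notes as possible", the toy sketch below applies a one-level Haar transform and hard-thresholds the detail coefficients; the authors' adapted best-basis algorithms are considerably more sophisticated than this single fixed basis:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar transform, hard-threshold the detail coefficients,
    invert. Noise is hard to describe compactly, so small details are dropped."""
    x = np.asarray(signal, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    det[np.abs(det) < threshold] = 0.0
    out = np.empty_like(x)
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0, 1.0], 16)       # piecewise-constant "score"
noisy = clean + 0.05 * rng.standard_normal(clean.size)
denoised = haar_denoise(noisy, threshold=0.15)
assert np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean()
```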

  18. Preliminary study of copper oxide nanoparticles acoustic and magnetic properties for medical imaging

    NASA Astrophysics Data System (ADS)

    Perlman, Or; Weitz, Iris S.; Azhari, Haim

    2015-03-01

    The implementation of multimodal imaging in medicine is highly beneficial, as different physical properties may provide complementary information, augmented detection ability, and diagnosis verification. Nanoparticles have recently been used as contrast agents for various imaging modalities. Their significant advantage over conventional large-scale contrast agents is the ability to detect disease at earlier stages, being less prone to obstacles on their path to the target region, and possible conjugation to therapeutics. Copper ions play an essential role in human health; they serve as a cofactor for multiple key enzymes involved in fundamental biochemical processes. Extremely small copper oxide nanoparticles (CuO-NPs) are readily soluble in water with high colloidal stability, yielding high bioavailability. The goal of this study was to examine the magnetic and acoustic characteristics of CuO-NPs in order to evaluate their potential to serve as a contrast agent for both MRI and ultrasound. CuO-NPs 7 nm in diameter were synthesized by a hot-solution method. The particles were scanned using a 9.4 T MRI and demonstrated a concentration-dependent T1 relaxation time shortening phenomenon. In addition, it was revealed that CuO-NPs can be detected using ultrasonic B-scan imaging. Finally, speed-of-sound based ultrasonic computed tomography was applied and showed that CuO-NPs can be clearly imaged. In conclusion, the preliminary results indicate that CuO-NPs may be imaged by both MRI and ultrasound. The results motivate additional in vivo studies, in which the clinical utility of fused images derived from both modalities for improved diagnosis will be studied.
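    Concentration-dependent T1 shortening is conventionally summarized by the relaxivity r1, the slope of 1/T1 versus agent concentration. A sketch with synthetic (not measured) values:

```python
import numpy as np

# Synthetic, illustrative T1 values versus CuO-NP concentration:
conc = np.array([0.0, 0.5, 1.0, 2.0])        # mM
r1_true, R1_0 = 0.8, 0.25                    # s^-1 mM^-1, s^-1 (assumed)
T1 = 1.0 / (R1_0 + r1_true * conc)           # concentration-dependent shortening

# Relaxivity r1 is the slope of the relaxation rate 1/T1 against concentration.
r1, intercept = np.polyfit(conc, 1.0 / T1, 1)
assert abs(r1 - r1_true) < 1e-6
```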

  19. Evaluation of real-time acoustical holography for breast imaging and biopsy guidance

    NASA Astrophysics Data System (ADS)

    Lehman, Constance D.; Andre, Michael P.; Fecht, Barbara A.; Johansen, Jennifer M.; Shelby, Ronald L.; Shelby, Jerod O.

    1999-05-01

    Ultrasound is an attractive modality for adjunctive characterization of certain breast lesions, but it is not considered specific for cancer and it is not recommended for screening. An imaging technique remarkably different from pulse-echo ultrasound, termed Optical Sonography™ (Advanced Diagnostics, Inc.), uses the through-transmission signal. The method was applied to breast examinations in 41 asymptomatic and symptomatic women ranging in age from 18 to 83 years to evaluate this imaging modality for detection and characterization of breast disease and normal tissue. This approach uses coherent sound and coherent light to produce real-time, large field-of-view images with pronounced edge definition in soft tissues of the body. The system patient interface was modified to improve coupling to the breast and bring the chest wall to within 3 cm of the sound beam. System resolution (full width at half maximum of the line-spread function) was 0.5 mm for a swept-frequency beam centered at 2.7 MHz. Resolution degrades slightly in the periphery of the very large 15.2-cm field of view. Dynamic range of the reconstructed 'raw' images (no post-processing) was 3000:1. Included in the study population were women with dense parenchyma, palpable ductal carcinoma in situ with negative mammography, superficial and deep fibroadenomas, and calcifications. Successful breast imaging was performed in 40 of 41 women. These images were then compared with images generated using conventional X-ray mammography and pulse-echo ultrasound. Margins of lesions and internal textures were particularly well defined and provided substantial contrast to fatty and dense parenchyma. In two malignant lesions, Optical Sonography™ appeared to approximate tumor extent relative to mammography more closely than pulse-echo sonography did.
These preliminary studies indicate the method has unique potential for detecting, differentiating, and guiding the biopsy of breast lesions using real-time acoustical holography.

  20. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion, which allow the most efficient detection of a single color against a complex background and under varying lighting, as well as detection of objects on a homogeneous background. We present the results of an analysis of segmentation algorithms of this type and discuss the possibility of implementing them in software. The implemented algorithm is computationally very expensive, which limits its application to video analysis; however, it allows the analysis of objects in an image when no image dictionary or knowledge base is available, and it addresses the problem of choosing optimal frame-quantization parameters for video analysis.
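    A minimal sketch of single-color detection after color-space conversion (here RGB to HSV via the standard library); the target hue, tolerances, and per-pixel loop are illustrative assumptions, and the explicit loop also shows why such methods can be computationally expensive:

```python
import colorsys
import numpy as np

def red_mask(rgb, hue_tol=0.05, min_sat=0.4):
    """Detect a single target colour (here: red) after RGB->HSV conversion,
    which separates chromaticity from lighting-dependent brightness."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):            # per-pixel loop: simple but slow
        for j in range(w):
            hue, sat, _val = colorsys.rgb_to_hsv(*rgb[i, j])
            if sat >= min_sat and (hue <= hue_tol or hue >= 1.0 - hue_tol):
                mask[i, j] = True
    return mask

img = np.full((4, 4, 3), 0.5)        # grey (complex-background stand-in)
img[1, 2] = (0.9, 0.1, 0.1)          # one red object pixel
mask = red_mask(img)
assert mask[1, 2] and mask.sum() == 1
```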

  1. Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.; Lichter, Michael J.

    1994-01-01

    The video event trigger (VET) processes video image data to generate a trigger signal when the image shows a significant change such as motion or the appearance, disappearance, change in color, change in brightness, or dilation of an object. The system aids efficient utilization of image-data-storage and image-data-processing equipment in applications in which many video frames show no changes, and in which it is wasteful to record and analyze all frames when only relatively few show changes of interest. Applications include video recording of automobile crash tests and automated video monitoring of entrances, exits, parking lots, and secure areas.
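    A minimal frame-differencing sketch of the trigger logic; the thresholds are illustrative assumptions, not the VET's actual parameters:

```python
import numpy as np

def event_trigger(prev, curr, pixel_thresh=0.1, frac_thresh=0.01):
    """Fire when the fraction of significantly changed pixels exceeds
    frac_thresh: motion, appearance/disappearance, or brightness change."""
    changed = np.abs(curr.astype(float) - prev.astype(float)) > pixel_thresh
    return changed.mean() > frac_thresh

quiet = np.zeros((16, 16))
event = quiet.copy()
event[4:8, 4:8] = 1.0                 # an object appears
assert not event_trigger(quiet, quiet)
assert event_trigger(quiet, event)
```

    Frames for which the trigger stays low could simply be discarded instead of recorded, which is the storage saving the abstract describes.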

  2. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

    Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth's internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of tomography of the damage. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is demonstrated on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive, previously calibrated 14-bit resolution TR-NEWS instrumentation. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which governs the structural health of the human tooth.

  3. Acoustic radiation force impulse imaging of vulnerable plaques: a finite element method parametric analysis

    PubMed Central

    Doherty, Joshua R.; Dumont, Douglas M.; Trahey, Gregg E.; Palmeri, Mark L.

    2012-01-01

    Plaque rupture is the most common cause of complications such as stroke and coronary heart failure. Recent histopathological evidence suggests that several plaque features, including a large lipid core and a thin fibrous cap, are associated with plaques most at risk for rupture. Acoustic Radiation Force Impulse (ARFI) imaging, a recently developed ultrasound-based elasticity imaging technique, shows promise for imaging these features noninvasively. Clinically, this could be used to distinguish vulnerable plaques, for which surgical intervention may be required, from those less prone to rupture. In this study, a parametric analysis using Finite-Element Method (FEM) models was performed to simulate ARFI imaging of five different carotid artery plaques across a wide range of material properties. It was demonstrated that ARFI could resolve the softer lipid pool from the surrounding, stiffer media and fibrous cap and was most dependent upon the stiffness of the lipid pool component. Stress concentrations due to an ARFI excitation were located in the media and fibrous cap components. In all cases, the maximum Von Mises stress was < 1.2 kPa. In comparing these results with others investigating plaque rupture, it is concluded that while the mechanisms may be different, the Von Mises stresses imposed by ARFI are orders of magnitude lower than the stresses associated with blood pressure. PMID:23122224
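    For scale, the Von Mises stress can be computed from the principal stresses in the usual way; the magnitudes below simply restate the paper's point that ARFI-induced stresses are far below blood-pressure-scale stresses (the specific values are illustrative):

```python
from math import sqrt

def von_mises(s1, s2, s3):
    """Von Mises stress from principal stresses (Pa)."""
    return sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

# Illustrative magnitudes: an ARFI push stress of order 1.2 kPa versus a
# uniaxial stress of order blood pressure (~13.3 kPa, i.e. ~100 mmHg).
arfi = von_mises(1.2e3, 0.0, 0.0)
bp = von_mises(13.3e3, 0.0, 0.0)
assert arfi < bp / 10
```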

  4. Photoacoustic and ultrasound imaging with a gas-coupled laser acoustic line detector

    NASA Astrophysics Data System (ADS)

    Johnson, Jami L.; van Wijk, Kasper; Caron, James N.; Timmerman, Miriam

    2016-03-01

    Conventional contacting transducers are highly sensitive and readily available for ultrasonic and photoacoustic imaging. On the other hand, optical detection can be advantageous when a small sensor footprint, large bandwidth, and no contact are essential. However, most optical methods utilizing interferometry or Doppler vibrometry rely on the reflection of light from the object. We present a non-contact detection method for photoacoustic and ultrasound imaging, termed Gas-Coupled Laser Acoustic Detection (GCLAD), that does not involve surface reflectivity. GCLAD measures the displacement along a line in the air parallel to the object. Information about point displacements along the line is lost with this method, but when GCLAD is used as an integrating line detector, resolution is increased over techniques that utilize finite point detectors. In this proceeding, we present a formula for quantifying surface displacement remotely with GCLAD. We validate this result by comparison with a commercial vibrometer. Finally, we present two-dimensional imaging results using GCLAD as a line detector for photoacoustic and laser-ultrasound imaging.

  5. Acoustically active liposome-nanobubble complexes for enhanced ultrasonic imaging and ultrasound-triggered drug delivery.

    PubMed

    Nguyen, An T; Wrenn, Steven P

    2014-01-01

    Ultrasound is well known as a safe, reliable imaging modality. A historical limitation of ultrasound, however, was its inability to resolve structures at length scales less than nominally 20 µm, which meant that classical ultrasound could not be used in applications such as echocardiography and angiogenesis where one requires the ability to image small blood vessels. The advent of ultrasound contrast agents, or microbubbles, removed this limitation and ushered in a new wave of enhanced ultrasound applications. In recent years, the microbubbles have been designed to achieve yet another application, namely ultrasound-triggered drug delivery. Ultrasound contrast agents are thus tantamount to 'theranostic' vehicles, meaning they can do both therapy (drug delivery) and imaging (diagnostics). The use of ultrasound contrast agents as drug delivery vehicles, however, is perhaps less than ideal when compared to traditional drug delivery vehicles (e.g., polymeric microcapsules and liposomes) which have greater drug carrying capacities. The drawback of the traditional drug delivery vehicles is that they are not naturally acoustically active and cannot be used for imaging. The notion of a theranostic vehicle is sufficiently intriguing that many attempts have been made in recent years to achieve a vehicle that combines the echogenicity of microbubbles with the drug carrying capacity of liposomes. The attempts can be classified into three categories, namely entrapping, tethering, and nesting. Of these, nesting is the newest, and perhaps the most promising.

  6. Comparison of ultrasound B-mode, strain imaging, acoustic radiation force impulse displacement and shear wave velocity imaging using real time clinical breast images

    NASA Astrophysics Data System (ADS)

    Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam

    2016-04-01

    It has been observed that many pathological processes increase the elastic modulus of soft tissue relative to normal. In order to image tissue stiffness using ultrasound, a mechanical compression is applied to the tissues of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression or strain imaging, which is based on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which is based on the force generated by focused ultrasound. When ultrasound is focused on tissue, a shear wave is generated in the lateral direction, and the shear wave velocity is related to the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging in clinical cancer diagnostics using real-time patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement imaging, and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging, and strain imaging, the observed image contrast and contrast-to-noise ratio were calculated for benign and malignant cancers. Observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, an unpaired Student's t-test was conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to strain imaging and B-mode for representing benign lesions.
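    The contrast and contrast-to-noise ratio (CNR) figures of merit used in such comparisons can be computed from region-of-interest statistics; the regions below are synthetic stand-ins for a stiff lesion and soft background, not patient data:

```python
import numpy as np

def contrast_cnr(lesion, background):
    """Image contrast (dB) and contrast-to-noise ratio between a lesion
    region of interest and the surrounding background."""
    mu_l, mu_b = lesion.mean(), background.mean()
    contrast_db = 20 * np.log10(mu_l / mu_b)
    cnr = abs(mu_l - mu_b) / np.sqrt(lesion.var() + background.var())
    return contrast_db, cnr

rng = np.random.default_rng(1)
stiff_lesion = 2.0 + 0.1 * rng.standard_normal(1000)     # synthetic ROI values
soft_background = 1.0 + 0.1 * rng.standard_normal(1000)
contrast_db, cnr = contrast_cnr(stiff_lesion, soft_background)
assert contrast_db > 0 and cnr > 1
```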

  7. Acoustically induced tissue displacement for shear wave elasticity imaging using MRI

    NASA Astrophysics Data System (ADS)

    Haworth, Kevin; Kripfgans, Oliver; Steele, Derek; Swanson, Scott; Sutin, Alexander; Sarvazyan, Armen

    2005-09-01

    Palpation detects tissue abnormalities by exploiting the vast range of elastic properties found in vivo. The method is limited by tactile sensitivity and the inability to probe tissues at depth. Recent efforts seek to remove these limitations by developing a medical imaging modality based on radiation-force shear wave excitation. Our approach uses an acoustic source to launch a shear wave in a tissue-mimicking phantom and MRI to record the microscopic displacements. Gelatin (10% wt/vol) was used for the tissue-mimicking phantom. Results for in situ elasticity were obtained using an air-backed 10-cm-diam piezoelectric crystal. To correct for future in vivo beam aberrations, we also employ a high-pressure 1-bit time-reversal acoustics (TRA) cavity. Frequency and pulse duration were selected to optimize the TRA system for acoustic output pressure. Shear wave displacements were recorded by MRI in 1-ms time increments in a complete basis that allowed for 3-D reconstruction and analysis. The Lamé coefficients are then derived from the shear wave velocity and attenuation.
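    The elasticity reconstruction rests on the standard relation between shear wave speed and shear modulus. A minimal sketch (the incompressibility approximation E ≈ 3μ is a common assumption for soft tissue, not a statement from the abstract):

```python
def shear_modulus(c_s, rho=1000.0):
    """Shear modulus mu = rho * c_s**2 from shear wave speed c_s (m/s) and
    density rho (kg/m^3); for soft, nearly incompressible tissue the
    Young's modulus is approximately E = 3 * mu."""
    mu = rho * c_s ** 2
    return mu, 3.0 * mu

# Illustrative: a 2 m/s shear wave in tissue-mimicking gelatin
mu, E = shear_modulus(2.0)    # mu = 4 kPa, E = 12 kPa
```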

  8. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR) images] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated into use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7+/-0.3) pixels and a mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.

  9. Evaluating the intensity of the acoustic radiation force impulse (ARFI) in intravascular ultrasound (IVUS) imaging: Preliminary in vitro results.

    PubMed

    Shih, Cho-Chiang; Lai, Ting-Yu; Huang, Chih-Chung

    2016-08-01

    The ability to measure the elastic properties of plaques and vessels is significant in clinical diagnosis, particularly for detecting a vulnerable plaque. A novel concept of combining intravascular ultrasound (IVUS) imaging and acoustic radiation force impulse (ARFI) imaging has recently been proposed. This method has potential in elastography for distinguishing between the stiffness of plaques and arterial vessel walls. However, the intensity of the acoustic radiation force requires calibration as a standard for the further development of an ARFI-IVUS imaging device that could be used in clinical applications. In this study, a dual-frequency transducer with 11 MHz and 48 MHz elements was used to measure the association between the biological tissue displacement and the applied acoustic radiation force. The output intensity of the acoustic radiation force generated by the pushing element ranged from 1.8 to 57.9 mW/cm², as measured using a calibrated hydrophone. The results reveal that all of the acoustic intensities produced by the transducer in the experiments were within the limits specified by FDA regulations and could still displace the biological tissues. Furthermore, blood clots with different hematocrits, which have elastic properties similar to the lipid pool of plaques, with stiffness ranging from 0.5 to 1.9 kPa, could be displaced from 1 to 4 μm, whereas the porcine arteries with stiffness ranging from 120 to 291 kPa were displaced from 0.4 to 1.3 μm when an acoustic intensity of 57.9 mW/cm² was used. The in vitro ARFI images of the artery with a blood clot and artificial arteriosclerosis showed a clear distinction of the stiffness distributions of the vessel wall. All the results reveal that ARFI-IVUS imaging has the potential to distinguish the elastic properties of plaques and vessels. Moreover, the acoustic intensity used in ARFI imaging has been experimentally quantified. Although the size of this two-element transducer is unsuitable for IVUS imaging, the

  10. Simultaneous bilateral real-time 3-d transcranial ultrasound imaging at 1 MHz through poor acoustic windows.

    PubMed

    Lindsey, Brooks D; Nicoletto, Heather A; Bennett, Ellen R; Laskowitz, Daniel T; Smith, Stephen W

    2013-04-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%-29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design, and fabrication of low-frequency (1.2 MHz), large-aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging (the ultrasound brain helmet), and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field.

  11. Acoustic characterization of ultrasound contrast microbubbles and echogenic liposomes: Applications to imaging and drug-delivery

    NASA Astrophysics Data System (ADS)

    Paul, Shirshendu

    Micron- to nanometer-sized ultrasound agents, like encapsulated microbubbles and echogenic liposomes (ELIPs), are being actively developed for possible clinical implementation in diagnostic imaging and ultrasound-mediated drug/gene delivery. The primary objective of this thesis is to characterize the acoustic behavior of, and the ultrasound-mediated contents release from, these contrast agents for developing multi-functional ultrasound contrast agents. Subharmonic imaging using contrast microbubbles can improve image quality by providing a higher signal-to-noise ratio. However, the design and development of contrast microbubbles with favorable subharmonic behavior requires accurate mathematical models capable of predicting their nonlinear dynamics. To this end, 'strain-softening' viscoelastic interfacial models of the encapsulation were developed and subsequently utilized to simulate the dynamics of encapsulated microbubbles. A hierarchical two-pronged approach to modeling --- a model is applied to one set of experimental data to obtain the model parameters (material characterization), and then the model is validated against a second independent experiment --- is demonstrated in this thesis for two lipid-coated (Sonazoid® and Definity®) and several polymer (polylactide) encapsulated microbubbles. The proposed models were successful in predicting several experimentally observed behaviors, e.g., low subharmonic thresholds and "compression-only" radial oscillations. Results indicate that neglecting the polydisperse size distribution of contrast agent suspensions, a common practice in the literature, can lead to inaccurate results. In vitro experimental investigation of the dependence of the subharmonic response of these microbubbles on ambient pressure is also in conformity with recent numerical investigations, showing either an increase or a decrease under appropriate excitation conditions.
Experimental characterization of the ELIPs and polymersomes was performed

  12. Imaging of Acoustically Coupled Oscillations Due to Flow Past a Shallow Cavity: Effect of Cavity Length Scale

    SciTech Connect

    P Oshkai; M Geveci; D Rockwell; M Pollack

    2004-05-24

    Flow-acoustic interactions due to fully turbulent inflow past a shallow axisymmetric cavity mounted in a pipe, which give rise to flow tones, are investigated using a technique of high-image-density particle image velocimetry in conjunction with unsteady pressure measurements. This imaging leads to patterns of velocity, vorticity, streamline topology, and hydrodynamic contributions to the acoustic power integral. Global instantaneous images, as well as time-averaged images, are evaluated to provide insight into the flow physics during tone generation. Emphasis is on the manner in which the streamwise length scale of the cavity alters the major features of the flow structure. These image-based approaches allow identification of regions of the unsteady shear layer that contribute to the instantaneous hydrodynamic component of the acoustic power, which is necessary to maintain a flow tone. In addition, combined image analysis and pressure measurements allow categorization of the instantaneous flow patterns that are associated with particular types of time traces and spectra of the fluctuating pressure. In contrast to considerations based solely on pressure spectra, it is demonstrated that locked-on tones may actually exhibit intermittent, non-phase-locked images, apparently due to low damping of the acoustic resonator. Locked-on flow tones (without modulation or intermittency), locked-on flow tones with modulation, and non-locked-on oscillations with short-term, highly coherent fluctuations are defined and represented by selected cases. Depending on which of these regimes occurs, the time-averaged Q (quality) factor and the dimensionless peak pressure are substantially altered.

  13. A Bayesian approach for characterization of soft tissue viscoelasticity in acoustic radiation force imaging.

    PubMed

    Zhao, Xiaodong; Pelegri, Assimina A

    2016-04-01

    Biomechanical imaging techniques based on acoustic radiation force (ARF) have been developed to characterize the viscoelasticity of soft tissue by measuring the motion excited by ARF non-invasively. The unknown stress distribution in the region of excitation limits an accurate inverse characterization of soft tissue viscoelasticity, and single degree-of-freedom simplified models have been applied to solve the inverse problem approximately. In this study, ARF-induced creep imaging is employed to estimate the time constant of a Voigt viscoelastic tissue model, and an inverse finite element (FE) characterization procedure based on a Bayesian formulation is presented. The Bayesian approach aims to provide a reasonable quantification of the probability distributions of soft tissue mechanical properties in the presence of measurement noise and model parameter uncertainty. Gaussian process metamodeling is applied to provide a fast statistical approximation based on a small number of computationally expensive FE model runs. Numerical simulation results demonstrate that the Bayesian approach provides an efficient and practical estimation of the probability distribution of the time constant in ARF-induced creep imaging. In a comparison study with the single degree-of-freedom models, the Bayesian approach with FE models improves the estimation results even in the presence of large uncertainty in the model parameters.
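
The statistical idea can be illustrated with a toy version of the inverse problem: a single degree-of-freedom Voigt creep response, u(t) = u∞(1 − e^(−t/τ)), fit by a grid Bayesian posterior over the time constant. This is a minimal sketch of the Bayesian estimation step only (all numbers are invented, and the amplitude is assumed known for simplicity), not the paper's FE/Gaussian-process pipeline:

```python
import numpy as np

def creep(t, u_inf, tau):
    # step response of a Voigt element: u(t) = u_inf * (1 - exp(-t/tau))
    return u_inf * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5e-3, 50)                 # 5 ms observation window
tau_true, u_inf, sigma = 1e-3, 2.0, 0.05       # tau [s], amplitude [um], noise
data = creep(t, u_inf, tau_true) + rng.normal(0.0, sigma, t.size)

# grid posterior over tau: flat prior, Gaussian noise model
taus = np.linspace(0.2e-3, 3e-3, 400)
log_like = np.array([-0.5 * np.sum((data - creep(t, u_inf, tau)) ** 2) / sigma**2
                     for tau in taus])
post = np.exp(log_like - log_like.max())
post /= post.sum()
tau_mean = float((taus * post).sum())          # posterior-mean estimate of tau
```

The posterior width, not shown here, is what quantifies the uncertainty the abstract refers to.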

  14. An application of time-reversed acoustics to the imaging of a salt-dome flank

    NASA Astrophysics Data System (ADS)

    Willis, M. E.; Lu, R.; Campman, X.; Toksöz, N.; Zhang, Y.; de Hoop, M. V.

    2005-12-01

    We present results of applying the concept of time-reversed acoustics (TRA) to the imaging of a salt-dome flank in a v(z) medium. A simulated multi-level walk-away VSP survey with sources at the surface and receivers in the borehole can be sorted into an equivalent reverse VSP (RVSP) with effective downhole sources and surface receivers. We apply the TRA process to the RVSP traces and create a zero-offset seismic section as if it had been collected from collocated downhole sources and receivers. This procedure effectively redatums the wavefield from the surface to the borehole, eliminating the need for any complicated processing. The redatumed traces are created by summing the autocorrelations of the traces in the RVSP common shot gather. In theory, each shot gather should be recorded by receivers that completely surround the source. For practical reasons, we only have available the RVSP receivers on the earth's surface, so we obtain an approximate zero-offset section. Even with this restriction, our example shows that the results are encouraging. The image of the salt-dome flank is created from the redatumed traces using a standard post-stack depth migration algorithm. This image compares favorably with the salt-dome flank model.
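
The redatuming rule stated in the abstract (sum the autocorrelations of the traces in a common shot gather) is simple enough to sketch directly. The toy spike gather below is illustrative only: a direct arrival and one reflection produce a correlation peak at the two-way lag between them.

```python
import numpy as np

def redatum_zero_offset(gather):
    """Sum the autocorrelations of all receiver traces in one reverse-VSP
    common shot gather to approximate a zero-offset trace at the new datum.
    gather: (n_receivers, n_samples) array."""
    n = gather.shape[1]
    stack = np.zeros(2 * n - 1)
    for rec in gather:
        stack += np.correlate(rec, rec, mode="full")
    return stack[n - 1:]          # keep non-negative lags (two-way time)

# toy trace: direct arrival at sample 10 and a reflection at sample 30,
# so the reflector shows up at two-way lag 20 in the redatumed trace
rec = np.zeros(64)
rec[10] = rec[30] = 1.0
trace = redatum_zero_offset(np.array([rec]))
```

In a real gather the summation over many receivers reinforces stationary reflection lags while non-stationary cross terms tend to cancel.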

  16. All-optical video-image encryption with enforced security level using independent component analysis

    NASA Astrophysics Data System (ADS)

    Alfalou, A.; Mansour, A.

    2007-10-01

    In the last two decades, wireless communications have been introduced in various applications. However, the transmitted data can be intercepted at any moment by non-authorized people, which explains why data encryption and secure transmission have gained enormous popularity. In order to secure data transmission, we should pay attention to two aspects: transmission rate and encryption security level. In this paper, we address these two aspects by proposing a new video-image transmission scheme. This new system consists of using the advantage of optical high transmission rates and some powerful signal processing tools to secure the transmitted data. The main idea of our approach is to secure the transmitted information at two levels: at the classical level, by using an adaptation of standard optical techniques, and at a second level (spatial diversity), by using independent transmitters. At the second level, a hacker would need to intercept not just one channel but all of them in order to retrieve the information. At the receiver, we can easily apply ICA algorithms to decrypt the received signals and retrieve the information.
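
On the receiver side, the ICA decryption step can be sketched with a two-channel toy model. The code below is a generic kurtosis-based ICA demonstration (whiten the mixtures, then scan rotations for the most non-Gaussian projections), not the authors' optical implementation; the sources, mixing matrix, and contrast function are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# two independent, non-Gaussian source streams (stand-ins for image channels)
s = np.vstack([np.sign(rng.standard_normal(n)) * rng.uniform(0.5, 1.0, n),
               rng.uniform(-1.0, 1.0, n)])
A = np.array([[0.9, 0.4],
              [0.3, 0.8]])                     # unknown channel mixing
x = A @ s                                      # what the receiver intercepts

# step 1: whiten the observations
x = x - x.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(np.cov(x))
z = (vecs / np.sqrt(vals)).T @ x               # identity covariance now

# step 2: find the rotation whose projections are most non-Gaussian
def excess_kurtosis(y):
    return np.mean(y**4) / np.mean(y**2)**2 - 3.0

angles = np.linspace(0.0, np.pi / 2, 181)
score = [abs(excess_kurtosis(np.cos(a) * z[0] + np.sin(a) * z[1])) +
         abs(excess_kurtosis(-np.sin(a) * z[0] + np.cos(a) * z[1]))
         for a in angles]
a = angles[int(np.argmax(score))]
R = np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])
recovered = R @ z             # the sources, up to order, sign, and scale
```

Production systems would use an established algorithm such as FastICA rather than this brute-force angle scan, but the separation principle is the same.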

  17. Sparse approximation using M-term pursuit and application in image and video coding.

    PubMed

    Rahmoune, Adel; Vandergheynst, Pierre; Frossard, Pascal

    2012-04-01

    This paper introduces a novel algorithm for sparse approximation in redundant dictionaries called the M-term pursuit (MTP). This algorithm decomposes a signal into a linear combination of atoms that are selected in order to represent the main signal components. The MTP algorithm provides an adaptive representation for signals in any complete dictionary. The basic idea behind the MTP is to partition the dictionary into L quasi-disjoint subdictionaries. A k-term signal approximation is then iteratively computed, where each iteration leads to the selection of M ≤ L atoms based on thresholding. The MTP algorithm is shown to achieve competitive performance with the matching pursuit (MP) algorithm that greedily selects atoms one by one. This is due to efficient partitioning of the dictionary. At the same time, the computational complexity is dramatically reduced compared to MP due to the batch selection of atoms. We finally illustrate the performance of MTP in image and video compression applications, where we show that the suboptimal atom selection of MTP is largely compensated by the reduction in complexity compared with MP.
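
The selection rule described above (partition the dictionary into L quasi-disjoint subdictionaries and pick at most one atom per subdictionary per iteration by thresholding) can be sketched as follows. The threshold rule and the MP-style coefficient update here are simplified guesses at the paper's formulation, shown on a trivial orthonormal dictionary:

```python
import numpy as np

def m_term_pursuit(signal, dictionary, blocks, n_iter=10, thresh=0.1):
    """Toy M-term pursuit: unit-norm dictionary columns are partitioned into
    `blocks` (index arrays); each iteration keeps, per block, the atom whose
    correlation with the residual exceeds `thresh` times the global maximum,
    then updates the residual with matching-pursuit-style subtractions."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        cmax = np.abs(corr).max()
        if cmax < 1e-12:
            break
        picked = []
        for block in blocks:
            j = block[np.argmax(np.abs(corr[block]))]
            if np.abs(corr[j]) >= thresh * cmax:
                picked.append(j)
        for j in picked:
            c = dictionary[:, j] @ residual
            coeffs[j] += c
            residual -= c * dictionary[:, j]
    return coeffs, residual

# demo: identity dictionary, two blocks of four atoms each
D_I = np.eye(8)
blocks = [np.arange(4), np.arange(4, 8)]
sig = np.array([3., 1., 0., 2., 0., 0., 1., 0.])
coeffs, res = m_term_pursuit(sig, D_I, blocks)
```

Because several atoms are accepted per pass, the residual shrinks in batches, which is the source of the complexity advantage over one-atom-at-a-time MP.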

  18. Application of Video Image Correlation Techniques to the Space Shuttle External Tank Foam Materials

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Nemeth, Michael P.

    2005-01-01

    Results that illustrate the use of a video-image-correlation-based displacement and strain measurement system to assess the effects of material nonuniformities on the behavior of the sprayed-on foam insulation (SOFI) used for the thermal protection system on the Space Shuttle External Tank are presented. Standard structural verification specimens for the SOFI material with and without cracks and subjected to mechanical or thermal loading conditions were tested. Measured full-field displacements and strains are presented for selected loading conditions to illustrate the behavior of the foam and the viability of the measurement technology. The results indicate that significant strain localization can occur in the foam because of material nonuniformities. In particular, elongated cells in the foam can interact with other geometric or material discontinuities in the foam and develop large-magnitude localized strain concentrations that likely initiate failures. Furthermore, some of the results suggest that continuum mechanics and linear elastic fracture mechanics might not adequately represent the physical behavior of the foam, and failure predictions based on homogeneous linear material models are likely to be inadequate.

  20. Behavior and identification of ephemeral sand dunes at the backshore zone using video images.

    PubMed

    Guimarães, Pedro V; Pereira, Pedro S; Calliari, Lauro J; Ellis, Jean T

    2016-09-01

    The backshore zone is a transitional environment strongly affected by ocean, air, and sand movements. On dissipative beaches, the formation of ephemeral dunes over the backshore zone makes a significant contribution to beach morphodynamics and the sediment budget. The aim of this work is to describe a novel method to identify ephemeral dunes in the backshore region and to discuss their morphodynamic behavior. The beach morphology is identified using Argus video imagery, which reveals the behavior of morphologies at Cassino Beach, Rio Grande do Sul, Brazil. Daily images from 2005 to 2007, topographic profiles, meteorological data, and sedimentological parameters were used to determine the frequency and pervasiveness of these features on the backshore. Results indicated that the coastline orientation relative to the dominant NE and E winds and the dissipative morphological beach state favored aeolian sand transport towards the backshore. Prevailing NE winds increase sand transport to the backshore, resulting in the formation of barchan, transverse, and barchanoid-linguoid dunes. Precipitation inhibits aeolian transport and ephemeral dune formation and maintains the existing morphologies during strong SE and SW winds, provided the storm surge is not too high.

  1. A new engineering approach to reveal correlation of physiological change and spontaneous expression from video images

    NASA Astrophysics Data System (ADS)

    Yang, Fenglei; Hu, Sijung; Ma, Xiaoyun; Hassan, Harnani; Wei, Dongqing

    2015-03-01

    Spontaneous expression is associated with physiological states, i.e., heart rate, respiration, oxygen saturation (SpO2%), and heart rate variability (HRV). There have not yet been sufficient efforts to explore the correlation between physiological change and spontaneous expression. This study aims to examine how spontaneous expression is associated with physiological changes, using an approved protocol and videos provided by the Denver Intensity of Spontaneous Facial Action Database. Unlike in a posed expression, motion artefact in spontaneous expression is one of the inevitable challenges to be overcome in the study. To obtain physiological signs from a region of interest (ROI), a new engineering approach is being developed with an artefact-reduction method that combines 3D active appearance model (AAM)-based tracking and affine-transformation-based alignment with opto-physiological-model-based imaging photoplethysmography. In addition, statistical association spaces are used to interpret the correlation of spontaneous expressions and physiological states, including their probability densities, by means of a Gaussian mixture model. The present work reveals a new avenue for studying associations between spontaneous expressions and physiological states, with prospective applications in physiological and psychological assessment.

  3. Characterization of anisotropic diffusion tensor of solute in tissue by video-FRAP imaging technique.

    PubMed

    Travascio, Francesco; Zhao, Weizhao; Gu, Wei Yong

    2009-04-01

    In this study, a new method for determination of an anisotropic diffusion tensor by a single fluorescence recovery after photobleaching (FRAP) experiment was developed. The method was based on two independent analyses of video-FRAP images: the fast Fourier transform and the Karhunen-Loève transform. Computer-simulated FRAP tests were used to evaluate the sensitivity of the method to experimental parameters, such as the initial size of the bleached spot, the choice of the frequencies used in the Fourier analysis, the orientation of the diffusion tensor, and experimental noise. The new method was also experimentally validated by determining the anisotropic diffusion tensor of fluorescein (332 Da) in bovine annulus fibrosus. The results obtained were in agreement with those reported in a previous study. Finally, the method was used to characterize fluorescein diffusion in bovine meniscus. Our findings indicate that fluorescein diffusion in bovine meniscus is anisotropic. This study provides a new tool for the determination of anisotropic diffusion tensor that could be used to investigate the correlation between the structure of biological tissues and their transport properties. PMID:19224367
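
The Fourier-space principle behind video-FRAP tensor recovery, namely that each spatial-frequency mode of the concentration field decays at rate Dxx·kx² + 2Dxy·kx·ky + Dyy·ky², can be sketched on synthetic data. This toy recovery by linear least squares over many modes is not the paper's combined FFT/Karhunen-Loève method; the tensor values and spot size are invented:

```python
import numpy as np

n, L = 64, 1.0
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
D = np.array([[1.0, 0.3],
              [0.3, 0.5]])                     # "true" anisotropic tensor
decay = D[0, 0] * kx**2 + 2 * D[0, 1] * kx * ky + D[1, 1] * ky**2

# initial bleached spot (Gaussian), evolved spectrally for dt seconds
xv = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(xv, xv, indexing="ij")
c0 = np.exp(-(X**2 + Y**2) / (2 * 0.05**2))
dt = 1e-3
F0 = np.fft.fft2(c0)
F1 = F0 * np.exp(-decay * dt)                  # exact solution of dc/dt = div(D grad c)

# recover the tensor from per-mode decay rates by linear least squares
mask = (np.abs(F0) > 1e-3 * np.abs(F0).max()) & (decay > 0)
rate = -np.log(np.abs(F1[mask]) / np.abs(F0[mask])) / dt
A = np.column_stack([kx[mask]**2, 2 * kx[mask] * ky[mask], ky[mask]**2])
dxx, dxy, dyy = np.linalg.lstsq(A, rate, rcond=None)[0]
```

With real video frames the same regression is run on noisy mode ratios, which is why the paper's sensitivity analysis of spot size and frequency choice matters.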

  4. Comparison of Inter-Observer Variability and Diagnostic Performance of the Fifth Edition of BI-RADS for Breast Ultrasound of Static versus Video Images.

    PubMed

    Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung

    2016-09-01

    Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for all descriptors, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images was similar to that of static images on breast ultrasonography according to the new edition of BI-RADS.
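
The kappa statistic used above to quantify inter-observer variability has a simple two-rater form; the study itself likely used per-descriptor, multi-reader variants, so the function below is only the textbook version:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels over the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    cats = sorted(set(r1) | set(r2))
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)
```

Perfect agreement gives kappa = 1; agreement no better than chance gives 0; systematic disagreement gives negative values.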

  5. Intracardiac Acoustic Radiation Force Impulse (ARFI) and Shear Wave Imaging in Pigs with Focal Infarctions

    PubMed Central

    Hollender, Peter; Bradway, David; Wolf, Patrick; Goswami, Robi; Trahey, Gregg

    2013-01-01

    Four pigs, three with focal infarctions in the apical interventricular septum (IVS) and/or left ventricular free wall (LVFW), were imaged with an intracardiac echocardiography (ICE) transducer. Custom beam sequences were used to excite the myocardium with focused acoustic radiation force (ARF) impulses and image the subsequent tissue response. Tissue displacement in response to the ARF excitation was calculated with a phase-based estimator, and transverse wave magnitude and velocity were each estimated at every depth. The excitation sequence was repeated rapidly, either in the same location to generate 40 Hz M-modes at a single steering angle, or with a modulated steering angle to synthesize 2-D displacement magnitude and shear wave velocity images at 17 points in the cardiac cycle. Both types of images were acquired from various views in the right and left ventricles, in and out of infarcted regions. In all animals, ARFI and shear wave elasticity imaging (SWEI) estimates indicated diastolic relaxation and systolic contraction in non-infarcted tissues. The M-mode sequences showed high beat-to-beat spatio-temporal repeatability of the measurements for each imaging plane. In views of non-infarcted tissue in the diseased animals, no significant elastic remodeling was indicated when compared to the control. Where available, views of infarcted tissue were compared to similar views from the control animal. In views of the LVFW, the infarcted tissue presented as stiff and non-contractile compared to the control. In one view of the IVS, no significant difference was seen between infarcted and healthy tissue, while in another view, a heterogeneous infarction presented as non-contractile in systole. PMID:25004538

  6. Extraction of Benthic Cover Information from Video Tows and Photographs Using Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Estomata, M. T. L.; Blanco, A. C.; Nadaoka, K.; Tomoling, E. C. M.

    2012-07-01

    Mapping benthic cover in deep waters comprises a very small proportion of studies in this field of research. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods, such as neural networks and rapid classification via downsampling. In this study, an attempt was made to use accurate bathymetric data obtained with a multi-beam echo sounder (MBES) as complementary data for the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rule sets on area, less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble, as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps with higher overall accuracy, 93.78±0.85%, than pixel-based methods, which had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
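
The area rule sets quoted above (at most 700 pixels for fish, 700 to 10,000 pixels for rubble, with standard deviation separating textured from smooth cover) can be sketched as a toy per-object classifier. The standard-deviation threshold and the fallback classes here are illustrative assumptions, not values from the paper:

```python
def classify_segment(area_px, std_dev, std_split=12.0):
    """Toy rule set in the spirit of the paper's OBIA rules: object area
    separates fish from rubble among textured objects, and a spectral
    standard-deviation split (value here is made up) separates textured
    cover from smooth cover."""
    if area_px <= 700 and std_dev > std_split:
        return "fish"
    if 700 < area_px <= 10000 and std_dev > std_split:
        return "rubble"
    return "coral" if std_dev > std_split else "sand"
```

In a real OBIA workflow these rules would be applied to segments produced by an image segmentation step, alongside shape and relative-distance criteria.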

  7. Integrated homeland security system with passive thermal imaging and advanced video analytics

    NASA Astrophysics Data System (ADS)

    Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert

    2007-04-01

    A complete detection, management, and control security system is absolutely essential to preempting criminal and terrorist assaults on key assets and critical infrastructure. According to Tom Ridge, former Secretary of the US Department of Homeland Security, "Voluntary efforts alone are not sufficient to provide the level of assurance Americans deserve and they must take steps to improve security." Further, it is expected that Congress will mandate private sector investment of over $20 billion in infrastructure protection between 2007 and 2015, which is incremental to funds currently being allocated to key sites by the Department of Homeland Security. Nearly 500,000 individual sites have been identified by the US Department of Homeland Security as critical infrastructure sites that would suffer severe and extensive damage if a security breach should occur. In fact, one major breach in any of 7,000 critical infrastructure facilities threatens more than 10,000 people, and one major breach in any of 123 facilities (identified as "most critical" among the 500,000) threatens more than 1,000,000 people. Current visible, night-vision, or near-infrared imaging technology alone has limited foul-weather viewing capability, poor nighttime performance, and limited nighttime range. And many systems today yield excessive false alarms, are managed by fatigued operators, are unable to manage the voluminous data captured, or lack the ability to pinpoint where an intrusion occurred.
In our 2006 paper, "Critical Infrastructure Security Confidence Through Automated Thermal Imaging", we showed how a highly effective security solution can be developed by integrating now-available "next-generation technologies", which include: thermal imaging, for the highly effective detection of intruders in the dark of night and in challenging weather conditions at the sensor imaging level (the passive thermal sensor level detection building block); and automated software detection

  8. Post Treatment of Acoustic Neuroma

    MedlinePlus


  9. Spatially resolved acoustic spectroscopy for rapid imaging of material microstructure and grain orientation

    NASA Astrophysics Data System (ADS)

    Smith, Richard J.; Li, Wenqi; Coulson, Jethro; Clark, Matt; Somekh, Michael G.; Sharples, Steve D.

    2014-05-01

    Measuring the grain structure of aerospace materials is very important to understand their mechanical properties and in-service performance. Spatially resolved acoustic spectroscopy is an acoustic technique utilizing surface acoustic waves to map the grain structure of a material. When combined with measurements in multiple acoustic propagation directions, the grain orientation can be obtained by fitting the velocity surface to a model. The new instrument presented here can take thousands of acoustic velocity measurements per second. The spatial and velocity resolution can be adjusted by simple modification to the system; this is discussed in detail by comparison of theoretical expectations with experimental data.

  10. A view of the world through the bat's ear: the formation of acoustic images in echolocation.

    PubMed

    Simmons, J A

    1989-11-01

Echolocating bats perceive objects as acoustic images derived from echoes of the ultrasonic sounds they emit. They can detect, track, identify, and intercept flying insects using sonar. Many species, such as the big brown bat, Eptesicus fuscus, emit frequency-modulated sonar sounds and perceive the distance to targets, or target range, from the delay of echoes. For Eptesicus, a point-target's image has a sharpness along the range axis that is determined by the acuity of echo-delay perception, which is about 10 ns under favorable conditions. The image as a whole has a fine range structure that corresponds to the cross-correlation function between emissions and echoes. A complex target--which has reflecting points, called "glints", located at slightly different distances and reflects echoes containing overlapping components with slightly different delays--is perceived in terms of its range profile. The separation of the glints along the range dimension is encoded by the shape of the echo spectrum created by interference between overlapping echo components. However, Eptesicus transforms the echo spectrum back into an estimate of the original delay separation of echo components. The bat thus converts spectral cues into elements of an image expressed in terms of range. The absolute range of the nearest glint is encoded by the arrival time of the earliest echo component, and the spectrally encoded range separation of additional glints is referred to this time-encoded reference range for the image as a whole. Each individual glint is represented by a cross-correlation function for its own echo component, the nearest of which is computed directly from arrival-time measurements while further ones are computed by transformation of the echo spectrum. The bat then sums the cross-correlation functions for multiple glints to form the entire image of the complex target. Range and shape are two distinct features of targets that are separately encoded by the bat's auditory system.
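The delay-from-cross-correlation idea described above can be sketched numerically: correlate the emitted FM sweep against the received echo and read the delay off the correlation peak. A minimal Python sketch; the sample rate, sweep parameters, and simulated delay are illustrative assumptions, not values from Simmons's experiments:

```python
import math

def fm_chirp(f0, f1, dur, fs):
    """Linear FM sweep -- a toy stand-in for a bat's downward-sweeping call."""
    n = int(dur * fs)
    return [math.sin(2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * dur)))
            for t in (i / fs for i in range(n))]

def delay_from_cross_correlation(emission, echo, fs):
    """Return the echo delay (s) at the peak of the cross-correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(echo) - len(emission) + 1):
        v = sum(e * echo[lag + i] for i, e in enumerate(emission))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag / fs

fs = 500_000                                   # 500 kHz sampling (assumed)
call = fm_chirp(80_000, 20_000, 0.002, fs)     # 2 ms, 80 -> 20 kHz sweep
echo = [0.0] * 1_500 + [0.3 * s for s in call] + [0.0] * 100  # 3 ms delay
print(delay_from_cross_correlation(call, echo, fs))  # -> 0.003
```

A real receiver would use an FFT-based correlation for speed; the brute-force loop here just makes the matched-filter logic explicit.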

  11. How video image size interacts with evidence strength, defendant emotion, and the defendant-victim relationship to alter perceptions of the defendant.

    PubMed

    Heath, Wendy P; Grannemann, Bruce D

    2014-01-01

    Courtroom video presentations can range from images on small screens installed in the jury box to images on courtroom video monitors or projection screens. Does video image size affect jurors' perceptions of information presented during trials? To investigate this we manipulated video image size as well as defendant emotion level presented during testimony (low, moderate), the defendant-victim relationship (spouses, strangers), and the strength of the evidence (weak, strong). Participants (N=263) read a case and trial summary, watched video of defendant testimony, and then answered a questionnaire. Larger screens generally accentuated what was presented (e.g., made stronger evidence seem stronger and weaker evidence seem weaker), acting mainly upon trial outcome variables (e.g., verdict). Non-trial outcomes (e.g., defendant credibility) were generally affected by defendant emotion level and the defendant-victim relationship. Researchers and attorneys presenting video images need to recognize that respondents may evaluate videotaped trial evidence differently as a function of how video evidence is presented.

  12. High-fidelity video and still-image communication based on spectral information: natural vision system and its applications

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki

    2006-01-01

Alongside the great advances in high-resolution and large-screen imaging technology, color is now receiving considerable attention as an aspect distinct from image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and this obstructs applications of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitations of conventional RGB 3-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.
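The core of spectrum-based color reproduction is integrating a measured spectral power distribution against color-matching functions, rather than sampling through three fixed RGB filters. A rough Python sketch, using crude single-Gaussian stand-ins for the CIE 1931 functions (shapes and peaks are approximations for illustration only, not colorimetric data from the NV project):

```python
import math

def gauss(wl, mu, sigma):
    return math.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def cmf(wl):
    """Crude Gaussian stand-ins for the CIE 1931 x,y,z color-matching
    functions (peak positions and heights are rough approximations)."""
    x = 1.06 * gauss(wl, 599, 38) + 0.36 * gauss(wl, 446, 19)
    y = 1.00 * gauss(wl, 556, 47)
    z = 1.78 * gauss(wl, 449, 22)
    return x, y, z

def spectrum_to_xyz(spd, wl_start=380.0, wl_step=5.0):
    """Integrate a sampled spectral power distribution to XYZ tristimulus."""
    X = Y = Z = 0.0
    for i, power in enumerate(spd):
        xb, yb, zb = cmf(wl_start + i * wl_step)
        X += power * xb * wl_step
        Y += power * yb * wl_step
        Z += power * zb * wl_step
    return X, Y, Z

flat = [1.0] * 81                 # equal-energy spectrum, 380-780 nm
X, Y, Z = spectrum_to_xyz(flat)
```

Two physically different spectra can integrate to the same XYZ (metamerism), which is exactly the information a multispectral capture preserves and an RGB capture discards.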

  13. Short term exposure to attractive and muscular singers in music video clips negatively affects men's body image and mood.

    PubMed

    Mulgrew, K E; Volcevski-Kostas, D

    2012-09-01

    Viewing idealized images has been shown to reduce men's body satisfaction; however no research has examined the impact of music video clips. This was the first study to examine the effects of exposure to muscular images in music clips on men's body image, mood and cognitions. Ninety men viewed 5 min of clips containing scenery, muscular or average-looking singers, and completed pre- and posttest measures of mood and body image. Appearance schema activation was also measured. Men exposed to the muscular clips showed poorer posttest levels of anger, body and muscle tone satisfaction compared to men exposed to the scenery or average clips. No evidence of schema activation was found, although potential problems with the measure are noted. These preliminary findings suggest that even short term exposure to music clips can produce negative effects on men's body image and mood.

  15. Imaging velocity and attenuation anomalies in mining environments using Acoustic Emissions

    NASA Astrophysics Data System (ADS)

    Cesca, S.; Monna, S.; Kaiser, D.; Dahm, T.

    2012-04-01

Imaging structural properties and monitoring fracturing processes in mining environments is important for mining exploitation. It also helps characterize damage induced by mining activities, and is thus of primary interest for mining engineering and civil protection. Additionally, the development of improved monitoring and imaging methods is of great importance for salt deposits as potential reservoirs for CO2 sequestration. The analysis of Acoustic Emission (AE) and microseismicity data, which are routinely collected in mine surveys, is typically limited to locating induced microcracks and seismicity. Here, AE data are further analysed to obtain images of the seismic structure. We focus on an AE dataset recorded at the Morsleben salt mine, Germany; the dataset contains more than 1 million events recorded over a period of two months, with AE magnitudes spanning 5 units. Arrival times of first P and S onsets, as well as the maximal amplitudes recorded for both seismic phases, are used to assess the seismic velocities and attenuation properties of the mining environment. Given the large size of the dataset, the events are first clustered spatially and a spatially homogeneous catalog of averaged "pseudo-events" is built. This new catalog is then used to produce first averaged images of the attenuation and velocity anomalies at specific depths. Results point to clear velocity and attenuation anomalies, which correlate with the main structural features and the geometry of the salt body. The potential of the dataset for tomographic applications is investigated using both synthetic simulations and real data. This study is funded by the project MINE, which is part of the R&D-Programme GEOTECHNOLOGIEN. The project MINE is funded by the German Ministry of Education and Research (BMBF), Grant of project BMBF03G0737.
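The clustering step that collapses a million-event catalog into averaged "pseudo-events" can be sketched as a simple voxel average. The cell size, event fields, and unweighted mean below are hypothetical choices for illustration; the abstract does not specify the authors' scheme:

```python
from collections import defaultdict

def pseudo_events(events, cell=50.0):
    """Collapse a large AE catalog into one averaged 'pseudo-event' per
    spatial cell.  Each event is (x, y, z, t_p): coordinates in metres plus
    a P arrival time; fields and the 50 m cell size are illustrative."""
    cells = defaultdict(list)
    for ev in events:
        key = tuple(int(c // cell) for c in ev[:3])
        cells[key].append(ev)
    averaged = []
    for members in cells.values():
        n = len(members)
        averaged.append(tuple(sum(ev[i] for ev in members) / n
                              for i in range(len(members[0]))))
    return averaged

catalog = [(12.0, 8.0, 100.0, 0.031), (18.0, 4.0, 110.0, 0.029),
           (220.0, 95.0, 40.0, 0.055)]
print(len(pseudo_events(catalog)))  # -> 2 (first two events share a cell)
```

Averaging within cells evens out the wildly uneven spatial sampling of induced seismicity before the catalog is fed to a velocity or attenuation inversion.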

  16. Quantitative observations of a deep-sea hydrothermal plume using an acoustic imaging sonar

    NASA Astrophysics Data System (ADS)

    Xu, Guangyu

The Cabled Observatory Vent Imaging Sonar (COVIS) is used to quantitatively monitor the hydrothermal discharge from the Grotto mound, a venting sulfide structure on the Endeavour Segment of the Juan de Fuca Ridge. Since its deployment in September 2010, COVIS has recorded a multi-year, near-continuous acoustic backscatter dataset. Analysis of this dataset sheds light on the backscattering mechanisms within the buoyant plumes above Grotto and yields quantitative information on the influences of oceanic, atmospheric, and geological processes on the dynamics and heat source of the plumes. An investigation of the acoustic scattering mechanisms within the buoyant plumes issuing from Grotto suggests that the dominant mechanism is temperature fluctuation caused by turbulent mixing of the buoyant plumes with the ambient seawater. In comparison, backscatter from plume particles is negligible at lower levels of the plume but can potentially be significant at higher levels. This finding demonstrates the potential of inverting the acoustic backscatter to estimate temperature fluctuations within the plumes. Processing the backscatter dataset recorded by COVIS yields time-series measurements of the vertical flow rate, volume transport, and expansion rate of the largest buoyant plume above Grotto. Further analysis of those time series suggests that the rate at which ambient seawater is entrained into the plume increases with the magnitude of the ambient ocean currents---the current-driven entrainment. Furthermore, oscillations in the ambient ocean currents driven by tidal and atmospheric forcing are introduced into the flow field within the plume through this current-driven entrainment. An inverse method has been developed to estimate the source heat transport driving the largest plume above Grotto from its volume transport estimates. The result suggests the heat transport driving the plume was
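Inversions from volume transport to source heat flux typically rest on classical plume theory; one widely quoted scaling (Morton, Taylor & Turner) has the volume flux of a pure plume growing as Q ≈ 0.15 B^(1/3) z^(5/3), with B the buoyancy flux. A hedged Python sketch of that round trip; the constant 0.15 and the seawater properties are textbook values, not the dissertation's calibrated model:

```python
def volume_flux(B, z, C=0.15):
    """MTT pure-plume scaling: Q = C * B**(1/3) * z**(5/3) (SI units)."""
    return C * B ** (1 / 3) * z ** (5 / 3)

def buoyancy_flux(Q, z, C=0.15):
    """Invert the scaling above for the source buoyancy flux B."""
    return (Q / (C * z ** (5 / 3))) ** 3

def heat_transport(Q, z, rho=1030.0, cp=3990.0, g=9.81, beta=2.0e-4):
    """Convert buoyancy flux to heat transport, H = rho*cp*B / (g*beta);
    beta is an assumed thermal expansion coefficient for seawater."""
    return rho * cp * buoyancy_flux(Q, z) / (g * beta)

# Round trip: an illustrative source observed 20 m above the vent
B = 2.4e-3                           # buoyancy flux, m^4 s^-3 (assumed)
Q = volume_flux(B, 20.0)             # what the sonar would measure
H = heat_transport(Q, 20.0)          # ~MW-scale for these numbers
```

The cubing in the inversion is why heat-flux estimates from acoustic volume transports are sensitive to measurement error: a 10% error in Q becomes roughly a 30% error in B and H.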

  17. An electrochemical and high-speed imaging study of micropore decontamination by acoustic bubble entrapment.

    PubMed

    Offin, Douglas G; Birkin, Peter R; Leighton, Timothy G

    2014-03-14

Electrochemical and high-speed imaging techniques are used to study the ability of ultrasonically activated bubbles to clean out micropores. Cylindrical pores with dimensions (diameter × depth) of 500 μm × 400 μm (aspect ratio 0.8), 125 μm × 350 μm (aspect ratio 2.8) and 50 μm × 200 μm (aspect ratio 4.0) are fabricated in glass substrates. Each pore is contaminated by filling it with an electrochemically inactive blocking organic material (thickened methyl salicylate) before the substrate is placed in a solution containing an electroactive species (Fe(CN)6(3-)). An electrode is fabricated at the base of each pore and the Faradaic current is used to monitor the decontamination as a function of time. For the largest pore, decontamination driven by ultrasound (generated by a horn-type transducer) and by bulk fluid flow are compared. It is shown that ultrasound is much more effective than flow alone, and that bulk fluid flow at the rates used cannot decontaminate the pore completely, but that ultrasound can. In the case of the 125 μm pore, high-speed imaging is used to elucidate the cleaning mechanisms involved in ultrasonic decontamination and reveals that acoustic bubble entrapment is a key feature. The smallest pore is used to explore the limits of decontamination and it is found that ultrasound is still effective at this size under the conditions employed.

  18. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost ~$11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450--500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to start or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the
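The quoted photo capacity is easy to sanity-check: 98,000 frames at one every 7 seconds works out to just under the stated 8 days. A quick arithmetic check in Python:

```python
def recording_days(n_frames, interval_s):
    """Days of time-lapse coverage for a given frame budget and interval."""
    return n_frames * interval_s / 86_400  # 86,400 seconds per day

print(round(recording_days(98_000, 7), 2))  # -> 7.94
```

The same helper gives quick what-ifs for other trigger intervals, e.g. a 30-second interval would stretch the same card to about 34 days.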

  19. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
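One common way a still frame is improved from video is by stacking co-registered frames, which averages away sensor noise. The NASA code described above also performs registration and resolution enhancement; this toy Python sketch shows only the averaging step, on synthetic grayscale frames:

```python
import random

def stack_frames(frames):
    """Pixel-wise average of co-registered grayscale frames (lists of rows)."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

random.seed(1)
truth = [[100.0] * 8 for _ in range(6)]              # the "real" scene
noisy = [[[p + random.gauss(0, 10) for p in row] for row in truth]
         for _ in range(32)]                         # 32 noisy video frames
stacked = stack_frames(noisy)
# Averaging 32 frames cuts the noise std by about sqrt(32) ~ 5.7x
```

The sqrt(N) noise reduction is why a stacked still can look far cleaner than any single frame of the source video.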

  20. Modular video endoscopy for in vivo cross-polarized and vital-dye fluorescence imaging of Barrett's-associated neoplasia

    NASA Astrophysics Data System (ADS)

    Thekkek, Nadhi; Pierce, Mark C.; Lee, Michelle H.; Polydorides, Alexandros D.; Flores, Raja M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca R.

    2013-02-01

A modular video endoscope is developed and tested to allow imaging in different modalities. This system incorporates white light imaging (WLI), cross-polarized imaging (CPI), and vital-dye fluorescence imaging (VFI), using interchangeable filter modules. CPI and VFI are novel endoscopic modalities that probe mucosal features associated with Barrett's neoplasia. CPI enhances vasculature, while VFI enhances glandular architecture. In this pilot study, we demonstrate the integration of these modalities by imaging areas of Barrett's metaplasia and neoplasia in an esophagectomy specimen. We verify that those key image features are also observed during an in vivo surveillance procedure. CPI images demonstrate improved visualization of branching blood vessels associated with neoplasia. VFI images show glandular architecture with increased glandular effacement associated with neoplasia. Results suggest that important pathologic features seen in CPI and VFI are not visible during standard endoscopic white light imaging, and thus the modalities may be useful in future in vivo studies for discriminating neoplasia from Barrett's metaplasia. We further demonstrate that the integrated WLI/CPI/VFI endoscope is compatible with complementary high-resolution endomicroscopy techniques such as the high-resolution microendoscope, potentially enabling two-step ("red-flag" widefield plus confirmatory high-resolution imaging) protocols to be enhanced.