Sample records for acoustic video images

  1. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    , doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the capabilities needed for researching video-acoustic signal extraction. ATCOM is currently a powerful tool for the visual enhancement of telescopic views distorted by atmospheric turbulence. In order to explore the potential of acoustic signal recovery from video imagery, we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.

  2. Detecting Human Activity Using Acoustic, Seismic, Accelerometer, Video, and E-field Sensors

    DTIC Science & Technology

    2011-09-01

    Detecting Human Activity Using Acoustic, Seismic, Accelerometer, Video, and E-field Sensors, by Sarah H. Walker and Geoffrey H. Goldman, Adelphi, MD 20783-1197, ARL-TR-5729, September 2011.

  3. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
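
    The nested pixel-block translation step described above can be sketched as an exhaustive block-matching search. This is an illustrative reconstruction of the idea, not the patented implementation; the block size, search radius, and error metric (sum of absolute differences) are assumptions.

```python
import numpy as np

def block_translation(key_block, new_field, top, left, search=4):
    """Estimate the integer-pixel translation of a key-field block by
    exhaustive search, minimizing the sum of absolute differences."""
    h, w = key_block.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > new_field.shape[0] or x + w > new_field.shape[1]:
                continue  # candidate window falls outside the new field
            err = np.abs(new_field[y:y + h, x:x + w] - key_block).sum()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Shift a random field by (2, -3) pixels and recover the translation.
rng = np.random.default_rng(0)
field = rng.random((64, 64))
shifted = np.roll(field, (2, -3), axis=(0, 1))
block = field[16:32, 16:32]
print(block_translation(block, shifted, 16, 16))  # (2, -3)
```

    Repeating this search for each nested sub-block yields the per-block translations from which magnification, rotation, and overall translation are then derived.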

  4. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  5. Magneto-photo-acoustic imaging

    PubMed Central

    Qu, Min; Mallidi, Srivalleesha; Mehrmohammadi, Mohammad; Truby, Ryan; Homan, Kimberly; Joshi, Pratixa; Chen, Yun-Sheng; Sokolov, Konstantin; Emelianov, Stanislav

    2011-01-01

    Magneto-photo-acoustic imaging, a technique based on the synergy of magneto-motive ultrasound, photoacoustic and ultrasound imaging, is introduced. Hybrid nanoconstructs, liposomes encapsulating gold nanorods and iron oxide nanoparticles, were used as a dual-contrast agent for magneto-photo-acoustic imaging. Tissue-mimicking phantom and macrophage cells embedded in ex vivo porcine tissue were used to demonstrate that magneto-photo-acoustic imaging is capable of visualizing the location of cells or tissues labeled with dual-contrast nanoparticles with sufficient contrast, excellent contrast resolution and high spatial resolution in the context of the anatomical structure of the surrounding tissues. Therefore, magneto-photo-acoustic imaging is capable of identifying the nanoparticle-labeled pathological regions from the normal tissue, providing a promising platform to noninvasively diagnose and characterize pathologies. PMID:21339883

  6. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  7. Investigation of an acoustical holography system for real-time imaging

    NASA Astrophysics Data System (ADS)

    Fecht, Barbara A.; Andre, Michael P.; Garlick, George F.; Shelby, Ronald L.; Shelby, Jerod O.; Lehman, Constance D.

    1998-07-01

    A new prototype imaging system based on ultrasound transmission through the object of interest -- acoustical holography -- was developed which incorporates significant improvements in acoustical and optical design. This system is being evaluated for potential clinical application in the musculoskeletal system, interventional radiology, pediatrics, monitoring of tumor ablation, vascular imaging and breast imaging. System limiting resolution was estimated using a line-pair target with decreasing line thickness and equal separation. For a swept frequency beam from 2.6-3.0 MHz, the minimum resolution was 0.5 lp/mm. Apatite crystals were suspended in castor oil to approximate breast microcalcifications. Crystals from 0.425-1.18 mm in diameter were well resolved in the acoustic zoom mode. Needle visibility was examined with both a 14-gauge biopsy needle and a 0.6 mm needle. The needle tip was clearly visible throughout the dynamic imaging sequence as it was slowly inserted into a RMI tissue-equivalent breast biopsy phantom. A selection of human images was acquired in several volunteers: a 25-year-old female volunteer with normal breast tissue, a lateral view of the elbow joint showing muscle fascia and tendon insertions, and the superficial vessels in the forearm. Real-time video images of these studies will be presented. In all of these studies, conventional sonography was used for comparison. These preliminary investigations with the new prototype acoustical holography system showed favorable results in comparison to state-of-the-art pulse-echo ultrasound and demonstrate it to be suitable for further clinical study. The new patient interfaces will facilitate orthopedic soft tissue evaluation, study of superficial vascular structures and potentially breast imaging.

  8. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  9. First images of thunder: Acoustic imaging of triggered lightning

    NASA Astrophysics Data System (ADS)

    Dayeh, M. A.; Evans, N. D.; Fuselier, S. A.; Trevino, J.; Ramaekers, J.; Dwyer, J. R.; Lucia, R.; Rassoul, H. K.; Kotovsky, D. A.; Jordan, D. M.; Uman, M. A.

    2015-07-01

    An acoustic camera comprising a linear microphone array is used to image the thunder signature of triggered lightning. Measurements were taken at the International Center for Lightning Research and Testing in Camp Blanding, FL, during the summer of 2014. The array was positioned in an end-fire orientation, thus enabling the peak acoustic reception pattern to be steered vertically with a frequency-dependent spatial resolution. On 14 July 2014, a lightning event with nine return strokes was successfully triggered. We present the first acoustic images of individual return strokes at high frequencies (>1 kHz) and compare the acoustically inferred profile with optical images. We find (i) a strong correlation between the return stroke peak current and the radiated acoustic pressure and (ii) an acoustic signature from an M component current pulse with an unusually fast rise time. These results show that acoustic imaging enables clear identification and quantification of thunder sources as a function of lightning channel altitude.

  10. Acoustic Waves in Medical Imaging and Diagnostics

    PubMed Central

    Sarvazyan, Armen P.; Urban, Matthew W.; Greenleaf, James F.

    2013-01-01

    Up until about two decades ago, acoustic imaging and ultrasound imaging were synonymous. The term “ultrasonography,” or its abbreviated version “sonography,” meant an imaging modality based on the use of ultrasonic compressional bulk waves. Since the 1990s, numerous acoustic imaging modalities have started to emerge based on the use of a different mode of acoustic wave: shear waves. It was demonstrated that imaging with these waves can provide very useful and very different information about the biological tissue being examined. We will discuss the physical basis for the differences between these two basic modes of acoustic waves used in medical imaging and analyze the advantages associated with shear acoustic imaging. A comprehensive analysis of the range of acoustic wavelengths, velocities, and frequencies that have been used in different imaging applications will be presented. We will discuss the potential for future shear wave imaging applications. PMID:23643056

  11. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
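
    The centroid calculation at the core of this controller can be sketched as an intensity-weighted mean over pixel coordinates. This is a minimal illustration of the centroid step only, not the patented processing-board logic or its Bernoulli-trial noise-dithering refinement.

```python
import numpy as np

def beam_centroid(frame):
    """Intensity-weighted centroid (row, col) of a beam image."""
    frame = np.asarray(frame, dtype=float)
    total = frame.sum()
    rows, cols = np.indices(frame.shape)
    return (rows * frame).sum() / total, (cols * frame).sum() / total

# Synthetic Gaussian beam spot centred at row 12.0, column 20.0.
y, x = np.indices((32, 48))
spot = np.exp(-((y - 12.0) ** 2 + (x - 20.0) ** 2) / (2 * 3.0 ** 2))
print(beam_centroid(spot))  # ≈ (12.0, 20.0)
```

    The recovered coordinates would then drive the stepper-motor controllers that re-center the beam.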

  12. Acoustic-noise-optimized diffusion-weighted imaging.

    PubMed

    Ott, Martin; Blaimer, Martin; Grodzki, David M; Breuer, Felix A; Roesch, Julie; Dörfler, Arnd; Heismann, Björn; Jakob, Peter M

    2015-12-01

    This work was aimed at reducing acoustic noise in diffusion-weighted MR imaging (DWI) that might reach acoustic noise levels of over 100 dB(A) in clinical practice. A diffusion-weighted readout-segmented echo-planar imaging (EPI) sequence was optimized for acoustic noise by utilizing small readout segment widths to obtain low gradient slew rates and amplitudes instead of faster k-space coverage. In addition, all other gradients were optimized for low slew rates. Volunteer and patient imaging experiments were conducted to demonstrate the feasibility of the method. Acoustic noise measurements were performed and analyzed for four different DWI measurement protocols at 1.5T and 3T. An acoustic noise reduction of up to 20 dB(A) was achieved, which corresponds to a fourfold reduction in acoustic perception. The image quality was preserved at the level of a standard single-shot (ss)-EPI sequence, with a 27-54% increase in scan time. The diffusion-weighted imaging technique proposed in this study allowed a substantial reduction in the level of acoustic noise compared to standard single-shot diffusion-weighted EPI. This is expected to afford considerably more patient comfort, but a larger study would be necessary to fully characterize the subjective changes in patient experience.

  13. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special-purpose computing for Space Station applications. Potential functions for video image special-purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  14. Magneto-acoustic imaging by continuous-wave excitation.

    PubMed

    Shunqi, Zhang; Zhou, Xiaoqing; Tao, Yin; Zhipeng, Liu

    2017-04-01

    The electrical characteristics of tissue yield valuable information for early diagnosis of pathological changes. Magneto-acoustic imaging is a functional approach for imaging of electrical conductivity. This study proposes a continuous-wave magneto-acoustic imaging method. A kHz-range continuous signal with an amplitude range of several volts is used to excite the magneto-acoustic signal and improve the signal-to-noise ratio. The magneto-acoustic signal amplitude and phase are measured to locate the acoustic source via lock-in technology. An optimisation algorithm incorporating nonlinear equations is used to reconstruct the magneto-acoustic source distribution based on the measured amplitude and phase at various frequencies. Validation simulations and experiments were performed in pork samples. The experimental and simulation results agreed well. While the excitation current was reduced to 10 mA, the acoustic signal magnitude increased up to 10^-7 Pa. Experimental reconstruction of the pork tissue showed that the image resolution reached mm levels when the excitation signal was in the kHz range. The signal-to-noise ratio of the detected magneto-acoustic signal was improved by more than 25 dB at 5 kHz when compared to classical 1 MHz pulse excitation. The results reported here will aid further research into magneto-acoustic generation mechanisms and internal tissue conductivity imaging.

  15. Non-intrusive telemetry applications in the oilsands: from visible light and x-ray video to acoustic imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Shaw, John M.

    2013-06-01

    While the production, transport and refining of oils from the oilsands of Alberta, and comparable resources elsewhere, is performed at industrial scales, numerous technical and technological challenges and opportunities persist due to the ill-defined nature of the resource. For example, bitumen and heavy oil comprise multiple bulk phases and self-organizing constituents at the microscale (liquid crystals) and the nanoscale. There are no quantitative measures available at the molecular level. Non-intrusive telemetry is providing promising paths toward solutions, be they enabling technologies targeting process design, development or optimization, or more prosaic process control or process monitoring applications. Operational examples include automated detection of large objects and poor-quality ore during mining, and monitoring the thickness and location of oil-water interfacial zones within separation vessels. These applications involve real-time video image processing. X-ray transmission video imaging is used to enumerate the organic phases present within a vessel, and to detect individual phase volumes, densities and elemental compositions. This is an enabling technology that provides phase equilibrium and phase composition data for production and refining process development, and fluid property myth debunking. A high-resolution two-dimensional acoustic mapping technique now at the proof-of-concept stage is expected to provide simultaneous fluid flow and fluid composition data within porous inorganic media. Again, this is an enabling technology targeting visualization of diverse oil production process fundamentals at the pore scale. Far-infrared spectroscopy, coupled with detailed quantum mechanical calculations, may provide characteristic molecular motifs and intermolecular association data required for fluid characterization and process modeling. X-ray scattering (SAXS/WAXS/USAXS) provides characteristic supramolecular structure information that impacts fluid rheology and process

  16. Laser-induced acoustic imaging of underground objects

    NASA Astrophysics Data System (ADS)

    Li, Wen; DiMarzio, Charles A.; McKnight, Stephen W.; Sauermann, Gerhard O.; Miller, Eric L.

    1999-02-01

    This paper introduces a new demining technique based on the photo-acoustic interaction, together with results from photo- acoustic experiments. We have buried different types of targets (metal, rubber and plastic) in different media (sand, soil and water) and imaged them by measuring reflection of acoustic waves generated by irradiation with a CO2 laser. Research has been focused on the signal acquisition and signal processing. A deconvolution method using Wiener filters is utilized in data processing. Using a uniform spatial distribution of laser pulses at the ground's surface, we obtained 3D images of buried objects. The images give us a clear representation of the shapes of the underground objects. The quality of the images depends on the mismatch of acoustic impedance of the buried objects, the bandwidth and center frequency of the acoustic sensors and the selection of filter functions.
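
    The Wiener-filter deconvolution step mentioned above can be illustrated on a 1-D trace. The kernel and noise-power values below are arbitrary stand-ins, not the authors' measured sensor response.

```python
import numpy as np

def wiener_deconvolve(signal, kernel, noise_power=0.01):
    """Frequency-domain Wiener deconvolution of a 1-D trace."""
    n = len(signal)
    H = np.fft.fft(kernel, n)                      # zero-padded kernel spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)  # Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(signal) * G))

# Blur a pair of impulses (two "interfaces") with a short kernel, then recover them.
x = np.zeros(128)
x[30], x[60] = 1.0, 0.5
kernel = np.array([0.2, 0.5, 0.2])
blurred = np.convolve(x, kernel, mode="full")[:128]
restored = wiener_deconvolve(blurred, kernel, noise_power=1e-4)
print(np.argmax(restored))  # 30
```

    The `noise_power` term regularizes frequencies where the kernel spectrum is weak, which is what makes the filter usable on noisy field data.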

  17. Application of acoustic imaging techniques on snowmobile pass-by noise.

    PubMed

    Padois, Thomas; Berry, Alain

    2017-02-01

    Snowmobile manufacturers invest significant effort to reduce the noise emission of their products. The noise sources of snowmobiles are multiple and closely spaced, leading to difficult source separation in practice. In this study, source imaging results for snowmobile pass-by noise are discussed. The experiments involve a 193-microphone Underbrink array, with synchronization of acoustic with video data provided by a high-speed camera. Both conventional beamforming and Clean-SC deconvolution are implemented to provide noise source maps of the snowmobile. The results clearly reveal noise emission from the engine, exhaust, and track depending on the frequency range considered.
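
    Conventional (delay-and-sum) beamforming, the baseline technique named above, can be sketched for a simple linear array. Everything here is synthetic, not the paper's 193-microphone Underbrink configuration: a band-limited noise source is placed in front of an 8-microphone line, and the beamformer's output power peaks near the true source position.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_x, focus_x, focus_z, fs, c=343.0):
    """Mean output power of a delay-and-sum beamformer focused at
    (focus_x, focus_z), for a linear microphone array along the x axis."""
    dists = np.hypot(mic_x - focus_x, focus_z)
    delays = np.round((dists - dists.min()) * fs / c).astype(int)
    n = mic_signals.shape[1] - delays.max()
    aligned = np.stack([s[d:d + n] for s, d in zip(mic_signals, delays)])
    return float(np.mean(aligned.mean(axis=0) ** 2))

# Band-limited noise source at (0.5, 3.0) m in front of an 8-mic array.
fs, c = 48000, 343.0
mic_x = np.linspace(-0.5, 0.5, 8)
rng = np.random.default_rng(1)
src = np.convolve(rng.standard_normal(8000), np.ones(8) / 8, mode="same")
true_d = np.round(np.hypot(mic_x - 0.5, 3.0) * fs / c).astype(int)
sigs = np.stack([np.concatenate([np.zeros(d), src])[:6000] for d in true_d])

scan = np.linspace(-1.0, 1.0, 21)
powers = [delay_and_sum(sigs, mic_x, x, 3.0, fs) for x in scan]
print(scan[int(np.argmax(powers))])  # source localized near x = 0.5
```

    Clean-SC then deconvolves maps like this one to remove sidelobe contamination; that step is omitted here.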

  18. Acoustic imaging system

    DOEpatents

    Smith, Richard W.

    1979-01-01

    An acoustic imaging system for displaying an object viewed by a moving array of transducers as the array is pivoted about a fixed point within a given plane. A plurality of transducers are fixedly positioned and equally spaced within a laterally extending array and operatively directed to transmit and receive acoustic signals along substantially parallel transmission paths. The transducers are sequentially activated along the array to transmit and receive acoustic signals according to a preestablished sequence. Means are provided for generating output voltages for each reception of an acoustic signal, corresponding to the coordinate position of the object viewed as the array is pivoted. Receptions from each of the transducers are presented on the same display at coordinates corresponding to the actual position of the object viewed to form a plane view of the object scanned.

  19. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 X 80 mm. We have extended an existing information theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
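
    The mutual-information similarity measure driving the registration can be sketched from a joint intensity histogram. This is a generic estimator, not the authors' exact implementation; bin count and test images are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in bits) between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of image a
    py = p.sum(axis=0, keepdims=True)   # marginal of image b
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
related = 1.0 - img                 # intensity-remapped copy: high MI
unrelated = rng.random((64, 64))    # independent image: low MI
print(mutual_information(img, related) > mutual_information(img, unrelated))  # True
```

    Because MI rewards any consistent intensity mapping, it can compare a video frame against a rendering of the CT volume even though the two modalities have unrelated gray levels, which is why the optimization maximizes it over pose parameters.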

  20. Mutual conversion between B-mode image and acoustic impedance image

    NASA Astrophysics Data System (ADS)

    Chean, Tan Wei; Hozumi, Naohiro; Yoshida, Sachiko; Kobayashi, Kazuto; Ogura, Yuki

    2017-07-01

    To study the acoustic properties of a B-mode image, two analysis methods are proposed in this report. The first method is the conversion of an acoustic impedance image into a B-mode image (Z to B); the time-domain reflectometry theory and the transmission line model were used as references in the calculation. The second method is the direct conversion of a B-mode image into an acoustic impedance image (B to Z). The theoretical background of the second method is similar to that of the first; however, the calculation runs in the opposite direction. Significant scatter, refraction, and attenuation were assumed not to take place during the propagation of an ultrasonic wave, and hence were ignored in both calculations. In this study, rat cerebellar tissue and human cheek skin were used to determine the feasibility of the first and second methods, respectively. Good results were obtained, and both methods showed possible applications in the study of the acoustic properties of B-mode images.
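
    The normal-incidence reflection relation underlying both conversions, r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i), can be inverted layer by layer. This round-trip sketch ignores scattering, refraction, and attenuation, as the abstract assumes; the impedance values are illustrative soft-tissue numbers, not the authors' data.

```python
import numpy as np

def impedance_to_reflections(z):
    """Interface reflection coefficients from a depth profile of
    acoustic impedances (normal incidence, lossless)."""
    z = np.asarray(z, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def reflections_to_impedance(r, z0):
    """Invert the profile: Z_{i+1} = Z_i * (1 + r_i) / (1 - r_i)."""
    z = [z0]
    for ri in r:
        z.append(z[-1] * (1 + ri) / (1 - ri))
    return np.array(z)

z = np.array([1.5e6, 1.6e6, 1.4e6, 1.7e6])   # Rayl, illustrative tissue layers
r = impedance_to_reflections(z)              # "Z to B" direction
z_back = reflections_to_impedance(r, z[0])   # "B to Z" direction
print(np.allclose(z, z_back))  # True
```

    The B-to-Z direction needs the surface impedance Z_0 as a boundary condition, which is why the inversion proceeds from the transducer side inward.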

  1. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
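
    The real-time frame-averaging idea for zero-mean noise can be sketched with an incremental (running) mean, which needs only one frame of memory. The scene and noise levels below are synthetic assumptions, not the workbench's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.tile(np.linspace(0, 1, 64), (64, 1))            # static scene
frames = truth + 0.3 * rng.standard_normal((16, 64, 64))   # zero-mean noise

# Recursive frame average, as suited to real-time display: O(1) memory.
avg = np.zeros_like(truth)
for k, frame in enumerate(frames, start=1):
    avg += (frame - avg) / k          # incremental mean after k frames

rms_single = np.sqrt(np.mean((frames[0] - truth) ** 2))
rms_avg = np.sqrt(np.mean((avg - truth) ** 2))
print(rms_avg < rms_single / 2)  # True: ~4x noise reduction for 16 frames
```

    Averaging N frames reduces zero-mean noise by a factor of sqrt(N), at the cost of smearing anything that moves, which is why it suits static or slowly varying scenes.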

  2. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and the observation of high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
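
    The TCI measurement model that the reconstruction algorithms invert can be sketched directly: one coded measurement integrates T high-speed frames through per-frame binary masks. Dimensions here are illustrative, not the paper's 256×256 frames with 8×8 patches, and the reconstruction step (TwIST/GMM) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 32, 32
video = rng.random((T, H, W))              # unknown high-speed frames
masks = rng.integers(0, 2, (T, H, W))      # per-frame binary coded apertures

# Forward model: a single compressive frame is the mask-weighted sum over time.
measurement = (masks * video).sum(axis=0)

# Each measurement pixel is a temporal inner product with its mask pattern.
i, j = 5, 7
print(np.isclose(measurement[i, j], np.dot(masks[:, i, j], video[:, i, j])))  # True
```

    Because each pixel mixes T unknowns into one number, recovering the video is underdetermined, and the reconstruction relies on sparsity (TwIST) or learned patch statistics (GMM) as priors.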

  3. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass-storage management. This issue was addressed in the design of the digital image/video database system developed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding technique with its related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.

  4. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality, and it would be especially useful for tornado studies, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  5. Video-to-film color-image recorder.

    NASA Technical Reports Server (NTRS)

    Montuori, J. S.; Carnes, W. R.; Shim, I. H.

    1973-01-01

    A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.

  6. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications, improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  7. Optimization of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis was carried out to determine which images provide more information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system, while reducing the acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643

  8. Reflective echo tomographic imaging using acoustic beams

    DOEpatents

    Kisner, Roger; Santos-Villalobos, Hector J

    2014-11-25

    An inspection system includes a plurality of acoustic beamformers, each of which includes a plurality of acoustic transmitter elements. The system also includes at least one controller configured for causing each of the plurality of acoustic beamformers to generate an acoustic beam directed to a point in a volume of interest during a first time. Based on a reflected wave intensity detected at a plurality of acoustic receiver elements, an image of the volume of interest can be generated.
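
    The core operation in such a system, steering each beamformer's transmit beam to a point in the volume of interest, reduces to computing per-element time delays. The sketch below is illustrative only: the element layout, function names, and uniform sound speed in water are assumptions, not details from the patent.

    ```python
    import math

    # Illustrative sketch: per-element transmit delays that focus a beamformer
    # on a point in the volume of interest. Geometry and names are assumptions
    # for demonstration, not taken from the patent.
    SPEED_OF_SOUND_WATER = 1482.0  # m/s, near room temperature

    def focus_delays(elements, focal_point, c=SPEED_OF_SOUND_WATER):
        """Return per-element firing delays (s) so all wavefronts arrive in phase."""
        dists = [math.dist(e, focal_point) for e in elements]
        far = max(dists)
        # The farthest element fires first (delay 0); nearer elements wait.
        return [(far - d) / c for d in dists]

    # Example: a 4-element linear array focused 0.1 m in front of its center.
    array = [(x * 0.01, 0.0, 0.0) for x in range(4)]
    delays = focus_delays(array, (0.015, 0.0, 0.1))
    ```

    With the focal point on the array's axis of symmetry, the two outer elements receive zero delay and the two inner elements a small positive one.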

  9. Femtosecond imaging of nonlinear acoustics in gold.

    PubMed

    Pezeril, Thomas; Klieber, Christoph; Shalagatskyi, Viktor; Vaudel, Gwenaelle; Temnov, Vasily; Schmidt, Oliver G; Makarov, Denys

    2014-02-24

    We have developed a high-sensitivity, low-noise femtosecond imaging technique based on pump-probe time-resolved measurements with a standard CCD camera. The approach used in the experiment is based on lock-in acquisitions of images generated by a femtosecond laser probe synchronized to modulation of a femtosecond laser pump at the same rate. This technique allows time-resolved imaging of laser-excited phenomena with femtosecond time resolution. We illustrate the technique by time-resolved imaging of the nonlinear reshaping of a laser-excited picosecond acoustic pulse after propagation through a thin gold layer. Image analysis reveals the direct 2D visualization of the nonlinear acoustic propagation of the picosecond acoustic pulse. Many ultrafast pump-probe investigations can profit from this technique because of the wealth of information it provides over a typical single diode and lock-in amplifier setup; for example, it can be used to image ultrasonic echoes in biological samples.

  10. Video library for video imaging detection at intersection stop lines.

    DOT National Transportation Integrated Search

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  11. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.

  12. Combined photoacoustic and magneto-acoustic imaging.

    PubMed

    Qu, Min; Mallidi, Srivalleesha; Mehrmohammadi, Mohammad; Ma, Li Leo; Johnston, Keith P; Sokolov, Konstantin; Emelianov, Stanislav

    2009-01-01

    Ultrasound is a widely used modality with excellent spatial resolution, low cost, portability, reliability and safety. In clinical practice and in the biomedical field, molecular ultrasound-based imaging techniques are desired to visualize tissue pathologies, such as cancer. In this paper, we present an advanced imaging technique - combined photoacoustic and magneto-acoustic imaging - capable of visualizing the anatomical, functional and biomechanical properties of tissues or organs. The experiments to test the combined imaging technique were performed using dual, nanoparticle-based contrast agents that exhibit the desired optical and magnetic properties. The results of our study demonstrate the feasibility of combined photoacoustic and magneto-acoustic imaging, which takes advantage of each imaging technique and provides high sensitivity, reliable contrast and good penetration depth. Therefore, the developed imaging technique can be used in a wide range of biomedical and clinical applications.

  13. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  14. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  15. Synthetic Aperture Acoustic Imaging of Non-Metallic Cords

    DTIC Science & Technology

    2012-04-01

    Washington Headquarters Services , Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302...collected with a research prototype synthetic aperture acoustic ( SAA ) imaging system. SAA imaging is an emerging technique that can serve as an...inexpensive alternative or logical complement to synthetic aperture radar (SAR). The SAA imaging system uses an acoustic transceiver (speaker and

  16. Imaging of acoustic fields using optical feedback interferometry.

    PubMed

    Bertling, Karl; Perchoux, Julien; Taimre, Thomas; Malkin, Robert; Robert, Daniel; Rakić, Aleksandar D; Bosch, Thierry

    2014-12-01

    This study introduces optical feedback interferometry as a simple and effective technique for the two-dimensional visualisation of acoustic fields. We present imaging results for several pressure distributions including those for progressive waves, standing waves, as well as the diffraction and interference patterns of the acoustic waves. The proposed solution has the distinct advantage of extreme optical simplicity and robustness, thus opening the way to a low cost acoustic field imaging system based on mass produced laser diodes.

  17. Reproducibility of dynamically represented acoustic lung images from healthy individuals

    PubMed Central

    Maher, T M; Gat, M; Allen, D; Devaraj, A; Wells, A U; Geddes, D M

    2008-01-01

    Background and aim: Acoustic lung imaging offers a unique method for visualising the lung. This study was designed to demonstrate reproducibility of acoustic lung images recorded from healthy individuals at different time points and to assess intra- and inter-rater agreement in the assessment of dynamically represented acoustic lung images. Methods: Recordings from 29 healthy volunteers were made on three separate occasions using vibration response imaging. Reproducibility was measured using quantitative, computerised assessment of vibration energy. Dynamically represented acoustic lung images were scored by six blinded raters. Results: Quantitative measurement of acoustic recordings was highly reproducible with an intraclass correlation score of 0.86 (very good agreement). Intraclass correlations for inter-rater agreement and reproducibility were 0.61 (good agreement) and 0.86 (very good agreement), respectively. There was no significant difference found between the six raters at any time point. Raters ranged from 88% to 95% in their ability to identically evaluate the different features of the same image presented to them blinded on two separate occasions. Conclusion: Acoustic lung imaging is reproducible in healthy individuals. Graphic representation of lung images can be interpreted with a high degree of accuracy by the same and by different reviewers. PMID:18024534

  18. Ultrasound Imaging System Video

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimensional image enlargement of the heart and other organs, muscles, and blood vessels. It is capable of high resolution imaging in a wide range of applications, both research and diagnostic, such as echocardiography (ultrasound of the heart), abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.

  19. Acoustic Radiation Force Elasticity Imaging in Diagnostic Ultrasound

    PubMed Central

    Doherty, Joshua R.; Trahey, Gregg E.; Nightingale, Kathryn R.; Palmeri, Mark L.

    2013-01-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological condition and with the ability to image these features in vivo; elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed non-invasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force, and an overview of the various acoustic radiation force elasticity imaging methods. PMID:23549529

  20. Acoustic radiation force elasticity imaging in diagnostic ultrasound.

    PubMed

    Doherty, Joshua R; Trahey, Gregg E; Nightingale, Kathryn R; Palmeri, Mark L

    2013-04-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological condition and with the ability to image these features in vivo; elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed noninvasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force, and an overview of the various acoustic radiation force elasticity imaging methods.

  1. Correlation Time of Ocean Ambient Noise Intensity in San Diego Bay and Target Recognition in Acoustic Daylight Images

    NASA Astrophysics Data System (ADS)

    Wadsworth, Adam J.

    A method for passively detecting and imaging underwater targets using ambient noise as the sole source of illumination (named acoustic daylight) was successfully implemented in the form of the Acoustic Daylight Ocean Noise Imaging System (ADONIS). In a series of imaging experiments conducted in San Diego Bay, where the dominant source of high-frequency ambient noise is snapping shrimp, a large quantity of ambient noise intensity data was collected with the ADONIS (Epifanio, 1997). In a subset of the experimental data sets, fluctuations of time-averaged ambient noise intensity exhibited a diurnal pattern consistent with the increase in frequency of shrimp snapping near dawn and dusk. The same subset of experimental data is revisited here and the correlation time is estimated and analysed for sequences of ambient noise data several minutes in length, with the aim of detecting possible periodicities or other trends in the fluctuation of the shrimp-dominated ambient noise field. Using videos formed from sequences of acoustic daylight images along with other experimental information, candidate segments of static-configuration ADONIS raw ambient noise data were isolated. For each segment, the normalized intensity auto-correlation closely resembled the delta function, the auto-correlation of white noise. No intensity fluctuation patterns at timescales smaller than a few minutes were discernible, suggesting that the shrimp do not communicate, synchronise, or exhibit any periodicities in their snapping. Also presented here is an ADONIS-specific target recognition algorithm based on principal component analysis, along with basic experimental results using a database of acoustic daylight images.
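
    The delta-function behaviour reported above is straightforward to reproduce on synthetic data. A hedged sketch (illustrative code, not the author's analysis pipeline): the normalized auto-correlation of a white-noise intensity record is 1 at zero lag and stays near zero at all other lags.

    ```python
    import numpy as np

    # Normalized auto-correlation of an intensity time series. For white
    # (uncorrelated) fluctuations the result approximates a delta function.
    def normalized_autocorrelation(intensity):
        x = np.asarray(intensity, dtype=float)
        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[x.size - 1:]  # lags 0..N-1
        return acf / acf[0]

    rng = np.random.default_rng(0)
    noise = rng.normal(size=4096)   # stand-in for shrimp-dominated intensity data
    acf = normalized_autocorrelation(noise)
    # acf[0] is exactly 1; positive lags fluctuate within a few 1/sqrt(N) of zero.
    ```

    A correlated signal (e.g. periodic snapping) would instead show secondary peaks at nonzero lags, which is exactly what the analysis above looked for and did not find.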

  2. Vibro-acoustic Imaging at the Breazeale Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, James Arthur; Jewell, James Keith; Lee, James Edwin

    2016-09-01

    The INL is developing vibro-acoustic imaging technology to characterize microstructure in fuels and materials in spent fuel pools and within reactor vessels. A vibro-acoustic development laboratory has been established at the INL. The progress in developing the vibro-acoustic technology at the INL is the focus of this report. A successful technology demonstration was performed in a working TRIGA research reactor. Vibro-acoustic imaging was performed in the reactor pool of the Breazeale reactor in late September of 2015. A confocal transducer driven at a nominal 3 MHz was used to collect the 60 kHz differential beat frequency induced in a spent TRIGA fuel rod and an empty gamma tube located in the main reactor water pool. Data were collected and analyzed with the INLDAS data acquisition software using a short time Fourier transform.

  3. Self-Image--Alien Image: A Bilateral Video Project.

    ERIC Educational Resources Information Center

    Kracsay, Susanne

    1995-01-01

    Describes a project in which Austrian and Hungarian students learned how people see each other by creating video pictures and letters of their neighbors (alien images) that were returned with corrections (self-images). Discussion includes student critiques, impressions, and misconceptions. (AEF)

  4. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  5. Method and apparatus for acoustic imaging of objects in water

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2005-01-25

    A method, system and underwater camera for acoustic imaging of objects in water or other liquids includes an acoustic source for generating an acoustic wavefront for reflecting from a target object as a reflected wavefront. The reflected acoustic wavefront deforms a screen on an acoustic side and correspondingly deforms the opposing optical side of the screen. An optical processing system is optically coupled to the optical side of the screen and converts the deformations on the optical side of the screen into an optical intensity image of the target object.

  6. Reconstruction of an acoustic pressure field in a resonance tube by particle image velocimetry.

    PubMed

    Kuzuu, K; Hasegawa, S

    2015-11-01

    A technique for estimating an acoustic field in a resonance tube is suggested. The estimation of an acoustic field in a resonance tube is important for the development of the thermoacoustic engine, and can be conducted employing two sensors to measure pressure. While this measurement technique is known as the two-sensor method, care needs to be taken with the location of the pressure sensors when conducting pressure measurements. In the present study, particle image velocimetry (PIV) is employed instead of a pressure measurement by a sensor, and two-dimensional velocity vector images are extracted as sequential data from only a one-time recording made by the PIV video camera. The spatial velocity amplitude is obtained from those images, and a pressure distribution is calculated from velocity amplitudes at two points by extending the equations derived for the two-sensor method. By means of this method, problems relating to the locations and calibrations of multiple pressure sensors are avoided. Furthermore, to verify the accuracy of the present method, experiments are conducted employing the conventional two-sensor method and laser Doppler velocimetry (LDV). Then, results obtained by the proposed method are compared with those obtained with the two-sensor method and LDV.
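
    Under a one-dimensional plane-wave assumption, the idea described above (pressure from velocity amplitudes at two axial positions) amounts to solving a 2x2 linear system for the forward and backward wave amplitudes. The following is a hedged sketch of that idea, not the paper's exact formulation; the fluid properties and positions are illustrative.

    ```python
    import numpy as np

    # Illustrative sketch: recover the complex pressure field
    #   p(x) = A exp(-ikx) + B exp(ikx)
    # in a resonance tube from complex velocity amplitudes at two axial
    # positions, using u(x) = (A exp(-ikx) - B exp(ikx)) / (rho * c).
    def pressure_from_two_velocities(u1, u2, x1, x2, freq, rho=1.2, c=343.0):
        k = 2 * np.pi * freq / c
        M = np.array([[np.exp(-1j*k*x1), -np.exp(1j*k*x1)],
                      [np.exp(-1j*k*x2), -np.exp(1j*k*x2)]]) / (rho * c)
        A, B = np.linalg.solve(M, np.array([u1, u2]))
        return lambda x: A * np.exp(-1j*k*x) + B * np.exp(1j*k*x)

    # Round-trip check: synthesize a field, sample velocities, reconstruct.
    rho, c, f = 1.2, 343.0, 500.0
    k = 2 * np.pi * f / c
    A_true, B_true = 1.0 + 0.5j, 0.3 - 0.2j
    u = lambda x: (A_true*np.exp(-1j*k*x) - B_true*np.exp(1j*k*x)) / (rho*c)
    p = pressure_from_two_velocities(u(0.05), u(0.30), 0.05, 0.30, f)
    # p(x) now reproduces A_true*exp(-ikx) + B_true*exp(ikx) at any x.
    ```

    The system becomes singular when k(x2 - x1) is a multiple of pi, which mirrors the caution in the abstract about sensor placement.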

  7. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  8. Acoustical imaging of high-frequency elastic responses of targets

    NASA Astrophysics Data System (ADS)

    Morse, Scot F.; Hefner, Brian T.; Marston, Philip L.

    2002-05-01

    Acoustical imaging was used to investigate high-frequency elastic responses to sound of two targets in water. The backscattering of broadband bipolar acoustic pulses by a truncated cylindrical shell was recorded over a wide range of tilt angles [S. F. Morse and P. L. Marston, ``Backscattering of transients by tilted truncated cylindrical shells: time-frequency identification of ray contributions from measurements,'' J. Acoust. Soc. Am. (in press)]. This data set was used to form synthetic aperture images of the target based on the data within different angular apertures. Over a range of viewing angles, the visibility of the cylinder's closest rear corner was significantly enhanced by the meridional flexural wave contribution to the backscattering. In another experiment, the time evolution of acoustic holographic images was used to explore the response of tilted elastic circular disks to tone bursts having frequencies of 250 and 300 kHz. For different tilt angles, specific responses that enhance the backscattering were identified from the time evolution of the images [B. T. Hefner and P. L. Marston, Acoust. Res. Lett. Online 2, 55-60 (2001)]. [Work supported by ONR.]

  9. Synchrotron x-ray imaging of acoustic cavitation bubbles induced by acoustic excitation

    NASA Astrophysics Data System (ADS)

    Jung, Sung Yong; Park, Han Wook; Park, Sung Ho; Lee, Sang Joon

    2017-04-01

    The cavitation induced by acoustic excitation has been widely applied in various biomedical applications because cavitation bubbles can enhance the exchanges of mass and energy. In order to minimize the hazardous effects of the induced cavitation, it is essential to understand the spatial distribution of cavitation bubbles. The spatial distribution of cavitation bubbles visualized by the synchrotron x-ray imaging technique is compared to that obtained with a conventional x-ray tube. Cavitation bubbles with high density in the region close to the tip of the probe are visualized using the synchrotron x-ray imaging technique; however, the spatial distribution of cavitation bubbles in the whole ultrasound field is not detected. In this study, the effects of the ultrasound power of acoustic excitation and working medium on the shape and density of the induced cavitation bubbles are examined. As a result, the synchrotron x-ray imaging technique is useful for visualizing spatial distributions of cavitation bubbles, and it could be used for optimizing the operation conditions of acoustic cavitation.

  10. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
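
    The line pattern for the slew-rate test described above is easy to regenerate when checking a capture chain. A sketch under stated assumptions: the abstract gives only the strip and line widths, so the orientation, image size, and gray levels here are guesses.

    ```python
    import numpy as np

    # Hedged reconstruction of the slew-rate test pattern: ten-pixel black and
    # white equilibration strips followed by alternating one-pixel lines.
    def slew_rate_pattern(width=64, height=32, black=0, white=255):
        row = np.empty(width, dtype=np.uint8)
        row[:10] = black                   # 10-px equilibration strip
        row[10:20] = white                 # 10-px equilibration strip
        rest = np.arange(width - 20)
        row[20:] = np.where(rest % 2 == 0, black, white)  # 1-px line pairs
        return np.tile(row, (height, 1))   # same pattern on every scan line

    img = slew_rate_pattern()
    ```

    A capture system with adequate slew rate preserves the full 0/255 alternation in the one-pixel lines; a slow one averages them toward mid-gray, exactly the failure mode the abstract describes.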

  11. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on short time scales, i.e., observations taken at time separations from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to a larger scene coverage.
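
    The change-mask step described above can be sketched compactly. This is an illustrative reconstruction under assumptions the abstract leaves open: the gradient operator, the exact adaptive-threshold rule, and the weights of the linear combination are all choices made here for demonstration.

    ```python
    import numpy as np

    # Sketch: change mask from a linear combination of intensity and
    # gradient-magnitude difference images, thresholded adaptively against
    # the score image's own statistics.
    def change_mask(img_a, img_b, w_int=1.0, w_grad=1.0, k=3.0):
        a = img_a.astype(float)
        b = img_b.astype(float)
        d_int = np.abs(a - b)
        ga = np.hypot(*np.gradient(a))     # gradient magnitude, mosaic A
        gb = np.hypot(*np.gradient(b))     # gradient magnitude, mosaic B
        d_grad = np.abs(ga - gb)
        score = w_int * d_int + w_grad * d_grad
        thresh = score.mean() + k * score.std()   # simple adaptive threshold
        return score > thresh

    # Toy example: identical frames except one "parked vehicle" block.
    base = np.zeros((40, 40))
    changed = base.copy()
    changed[10:15, 10:15] = 200.0
    mask = change_mask(base, changed)
    # mask is True only in and around the changed block.
    ```

    Distinguishing relevant from non-relevant changes (shadows, disparity, compression artifacts) would require further filtering on top of this raw mask, as the paper discusses.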

  12. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.

  13. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
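
    The vector-quantization half of the comparison above can be illustrated with a plain k-means codebook standing in for the self-organizing network. This is an assumption for demonstration; the paper's network architecture and training rule are not reproduced here.

    ```python
    import numpy as np

    # Minimal vector-quantization sketch: a codebook learned by k-means;
    # each image block is coded as the index of its nearest codeword.
    def train_codebook(blocks, n_codes=8, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        codes = blocks[rng.choice(len(blocks), n_codes, replace=False)].astype(float)
        for _ in range(iters):
            # Assign each block to its nearest codeword (Euclidean distance).
            d = ((blocks[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2)
            assign = d.argmin(axis=1)
            for i in range(n_codes):
                members = blocks[assign == i]
                if len(members):
                    codes[i] = members.mean(axis=0)  # recenter codeword
        return codes, assign

    rng = np.random.default_rng(1)
    blocks = rng.normal(size=(256, 16))        # 256 4x4 image blocks, flattened
    codes, idx = train_codebook(blocks)
    # Each 16-value block is now represented by a 3-bit index (8 codewords).
    ```

    The compression ratio comes from sending only the indices plus the codebook; the neural and conventional VQ approaches compared in the paper differ mainly in how this codebook is learned.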

  14. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech, these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I., and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next generation solid state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element), and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures, and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 µm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode

  15. Video Imaging System Particularly Suited for Dynamic Gear Inspection

    NASA Technical Reports Server (NTRS)

    Broughton, Howard (Inventor)

    1999-01-01

    A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals, so that the moment each tooth of interest reaches a desired location is precisely determined; at that location the tooth is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available to provide instantaneous analysis of the tooth of interest, or can be stored to provide a history that may be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.
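
    The timing scheme described in the patent, dividing one measured revolution into equal per-tooth intervals, can be sketched in a few lines. The uniform-speed assumption and the names below are illustrative, not from the patent.

    ```python
    # Sketch: divide one measured gear revolution into equal intervals so the
    # strobe and camera fire once per tooth at the imaging position.
    def tooth_trigger_times(revolution_period_s, n_teeth, t0=0.0):
        """Times (s) at which successive teeth reach the imaging position."""
        dt = revolution_period_s / n_teeth
        return [t0 + i * dt for i in range(n_teeth)]

    # A 28-tooth gear turning at 600 rpm (0.1 s per revolution):
    times = tooth_trigger_times(0.1, 28)
    # Consecutive flashes are 0.1 / 28 s apart, one per tooth.
    ```

    In practice the revolution period would be re-measured each turn (e.g. from an index pulse) so the triggers track speed drift.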

  16. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  17. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  18. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  19. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  20. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  1. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  2. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
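The pre-trigger/post-trigger storage technique described above can be illustrated with a software analogue. This is our sketch of the general principle, not the NASA VLSI design: a ring buffer retains recent frames so that, when the trigger fires, frames from before and after the event can both be archived.

```python
from collections import deque

# Illustrative sketch (not the NASA hardware): pre-/post-trigger capture.
# A bounded ring buffer holds the most recent frames; on a trigger, the
# buffered pre-trigger frames plus a fixed number of post-trigger frames
# are archived, so only the significant portion of the stream is stored.
def capture_event(frames, trigger_fn, pre=4, post=4):
    ring = deque(maxlen=pre)          # pre-trigger history
    it = iter(frames)
    for frame in it:
        if trigger_fn(frame):
            archived = list(ring) + [frame]
            for _ in range(post):     # collect post-trigger frames
                try:
                    archived.append(next(it))
                except StopIteration:
                    break
            return archived
        ring.append(frame)
    return []

# Example: trigger on a sudden brightness jump in a scalar "frame" stream
stream = [10, 11, 10, 12, 11, 90, 91, 92, 90, 89, 10, 11]
event = capture_event(stream, lambda f: f > 50, pre=3, post=2)
```

The hardware version performs the same buffering in high-capacity RAM at full video rate, with the trigger derived from the fuzzy-logic change detector rather than a simple threshold.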

  3. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  4. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion, and processing of aerial imagery in order to leverage full-motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight-planning software. During the flight, the UAV sends a live video stream directly to the field to be processed by Intergraph software, which generates and disseminates georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  5. Quantifying cell mono-layer cultures by video imaging.

    PubMed

    Miller, K S; Hook, L A

    1996-04-01

    A method is described in which the relative number of adherent cells in multi-well tissue-culture plates is assayed by staining the cells with Giemsa and capturing the image of the stained cells with a video camera and charge-coupled device. The resultant image is quantified using the associated video imaging software. The method is shown to be sensitive and reproducible and should be useful for studies where quantifying relative cell numbers and/or proliferation in vitro is required.
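The quantification step can be sketched as integrating stain density over each well's image. This is our simplified analogue, not the commercial software's algorithm; the function name and background threshold are assumptions.

```python
# Hypothetical sketch of stain-based quantification: relative cell number
# per well estimated as the integrated Giemsa signal (how far below a
# background level each pixel falls).  Pixels are 0-255 grayscale, with
# darker pixels meaning more stain and hence more adherent cells.
def stain_signal(pixels, background=200):
    """Sum of stain density over pixels darker than the background level."""
    return sum(background - p for p in pixels if p < background)

well_a = [250, 240, 120, 110, 245]   # sparsely stained well
well_b = [100, 90, 110, 95, 240]     # densely stained well
relative = stain_signal(well_b) / stain_signal(well_a)
```

A real assay would work on full 2-D well images and calibrate the signal against wells seeded with known cell numbers.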

  6. Improving stop line detection using video imaging detectors.

    DOT National Transportation Integrated Search

    2010-11-01

    The Texas Department of Transportation and other state departments of transportation, as well as cities nationwide, are using video detection successfully at signalized intersections. However, operational issues with video imaging vehicle detection...

  7. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.

  8. Interpreting Underwater Acoustic Images of the Upper Ocean Boundary Layer

    ERIC Educational Resources Information Center

    Ulloa, Marco J.

    2007-01-01

    A challenging task in physical studies of the upper ocean using underwater sound is the interpretation of high-resolution acoustic images. This paper covers a number of basic concepts necessary for undergraduate and postgraduate students to identify the most distinctive features of the images, providing a link with the acoustic signatures of…

  9. USB video image controller used in CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Wenxuan; Wang, Yuxia; Fan, Hong

    2002-09-01

    The CMOS process is a mainstream technique in VLSI and offers high integration. The SE402 is a multifunction microcontroller which integrates image data I/O ports, clock control, exposure control, and digital signal processing into one chip, reducing the number of chips and the PCB area required. The paper focuses on a USB video image controller used with a CMOS image sensor and gives an application in a digital still camera.

  10. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    PubMed

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
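The synchronization principle described above can be sketched with a cross-correlation search. The details below are our assumptions, not the authors' exact implementation: the same pseudo-random reference signal is embedded in every recording, and correlating each stream against the reference recovers that stream's offset, so streams can be aligned without dedicated synchronization inputs.

```python
import random

# Sketch of random-signal synchronization: find where the embedded
# reference code starts in each recorded stream, then difference the
# offsets to get the relative delay between streams.
def best_offset(stream, reference):
    """Offset at which `reference` correlates most strongly with `stream`."""
    n, m = len(stream), len(reference)
    scores = [
        sum(stream[k + i] * reference[i] for i in range(m))
        for k in range(n - m + 1)
    ]
    return scores.index(max(scores))

rng = random.Random(0)
ref = [rng.choice([-1.0, 1.0]) for _ in range(64)]   # embedded random code
audio = [0.0] * 37 + ref + [0.0] * 20                # code starts at sample 37
video = [0.0] * 12 + ref + [0.0] * 45                # code starts at sample 12
lag = best_offset(audio, ref) - best_offset(video, ref)  # relative delay = 25
```

In practice the recordings also contain signal and noise on top of the code, but a sufficiently long pseudo-random sequence keeps the correlation peak at the true alignment.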

  11. Development of novel imaging probe for optical/acoustic radiation imaging (OARI).

    PubMed

    Ejofodomi, O'tega A; Zderic, Vesna; Zara, Jason M

    2013-11-01

    Optical/acoustic radiation imaging (OARI) is a novel imaging modality being developed to interrogate the optical and mechanical properties of soft tissues. OARI uses acoustic radiation force to generate displacement in soft tissue. Optical images before and after the application of the force are used to generate displacement maps that provide information about the mechanical properties of the tissue under interrogation. Since the images are optical images, they also represent the optical properties of the tissue as well. In this paper, the authors present the first imaging probe that uses acoustic radiation force in conjunction with optical coherence tomography (OCT) to provide information about the optical and mechanical properties of tissues to assist in the diagnosis and staging of epithelial cancers, and in particular bladder cancer. The OARI prototype probe consisted of an OCT probe encased in a plastic sheath, a miniaturized transducer glued to a plastic holder, both of which were encased in a 10 cm stainless steel tube with an inner diameter of 10 mm. The transducer delivered an acoustic intensity of 18 W/cm² and the OCT probe had a spatial resolution of approximately 10-20 μm. The tube was filled with deionized water for acoustic coupling and covered by a low density polyethylene cap. The OARI probe was characterized and tested on bladder wall phantoms. The phantoms possessed Young's moduli ranging from 10.2 to 12 kPa, mass density of 1.05 g/cm³, acoustic attenuation coefficient of 0.66 dB/cm MHz, speed of sound of 1591 m/s, and optical scattering coefficient of 1.80 mm⁻¹. Finite element model (FEM) theoretical simulations were performed to assess the performance of the OARI probe. The authors obtained displacements of 9.4, 8.7, and 3.4 μm for the 3%, 4%, and 5% bladder wall phantoms, respectively. This shows that the probe is capable of generating optical images, and also has the ability to generate and track displacements in tissue. This will

  12. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
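The calibrated-region analysis step can be sketched as follows. This is our simplified analogue of the patent's pipeline, not the patented implementation; the function name and darkest-pixel needle heuristic are assumptions.

```python
# Illustrative sketch: once a region of the digitized frame has been
# calibrated to a meter's scale, locate the indicator (needle) within it
# and map its position linearly onto the meter's value range.
def read_meter(region, min_value, max_value):
    """`region` is a 1-D strip of pixel intensities covering the scale;
    the darkest pixel is taken to be the needle position."""
    needle = min(range(len(region)), key=lambda i: region[i])
    fraction = needle / (len(region) - 1)
    return min_value + fraction * (max_value - min_value)

# Example: 11-pixel strip, needle (dark pixel) at index 5 of a 0-100 scale
strip = [230, 228, 231, 229, 230, 40, 229, 231, 230, 228, 232]
value = read_meter(strip, 0.0, 100.0)   # -> 50.0
```

A real deployment would handle 2-D regions, rotary scales, lighting variation, and digital-display meters, but the calibrate-then-map structure is the same.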

  13. Acoustic and optical borehole-wall imaging for fractured-rock aquifer studies

    USGS Publications Warehouse

    Williams, J.H.; Johnson, C.D.

    2004-01-01

    Imaging with acoustic and optical televiewers results in continuous and oriented 360° views of the borehole wall from which the character, relation, and orientation of lithologic and structural planar features can be defined for studies of fractured-rock aquifers. Fractures are more clearly defined under a wider range of conditions on acoustic images than on optical images, including in dark-colored rocks, cloudy borehole water, and on coated borehole walls. However, optical images allow for the direct viewing of the character of and relation between lithology, fractures, foliation, and bedding. The most powerful approach is the combined application of acoustic and optical imaging with integrated interpretation. Imaging of the borehole wall provides information useful for the collection and interpretation of flowmeter and other geophysical logs, core samples, and hydraulic and water-quality data from packer testing and monitoring. © 2003 Elsevier B.V. All rights reserved.

  14. Does Instructor's Image Size in Video Lectures Affect Learning Outcomes?

    ERIC Educational Resources Information Center

    Pi, Z.; Hong, J.; Yang, J.

    2017-01-01

    One of the most commonly used forms of video lectures is a combination of an instructor's image and accompanying lecture slides as a picture-in-picture. As the image size of the instructor varies significantly across video lectures, and so do the learning outcomes associated with this technology, the influence of the instructor's image size should…

  15. Characterizing Response to Elemental Unit of Acoustic Imaging Noise: An fMRI Study

    PubMed Central

    Luh, Wen-Ming; Talavage, Thomas M.

    2010-01-01

    Acoustic imaging noise produced during functional magnetic resonance imaging (fMRI) studies can hinder auditory fMRI research analysis by altering the properties of the acquired time-series data. Acoustic imaging noise can be especially confounding when estimating the time course of the hemodynamic response (HDR) in auditory event-related fMRI experiments. This study is motivated by the desire to establish a baseline function that can serve not only as a comparison to other quantities of acoustic imaging noise for determining how detrimental one's experimental noise is, but also as a foundation for a model that compensates for the response to acoustic imaging noise. Therefore, the amplitude and spatial extent of the HDR to the elemental unit of acoustic imaging noise (i.e., a single ping) associated with echoplanar acquisition were characterized and modeled. Results from this fMRI study at 1.5 T indicate that the group-averaged HDR in left and right auditory cortex to acoustic imaging noise (duration of 46 ms) has an estimated peak magnitude of 0.29% (right) to 0.48% (left) signal change from baseline, peaks between 3 and 5 s after stimulus presentation, and returns to baseline and remains within the noise range approximately 8 s after stimulus presentation. PMID:19304477

  16. Imaging and detection of mines from acoustic measurements

    NASA Astrophysics Data System (ADS)

    Witten, Alan J.; DiMarzio, Charles A.; Li, Wen; McKnight, Stephen W.

    1999-08-01

    A laboratory-scale acoustic experiment is described in which a target, a hockey puck cut in half, is shallowly buried in a sand box. To avoid the need for source and receiver coupling to the host sand, an acoustic wave is generated in the subsurface by a pulsed laser suspended above the air-sand interface. Similarly, an airborne microphone is suspended above this interface and moved in unison with the laser. After some pre-processing of the data, reflections from the target, although weak, could clearly be identified. While the existence and location of the target can be determined by inspection of the data, its unique shape cannot. Since target discrimination is important in mine detection, a 3D imaging algorithm was applied to the acquired acoustic data. This algorithm yielded a reconstructed image in which the shape of the target was resolved.

  17. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  18. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1997-09-30

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower. 1 fig.

  19. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  20. Combination of acoustical radiosity and the image source method.

    PubMed

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho; Jacobsen, Finn

    2013-06-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part. The model is based on conservation of acoustical energy. Losses are taken into account by the energy absorption coefficient, and the diffuse reflections are controlled via the scattering coefficient, which defines the portion of energy that has been diffusely reflected. The way the model is formulated allows for a dynamic control of the image source production, so that no fixed maximum reflection order is required. The model is optimized for energy impulse response predictions in arbitrary polyhedral rooms. The predictions are validated by comparison with published measured data for a real music studio hall. The proposed model turns out to be promising for acoustic predictions providing a high level of detail and accuracy.
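The specular half of the combined model can be sketched with first-order image sources. This is a minimal illustration under our own simplifying assumptions (2-D rectangular room, first-order reflections only), not the authors' full arbitrary-polyhedron formulation; note how the scattering coefficient s removes the diffusely reflected fraction from the specular energy, which the radiosity part would then carry.

```python
import math

# Minimal image-source sketch: mirror the source across each wall of a
# 2-D rectangular room to get first-order image sources, then compute the
# (delay, energy) of each specular arrival at the receiver.  Absorption
# is handled by (1 - alpha); the scattering coefficient s diverts a
# fraction of the reflected energy away from the specular path.
def first_order_arrivals(src, rcv, room, alpha=0.1, s=0.3, c=343.0):
    (sx, sy), (lx, ly) = src, room
    images = [(-sx, sy), (2 * lx - sx, sy),    # mirrored across x = 0, x = lx
              (sx, -sy), (sx, 2 * ly - sy)]    # mirrored across y = 0, y = ly
    arrivals = []
    for ix, iy in images:
        d = math.hypot(ix - rcv[0], iy - rcv[1])
        energy = (1.0 - alpha) * (1.0 - s) / d**2   # specular energy only
        arrivals.append((d / c, energy))            # (delay in s, energy)
    return sorted(arrivals)

taps = first_order_arrivals(src=(2.0, 3.0), rcv=(6.0, 3.0), room=(8.0, 5.0))
```

The full model recurses on image sources for higher orders, pruning dynamically instead of fixing a maximum reflection order, and feeds the diffusely scattered energy into the radiosity exchange.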

  1. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigation utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as of reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of the image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation of scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. High throughput imaging cytometer with acoustic focussing.

    PubMed

    Zmijan, Robert; Jonnalagadda, Umesh S; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn; Glynne-Jones, Peter

    2015-10-31

    We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint.

  3. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. Content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are a set of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environment change. Detecting lakes above ice is hampered by diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, which randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the varied obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, the first solution that achieves satisfactory

  4. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we mainly present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleedings and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time. The burden on physicians' disease-finding efforts is thus considerable. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range imagery of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail to work. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, will be presented. In addition, non-rigid panoramic image registration methods will be discussed.

  5. Contrast Enhancement for Thermal Acoustic Breast Cancer Imaging via Resonant Stimulation

    DTIC Science & Technology

    2008-03-01

    Award Number: W81XWH-06-1-0389. TITLE: Contrast Enhancement for Thermal Acoustic Breast Cancer Imaging via Resonant Stimulation. ABSTRACT: This research plans to develop enhanced contrast thermal acoustic imaging (TAI) technology for the

  6. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available software (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphics capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive, commercially available imaging programs (e.g., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.
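
    The diameter measurement described above amounts to reading a gray-value profile drawn across a vessel and counting supra-threshold pixels. A minimal NumPy sketch of that idea follows; the Photoshop workflow itself is interactive, and the threshold, pixel size, and profile values here are made-up illustrations, not from the study:

```python
import numpy as np

def vessel_diameter(profile, threshold, pixel_size_um):
    """Longest contiguous run of supra-threshold pixels along a gray-value
    profile drawn perpendicular to the vessel axis (bright fluorescent
    vessel on a dark background), converted to micrometers."""
    best = run = 0
    for v in profile:
        run = run + 1 if v > threshold else 0
        best = max(best, run)
    return best * pixel_size_um

# Synthetic profile: 12 bright pixels (vessel) flanked by dark background.
profile = np.array([10] * 5 + [200] * 12 + [10] * 5)
print(vessel_diameter(profile, threshold=100, pixel_size_um=1.6))
```

    The same run-counting logic, applied to a binarized image along a measurement line, is what an interactive ruler tool effectively automates.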

  7. Opto-acoustic breast imaging with co-registered ultrasound

    NASA Astrophysics Data System (ADS)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Oraevsky, Alexander; Kist, Kenneth; Dornbluth, N. Carol; Otto, Pamela

    2014-03-01

We present results from a recent study involving the Imagio™ breast imaging system, which produces fused real-time two-dimensional color-coded opto-acoustic (OA) images that are co-registered and temporally interleaved with real-time gray scale ultrasound using a specialized duplex handheld probe. The use of dual optical wavelengths provides functional blood map images of breast tissue and tumors displayed with high contrast based on total hemoglobin and oxygen saturation of the blood. This provides functional diagnostic information pertaining to tumor metabolism. OA also shows morphologic information about tumor neo-vascularity that is complementary to the morphological information obtained with conventional gray scale ultrasound. This fusion technology conveniently enables real-time analysis of the functional opto-acoustic features of lesions detected by readers familiar with anatomical gray scale ultrasound. We demonstrate co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical study that provide new insight into the function of tumors in vivo. Results from the Feasibility Study show preliminary evidence that the technology may have the capability to improve characterization of benign and malignant breast masses over conventional diagnostic breast ultrasound alone and to improve overall accuracy of breast mass diagnosis. In particular, OA improved specificity over that of conventional diagnostic ultrasound, which could potentially reduce the number of negative biopsies performed without missing cancers.

  8. Two dimensional photoacoustic imaging using microfiber interferometric acoustic transducers

    NASA Astrophysics Data System (ADS)

    Wang, Xiu Xin; Li, Zhang Yong; Tian, Yin; Wang, Wei; Pang, Yu; Tam, Kin Yip

    2018-07-01

A photoacoustic imaging transducer with a pair of wavelength-matched Bragg gratings (forming a Fabry-Perot cavity) inscribed on a short section of microfiber has been developed. A tunable laser, with its wavelength matched to one of the selected fringe slopes, was used to track the acoustically induced wavelength shift. Interferometric fringes with high finesse in transmission significantly enhanced the sensitivity of the transducer, even under very small acoustic perturbations. The performance of this novel transducer was evaluated through imaging studies of human hairs (∼98 μm in diameter). The spatial resolution is 300 μm. We have demonstrated that the novel transducer developed in this study is a versatile tool for photoacoustic imaging studies.
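
    The slope-based interrogation described in this abstract can be illustrated numerically: bias the laser at the steepest point of an idealized Fabry-Perot (Airy) transmission fringe, and a small acoustically induced phase perturbation then maps linearly onto an intensity change via that slope. The finesse coefficient and perturbation size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Idealized Fabry-Perot transmission (Airy function); the coefficient of
# finesse F is illustrative, not taken from the paper.
F = 200.0

def transmission(phase):
    return 1.0 / (1.0 + F * np.sin(phase / 2.0) ** 2)

# Bias the interrogating laser at the steepest point of a fringe slope, so a
# tiny acoustically induced phase perturbation maps linearly onto intensity.
phase = np.linspace(0.0, 0.2, 20001)
slope = np.gradient(transmission(phase), phase)
bias = phase[np.argmax(np.abs(slope))]

dphi = 1e-4                                  # small acoustic phase shift
dI = transmission(bias + dphi) - transmission(bias)
print(float(bias), float(dI / dphi))
```

    Sharper fringes (higher finesse) give a steeper slope and hence a larger intensity change per unit acoustic perturbation, which is the sensitivity enhancement the abstract refers to.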

  9. Evaluation of a video image detection system : final report.

    DOT National Transportation Integrated Search

    1994-05-01

A video image detection system (VIDS) is an advanced wide-area traffic monitoring system that processes input from a video camera. The Autoscope VIDS coupled with an information management system was selected as the monitoring device because test...

  10. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max-based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
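
    The entropy-of-patches idea can be sketched as follows. Note this is a simplified stand-in, not the authors' exact HIP definition: the patch size, histogram bin count, and aggregation by mean are all assumptions made for illustration:

```python
import numpy as np

def patch_entropy_index(frame, patch=8, bins=16):
    """Mean Shannon entropy of the gray-level histogram of each
    non-overlapping patch; perfectly homogeneous patches score 0 bits."""
    h, w = frame.shape
    ents = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            hist, _ = np.histogram(frame[y:y+patch, x:x+patch],
                                   bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            ents.append(float(-(p * np.log2(p)).sum()))
    return float(np.mean(ents))

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128, dtype=np.uint8)            # homogeneous frame
noisy = rng.integers(0, 256, (64, 64)).astype(np.uint8)  # heterogeneous frame
print(patch_entropy_index(flat), patch_entropy_index(noisy))
```

    Evaluating such an index per frame yields a curve over the sequence, and frames at its salient points become key-frame candidates.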

  11. Nonlinear ultrasound imaging of nanoscale acoustic biomolecules

    NASA Astrophysics Data System (ADS)

    Maresca, David; Lakshmanan, Anupama; Lee-Gosselin, Audrey; Melis, Johan M.; Ni, Yu-Li; Bourdeau, Raymond W.; Kochmann, Dennis M.; Shapiro, Mikhail G.

    2017-02-01

    Ultrasound imaging is widely used to probe the mechanical structure of tissues and visualize blood flow. However, the ability of ultrasound to observe specific molecular and cellular signals is limited. Recently, a unique class of gas-filled protein nanostructures called gas vesicles (GVs) was introduced as nanoscale (˜250 nm) contrast agents for ultrasound, accompanied by the possibilities of genetic engineering, imaging of targets outside the vasculature and monitoring of cellular signals such as gene expression. These possibilities would be aided by methods to discriminate GV-generated ultrasound signals from anatomical background. Here, we show that the nonlinear response of engineered GVs to acoustic pressure enables selective imaging of these nanostructures using a tailored amplitude modulation strategy. Finite element modeling predicted a strongly nonlinear mechanical deformation and acoustic response to ultrasound in engineered GVs. This response was confirmed with ultrasound measurements in the range of 10 to 25 MHz. An amplitude modulation pulse sequence based on this nonlinear response allows engineered GVs to be distinguished from linear scatterers and other GV types with a contrast ratio greater than 11.5 dB. We demonstrate the effectiveness of this nonlinear imaging strategy in vitro, in cellulo, and in vivo.
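
    The amplitude-modulation strategy rests on a simple identity: for a purely linear scatterer, the echo from a full-amplitude pulse equals the sum of the echoes from two half-amplitude pulses, so subtracting them isolates nonlinear responders such as the engineered GVs. A toy sketch, with an assumed quadratic (buckling-like) nonlinearity standing in for the real GV response:

```python
import numpy as np

def echo(p, alpha):
    """Toy scatterer response: linear term plus an optional quadratic
    nonlinearity with coefficient alpha (alpha is a made-up parameter)."""
    return p + alpha * p ** 2

t = np.linspace(0.0, 1.0, 500)
pulse = np.sin(2 * np.pi * 15 * t)       # transmit pulse at full amplitude

residuals = {}
for alpha, label in [(0.0, "linear scatterer"), (0.5, "nonlinear GV")]:
    # AM residual: full-amplitude echo minus the sum of two half-amplitude
    # echoes. Any purely linear response cancels exactly.
    r = echo(pulse, alpha) - 2 * echo(pulse / 2, alpha)
    residuals[label] = float(np.max(np.abs(r)))
    print(label, residuals[label])
```

    The same cancellation logic underlies the contrast ratio reported in the abstract: linear tissue scatterers drop out of the residual while the nonlinear nanostructures remain.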

  12. Photo acoustic imaging: technology, systems and market trends

    NASA Astrophysics Data System (ADS)

    Faucheux, Marc; d'Humières, Benoît; Cochard, Jacques

    2017-03-01

Although the Photo Acoustic effect was observed by Graham Bell in 1880, the first applications (gas analysis) occurred in the 1970s using the required energetic light pulses from lasers. During the mid-1990s, medical imaging research began to use the Photo Acoustic effect, and in vivo images were obtained in the mid-2000s. Since 2009, the number of patents related to Photo Acoustic Imaging (PAI) has increased dramatically. PAI machines for pre-clinical and small animal imaging have been used routinely for several years. Based on its very interesting features (non-ionizing radiation, noninvasive, high depth-to-resolution ratio, scalability, moderate price) and because it is able to deliver not only anatomical but also functional and molecular information, PAI is a very promising clinical imaging modality. It penetrates deeper into tissue than OCT (Optical Coherence Tomography) and provides a higher resolution than ultrasound. PAI is one of the fastest-growing imaging modalities, and some innovative clinical systems are planned to be on the market in 2017. Our study analyzes the different approaches, such as photoacoustic computed tomography, 3D photoacoustic microscopy, multispectral photoacoustic tomography and endoscopy, along with the recent and tremendous technological progress of the past decade: advances in image reconstruction algorithms, laser technology, ultrasound detectors and miniaturization. We analyze which medical domains and applications are the most concerned and explain what the forthcoming medical systems should be in the near future. We segment the market into four parts: components and R&D, pre-clinical, analytics, clinical. We analyze what the PAI medical markets in each segment should be, quantitatively and qualitatively, and their main trends. We point out the market accessibility (patents, regulations, clinical evaluations, clinical acceptance, funding). In conclusion, we explain the main market drivers and challenges to overcome and give a road map for medical

  13. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    PubMed

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced-choice procedure, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
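
    The bisection procedure over a quality ladder can be sketched as a standard binary search driven by the assessor's binary forced-choice answers. The QP ladder and the decision function below are hypothetical stand-ins for real coded clips and a real viewer:

```python
def find_jnd(levels, distinguishable):
    """Bisection over quality levels sorted from best to worst.
    `distinguishable(level)` stands in for an assessor's binary forced-choice
    answer: True if the coded clip looks different from the anchor.
    Returns the first (highest-quality) level judged distinguishable."""
    lo, hi = 0, len(levels) - 1   # invariant: the boundary lies in levels[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if distinguishable(levels[mid]):
            hi = mid              # boundary is at mid or earlier
        else:
            lo = mid + 1
    return levels[lo]

# Hypothetical QP ladder; assume the viewer first notices a difference at QP 34.
qps = list(range(20, 52, 2))
print(find_jnd(qps, lambda qp: qp >= 34))  # 34
```

    With 16 ladder points, bisection needs at most 4 comparisons instead of 16, which is the comparison-count saving the abstract describes.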

  15. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  16. Ultrasound-Mediated Biophotonic Imaging: A Review of Acousto-Optical Tomography and Photo-Acoustic Tomography

    PubMed Central

    Wang, Lihong V.

    2004-01-01

This article reviews two types of ultrasound-mediated biophotonic imaging: acousto-optical tomography (AOT, also called ultrasound-modulated optical tomography) and photo-acoustic tomography (PAT, also called opto-acoustic or thermo-acoustic tomography), both of which are based on non-ionizing optical and ultrasonic waves. The goal of these technologies is to combine the contrast advantage of the optical properties and the resolution advantage of ultrasound. In these two technologies, the imaging contrast is based primarily on the optical properties of biological tissues, and the imaging resolution is based primarily on the ultrasonic waves that either are provided externally or produced internally, within the biological tissues. In fact, ultrasonic mediation overcomes both the resolution disadvantage of pure optical imaging in thick tissues and the contrast and speckle disadvantages of pure ultrasonic imaging. In our discussion of AOT, the relationship between modulation depth and acoustic amplitude is clarified. Potential clinical applications of ultrasound-mediated biophotonic imaging include early cancer detection, functional imaging, and molecular imaging. PMID:15096709

  17. High Resolution X-ray Phase Contrast Imaging with Acoustic Tissue-Selective Contrast Enhancement

    DTIC Science & Technology

    2008-06-01

TITLE: High Resolution X-ray Phase Contrast Imaging with Acoustic Tissue-Selective Contrast Enhancement. PRINCIPAL INVESTIGATOR: Gerald J. Diebold, Ph.D. GRANT NUMBER: W81XWH-04-1-0481. ABSTRACT: Additional phase contrast features are visible at the interfaces of soft tissues as slight contrast enhancements. The image sequence in Fig. 2 shows an image

  18. Evaluation of Skybox Video and Still Image products

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Kuschk, G.; Reinartz, P.

    2014-11-01

The SkySat-1 satellite launched by Skybox Imaging on November 21, 2013 opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. Image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching or multi-image matching are used and compared. As no ground-truth height reference model is available to the authors, comparisons are performed on flat surfaces and between differently matched DSMs. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

  19. Negative refraction imaging of acoustic metamaterial lens in the supersonic range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Jianning; Wen, Tingdun; Key Laboratory of Electronic Testing Technology, North University of China, Taiyuan 030051

    2014-05-15

Acoustic metamaterials with a negative refraction index are the most promising route to overcoming the diffraction limit of acoustic imaging and achieving ultrahigh resolution. In this paper, we use a locally resonant phononic crystal as the unit cell to construct an acoustic negative-refraction lens. Based on the vibration model of the phononic crystal, negative quality parameters of the lens are obtained when it is excited near the system resonance frequency. Simulation results show that negative refraction of the acoustic lens can be achieved when a sound wave transmits through the phononic crystal plate. The patterns of the imaging field agree well with those of the incident wave, while the dispersion is very weak. The unit cell size in the simulation is 0.0005 m and the wavelength of the sound source is 0.02 m, from which we show that acoustic signals can be manipulated through structures with dimensions much smaller than the wavelength of the incident wave.

  20. Note: Sound recovery from video using SVD-based information extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz is used to film the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and then assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and the image projections onto a specific OIB can be recovered as intelligible acoustic signals. Standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online, within 1 min, from a piece of paper stimulated by sound waves.
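
    The pipeline described here (sub-images reshaped into column vectors, SVD, projection onto the leading orthonormal image basis) can be sketched on synthetic data. The frame rate, tone frequency, patch size, and noise level below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_frames, n_pix = 4000, 2000, 64   # frame rate (Hz), frames, pixels per sub-image
tone = np.sin(2 * np.pi * 256 * np.arange(n_frames) / fs)

# Each video frame contributes one column: a fixed spatial pattern whose
# brightness is modulated by the vibration, plus sensor noise.
pattern = rng.standard_normal(n_pix)
frames = np.outer(pattern, tone) + 0.1 * rng.standard_normal((n_pix, n_frames))

# SVD of the assembled matrix: the leading right-singular vector is the
# projection of the frames onto the dominant orthonormal image basis, and
# recovers the acoustic waveform up to scale and sign.
U, s, Vt = np.linalg.svd(frames, full_matrices=False)
recovered = Vt[0]

peak_hz = float(np.fft.rfftfreq(n_frames, 1 / fs)[
    np.argmax(np.abs(np.fft.rfft(recovered)))])
print(peak_hz)
```

    On this synthetic "tuning fork" the spectral peak of the recovered waveform lands at the driving frequency, mirroring the 256 Hz extraction reported in the note.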

  1. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    PubMed

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.

  2. The path to COVIS: A review of acoustic imaging of hydrothermal flow regimes

    NASA Astrophysics Data System (ADS)

    Bemis, Karen G.; Silver, Deborah; Xu, Guangyu; Light, Russ; Jackson, Darrell; Jones, Christopher; Ozer, Sedat; Liu, Li

    2015-11-01

    Acoustic imaging of hydrothermal flow regimes started with the incidental recognition of a plume on a routine sonar scan for obstacles in the path of the human-occupied submersible ALVIN. Developments in sonar engineering, acoustic data processing and scientific visualization have been combined to develop technology which can effectively capture the behavior of focused and diffuse hydrothermal discharge. This paper traces the development of these acoustic imaging techniques for hydrothermal flow regimes from their conception through to the development of the Cabled Observatory Vent Imaging Sonar (COVIS). COVIS has monitored such flow eight times a day for several years. Successful acoustic techniques for estimating plume entrainment, bending, vertical rise, volume flux, and heat flux are presented as is the state-of-the-art in diffuse flow detection.

  3. Platforms for hyperspectral imaging, in-situ optical and acoustical imaging in urbanized regions

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.; Oney, Taylor

    2016-10-01

Hyperspectral measurements of the water surface of urban coastal waters are presented. Oblique bidirectional reflectance factor imagery was acquired in a turbid coastal sub-estuary of the Indian River Lagoon, Florida and along the coastal surf zone waters of the nearby Atlantic Ocean. Imagery was also collected using a pushbroom hyperspectral imager mounted on a fixed platform with a calibrated circular mechatronic rotation stage. Oblique imagery of the shoreline and subsurface features clearly shows subsurface bottom features and rip current features within the surf zone water column. In-situ hyperspectral optical signatures were acquired from a vessel as a function of depth to determine the attenuation spectrum in Palm Bay. A unique stationary-platform methodology was used to acquire subsurface acoustic images showing the presence of moving bottom-boundary nephelometric layers passing through the acoustic fan beam. The acoustic fan beam imagery indicated the presence of oscillatory subsurface waves in the urbanized coastal estuary. Hyperspectral imaging using the fixed-platform techniques is being used to collect hyperspectral bidirectional reflectance factor (BRF) measurements from locations at buildings and bridges in order to provide new opportunities to advance our scientific understanding of aquatic environments in urbanized regions.

  4. Performance Evaluation of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.

    2011-01-01

An acoustic electronic scanning array for acquiring images of a person for a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance is evaluated using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708
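
    The FMR/FNMR evaluation reduces to counting threshold crossings in the impostor and genuine score distributions, and sweeping the threshold traces out the ROC curve. A minimal sketch with made-up similarity scores (not data from the study):

```python
import numpy as np

def fmr_fnmr(genuine, impostor, threshold):
    """False Match Rate: fraction of impostor scores accepted (>= threshold).
    False Non-Match Rate: fraction of genuine scores rejected (< threshold)."""
    fmr = float(np.mean(np.asarray(impostor) >= threshold))
    fnmr = float(np.mean(np.asarray(genuine) < threshold))
    return fmr, fnmr

# Illustrative similarity scores (higher = more similar); invented values.
genuine = [0.91, 0.85, 0.88, 0.79, 0.95, 0.82]
impostor = [0.40, 0.55, 0.61, 0.35, 0.72, 0.48]

# One operating point on the ROC curve:
f, n = fmr_fnmr(genuine, impostor, threshold=0.70)
print(f, n)
```

    Raising the threshold trades false matches for false non-matches; the crossing point of the two rates is the equal error rate often quoted for biometric systems.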

  5. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
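
    The calibrate-then-invert step described above can be sketched as Tikhonov-regularized least squares. The calibration matrix below is a random stand-in for a measured one, and the dimensions, noise level, and regularization weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_bands = 40, 10

# Calibration matrix: each column is the sensor pattern produced by one
# narrow spectral band passing through the diffractive filter (a random
# stand-in here for a measured calibration).
A = rng.random((n_pix, n_bands))

spectrum = np.zeros(n_bands)
spectrum[3] = 1.0                        # a single narrow emission line
measurement = A @ spectrum + 1e-3 * rng.standard_normal(n_pix)

# Tikhonov-regularized inversion of the spatial code back into a spectrum.
lam = 1e-3
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ measurement)
print(int(np.argmax(s_hat)))
```

    The regularization weight controls the trade-off between noise amplification and spectral fidelity, which is why a calibration step and a well-conditioned code matter in practice.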

  6. Remote Acoustic Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    Watson, Z.; Hart, M.

Identification and characterization of orbiting objects that are not spatially resolved are challenging problems for traditional remote sensing methods. Hyper-temporal imaging, enabled by fast, low-noise electro-optical detectors, is a new sensing modality which may allow the direct detection of acoustic resonances on satellites, enabling a new regime of signature and state detection. Detectable signatures may be caused by the oscillations of solar panels, high-gain antennae, or other on-board subsystems driven by thermal gradients, fluctuations in solar radiation pressure, worn reaction wheels, or orbit maneuvers. Herein we present the first hyper-temporal observations of geosynchronous satellites. Data were collected at the Kuiper 1.54-meter telescope in Arizona using an experimental dual-channel imaging instrument that simultaneously measures light in two orthogonally polarized beams at sampling rates extending up to 1 kHz. In these observations, we see evidence of acoustic resonances in the polarization state of satellites. The technique is expected to support object identification and characterization of on-board components and to act as a discriminant between active satellites, debris, and passive bodies.
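
    Detecting such a resonance in a hyper-temporal photometric series comes down to finding a periodogram peak after detrending. A toy sketch follows; the 7.25 Hz mode, amplitudes, drift, and noise level are invented for illustration and do not come from the observations:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 1000, 4.0                      # 1 kHz sampling, 4 s of photometry
t = np.arange(int(fs * dur)) / fs

# Toy light curve: a slow brightness drift plus a weak 7.25 Hz flexing mode
# buried in noise (all values invented for illustration).
flux = (1.0 + 0.02 * t + 0.005 * np.sin(2 * np.pi * 7.25 * t)
        + 0.02 * rng.standard_normal(t.size))

# Remove the linear drift, then locate the periodogram peak.
detrended = flux - np.polyval(np.polyfit(t, flux, 1), t)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = float(freqs[np.argmax(np.abs(np.fft.rfft(detrended)))])
print(peak_hz)
```

    Even a modulation well below the noise floor per sample becomes detectable because the FFT integrates the coherent oscillation across thousands of samples, which is what the fast, low-noise detectors enable.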

  7. Analysis of Particle Image Velocimetry (PIV) Data for Acoustic Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

Acoustic velocity measurements were taken using Particle Image Velocimetry (PIV) in a Normal Incidence Tube configuration at various frequency, phase, and amplitude levels. This report presents the results of the PIV analysis and data reduction portions of the test and details the processing that was done. Estimates of lower measurement sensitivity levels were determined based on the PIV image quality, correlation, and noise level parameters used in the test. Comparisons of the measurements with linear acoustic theory are presented. The onset of nonlinear, harmonic-frequency acoustic levels was also studied for various decibel and frequency levels ranging from 90 to 132 dB and 500 to 3000 Hz, respectively.
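
    For scale, the acoustic particle velocities PIV must resolve at these sound pressure levels can be estimated from the plane-wave impedance relation u = p/(ρc). Note the standing-wave field in a normal-incidence tube differs from a free plane wave, so this is only an order-of-magnitude check, with assumed air properties:

```python
def particle_velocity(spl_db, rho=1.21, c=343.0, p_ref=20e-6):
    """RMS acoustic pressure from SPL (dB re 20 uPa), then u = p / (rho * c)
    for a propagating plane wave in air at roughly 20 degrees C."""
    p_rms = p_ref * 10 ** (spl_db / 20.0)
    return p_rms / (rho * c)

# The report's SPL range spans roughly two orders of magnitude in velocity.
for spl in (90, 132):
    print(spl, "dB ->", particle_velocity(spl), "m/s")
```

    At 90 dB the plane-wave particle velocity is on the order of a millimeter per second, which illustrates why lower measurement sensitivity limits were a central concern of the test.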

  8. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into inliers and outliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
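
    The inlier/outlier split can be sketched with a plain RANSAC loop; note this is not the preemptive variant the paper uses, and a pure 2-D translation stands in for the full epipolar model. All point data below are synthetic:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Minimal RANSAC sketch: the model is a pure 2-D translation,
    hypothesised from one random match and scored by how many matches
    agree within `tol` pixels."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                      # 1-point hypothesis
        inliers = np.linalg.norm(dst - (src + shift), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set only.
    shift = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return shift, best_inliers

rng = np.random.default_rng(4)
src = rng.uniform(0, 640, (60, 2))
dst = src + np.array([12.0, -5.0]) + rng.normal(0, 0.5, (60, 2))
dst[:12] = rng.uniform(0, 640, (12, 2))              # 12 gross mismatches
shift, inliers = ransac_translation(src, dst)
print(np.round(shift, 1), int(inliers.sum()))
```

    In the paper's setting the hypothesis is an essential matrix rather than a translation, and the preemptive variant scores a fixed set of hypotheses against progressively more matches, but the inlier-consensus principle is the same.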

  9. Biased lineup instructions and face identification from video images.

    PubMed

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.

  10. Using underwater video imaging as an assessment tool for coastal condition

    EPA Science Inventory

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  11. A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.

    2003-06-01

    The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.

  12. Quantitative Ultrasound Imaging Using Acoustic Backscatter Coefficients.

    NASA Astrophysics Data System (ADS)

    Boote, Evan Jeffery

    Current clinical ultrasound scanners render images which have brightness levels related to the degree of backscattered energy from the tissue being imaged. These images offer the interpreter a qualitative impression of the scattering characteristics of the tissue being examined, but due to the complex factors which affect the amplitude and character of the echoed acoustic energy, it is difficult to make quantitative assessments of scattering nature of the tissue, and thus, difficult to make precise diagnosis when subtle disease effects are present. In this dissertation, a method of data reduction for determining acoustic backscatter coefficients is adapted for use in forming quantitative ultrasound images of this parameter. In these images, the brightness level of an individual pixel corresponds to the backscatter coefficient determined for the spatial position represented by that pixel. The data reduction method utilized rigorously accounts for extraneous factors which affect the scattered echo waveform and has been demonstrated to accurately determine backscatter coefficients under a wide range of conditions. The algorithms and procedures used to form backscatter coefficient images are described. These were tested using tissue-mimicking phantoms which have regions of varying scattering levels. Another phantom has a fat-mimicking layer for testing these techniques under more clinically relevant conditions. Backscatter coefficient images were also formed of in vitro human liver tissue. A clinical ultrasound scanner has been adapted for use as a backscatter coefficient imaging platform. The digital interface between the scanner and the computer used for data reduction are described. Initial tests, using phantoms are presented. A study of backscatter coefficient imaging of in vivo liver was performed using several normal, healthy human subjects.

  13. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging instrument for monitoring the interaction between fast-moving viruses and hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to reveal this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
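The Kalman-filtering stage can be sketched with a minimal 1-D constant-velocity filter; the noise parameters and synthetic track below are illustrative, not the paper's:

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=0.25):
    """Minimal 1-D constant-velocity Kalman filter, illustrating the
    kind of recursive smoothing applied to noisy fluorescence tracks
    (q and r are illustrative noise variances)."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([zs[0], 0.0])               # initial state
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
true = np.linspace(0, 10, 50)                # object moving at constant speed
noisy = true + rng.normal(0, 0.5, 50)        # simulated background noise
smooth = kalman_1d(noisy)
```

The same predict/update recurrence extends to 2-D image coordinates by enlarging the state vector.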

  14. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses, as its initial condition, a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm.

  15. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river-bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
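The patch-correlation step can be sketched as an exhaustive ZNCC search in plain NumPy (patch size, search radius and test data are illustrative, not the paper's):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation between two equally
    sized patches; 1.0 indicates a perfect match."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def track_patch(frame, template, search=5, y0=0, x0=0):
    """Exhaustive ZNCC search for `template` in `frame` around (y0, x0),
    i.e. the patch correlation between time-lagged scenes."""
    h, w = template.shape
    best, best_pos = -2.0, (y0, x0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue                      # window falls outside the frame
            s = zncc(frame[y:y + h, x:x + w], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best

rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, (64, 64))
template = frame[20:30, 24:34].copy()        # patch taken at (20, 24)
pos, score = track_patch(frame, template, search=6, y0=22, x0=22)
print(pos, round(score, 3))
```

Tracking the best-match position frame to frame yields the displacement time series from which the frequency response is obtained.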

  17. Method and apparatus for detecting internal structures of bulk objects using acoustic imaging

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2002-01-01

    Apparatus for producing an acoustic image of an object according to the present invention may comprise an excitation source for vibrating the object to produce at least one acoustic wave therein. The acoustic wave results in the formation of at least one surface displacement on the surface of the object. A light source produces an optical object wavefront and an optical reference wavefront and directs the optical object wavefront toward the surface of the object to produce a modulated optical object wavefront. A modulator operatively associated with the optical reference wavefront modulates the optical reference wavefront in synchronization with the acoustic wave to produce a modulated optical reference wavefront. A sensing medium positioned to receive the modulated optical object wavefront and the modulated optical reference wavefront combines the modulated optical object and reference wavefronts to produce an image related to the surface displacement on the surface of the object. A detector detects the image related to the surface displacement produced by the sensing medium. A processing system operatively associated with the detector constructs an acoustic image of interior features of the object based on the phase and amplitude of the surface displacement on the surface of the object.

  18. Video guidance, landing, and imaging systems

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Rice, R. B.; Moog, R. D.

    1975-01-01

    The adaptive potential of video guidance technology for earth orbital and interplanetary missions was explored. The application of video acquisition, pointing, tracking, and navigation technology was considered for three primary missions: planetary landing, earth resources satellite, and spacecraft rendezvous and docking. It was found that an imaging system can be mechanized to provide a spacecraft or satellite with a considerable amount of adaptability with respect to its environment. It also provides a level of autonomy essential to many future missions and enhances their data-gathering ability. The feasibility of an autonomous video guidance system capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was successfully demonstrated in the laboratory. The techniques developed for acquisition, pointing, and tracking show promise for recognizing and tracking coastlines, rivers, and other features of interest. Routines were written and checked for rendezvous, docking, and station-keeping functions.

  19. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    PubMed

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  20. Imaging acoustic vibrations in an ear model using spectrally encoded interferometry

    NASA Astrophysics Data System (ADS)

    Grechin, Sveta; Yelin, Dvir

    2018-01-01

    Imaging vibrational patterns of the tympanic membrane would allow an accurate measurement of its mechanical properties and provide early diagnosis of various hearing disorders. Various optical technologies have been suggested to address this challenge and demonstrated in vitro using point scanning and full-field interferometry. Spectrally encoded imaging has been previously demonstrated capable of imaging tissue acoustic vibrations with high spatial resolution, including two-dimensional phase and amplitude mapping. In this work, we demonstrate a compact optical apparatus for imaging acoustic vibrations that could be incorporated into a commercially available digital otoscope. By transmitting harmonic sound waves through the otoscope insufflation port and analyzing the spectral interferograms using custom-built software, we demonstrate high-resolution vibration imaging of a circular rubber membrane within an ear model.

  1. Segmentation of the spinous process and its acoustic shadow in vertebral ultrasound images.

    PubMed

    Berton, Florian; Cheriet, Farida; Miron, Marie-Claude; Laporte, Catherine

    2016-05-01

    Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, in the extraction of several image features and the selection of the most relevant ones for the discrimination of the three regions. Then, using this set of features and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84%, 92% and 91% were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained respectively for the spinous process and acoustic shadow, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of the image resolution. This suggests that the proposed method is a promising tool for measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis.
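The pixel-wise classification step can be illustrated with a compact shared-covariance LDA in plain NumPy; the two-feature synthetic data below merely mimic three separable regions and are not the paper's actual feature set:

```python
import numpy as np

def lda_fit(X, y):
    """Linear discriminant analysis with a shared covariance,
    mirroring a per-pixel three-class labelling."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    Xc = np.concatenate([X[y == c] - means[i] for i, c in enumerate(classes)])
    icov = np.linalg.inv(Xc.T @ Xc / len(X))   # pooled covariance, inverted
    priors = np.array([(y == c).mean() for c in classes])
    return classes, means, icov, priors

def lda_predict(model, X):
    classes, means, icov, priors = model
    # linear discriminant score for each class, highest wins
    scores = np.array([X @ icov @ m - 0.5 * m @ icov @ m + np.log(p)
                       for m, p in zip(means, priors)]).T
    return classes[np.argmax(scores, axis=1)]

# three synthetic "regions" in a two-feature space
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.5, (100, 2)) for c in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 100)
model = lda_fit(X, y)
acc = (lda_predict(model, X) == y).mean()
print(round(acc, 3))
```

In the paper this labelling is followed by a regularization pass that enforces the expected geometry of the vertebra.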

  2. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder for recording pictures from wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data are stored from the DSP to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the DSP and reduce the size of the executable code. At the same time, appropriate addresses are assigned to each memory according to its speed, and the memory structure is also optimized. In addition, this system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.
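The core of the JPEG coding step is the 8x8 DCT; a direct matrix-form sketch (not the fast DCT the authors run on the DSP) is:

```python
import numpy as np

def dct2_8x8(block):
    """2-D orthonormal DCT-II of an 8x8 block, the transform at the
    heart of JPEG coding (straightforward matrix form)."""
    N = 8
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)               # DC row normalisation
    return C @ block @ C.T

block = np.full((8, 8), 128.0)               # a flat block of grey pixels
coeffs = dct2_8x8(block)
print(round(float(coeffs[0, 0]), 1))         # all energy in the DC term
```

For a flat block all energy collapses into the DC coefficient, which is what makes the subsequent quantization and entropy coding effective.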

  3. Classroom Materials from the Acoustical Society of America

    NASA Astrophysics Data System (ADS)

    Adams, W. K.; Clark, A.; Schneider, K.

    2013-09-01

    As part of the new education initiatives of the Acoustical Society of America (ASA), an activity kit for teachers that includes a variety of lessons addressing acoustics for a range of students (K-12) has been created. The "Sound and Music Activity Kit" is free to K-12 teachers. It includes materials sufficient to teach a class of 30 students plus a USB thumb drive containing 47 research-based, interactive, student-tested lessons, laboratory exercises, several assessments, and video clips of a class using the materials. ASA has also partnered with both the Optical Society of America (OSA) and the American Association of Physics Teachers (AAPT). AAPT Physics Teaching Resource Agents (PTRA) have reviewed the lessons along with members of the ASA Teacher Activity Kit Committee. Topics include basic learning goals for teaching the physics of sound with examples and applications relating to medical imaging, animal bioacoustics, physical and psychological acoustics, speech, audiology, and architectural acoustics.

  4. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (a copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e., images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g., detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g., nasogastric tubes).
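The printer's 64-grey-level limit can be emulated by simple uniform quantisation (a sketch; the VP-3500's actual tone mapping is not described in the record):

```python
import numpy as np

# quantise a full [0, 255] grey ramp down to 64 printable levels
img = np.arange(256, dtype=float).reshape(16, 16)
levels = 64
q = np.round(img / 255.0 * (levels - 1)) / (levels - 1) * 255.0
print(len(np.unique(q)))                     # distinct output grey levels
```

Dropping from 256 to 64 levels quadruples the quantisation step, which matters most for low-contrast findings such as those the observer study flagged.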

  5. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents’ Perspectives

    PubMed Central

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O’Connor, Alexander; Collins, Michael J.

    2015-01-01

    This study examined adolescents’ attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one’s attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players’ attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents’ social cognitive judgments. PMID:25729336

  6. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    ARL-TR-7705 ● JUNE 2016 ● US Army Research Laboratory ● Complex Event Processing for Content-Based Text, Image, and Video Retrieval

  7. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  8. Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties

    PubMed Central

    Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Joshua Pfefer, T.

    2016-01-01

    Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison. PMID:26886681

  9. High-spatial-resolution sub-surface imaging using a laser-based acoustic microscopy technique.

    PubMed

    Balogun, Oluwaseyi; Cole, Garrett D; Huber, Robert; Chinn, Diane; Murray, Todd W; Spicer, James B

    2011-01-01

    Scanning acoustic microscopy techniques operating at frequencies in the gigahertz range are suitable for the elastic characterization and interior imaging of solid media with micrometer-scale spatial resolution. Acoustic wave propagation at these frequencies is strongly limited by energy losses, particularly from attenuation in the coupling media used to transmit ultrasound to a specimen, leading to a decrease in the depth in a specimen that can be interrogated. In this work, a laser-based acoustic microscopy technique is presented that uses a pulsed laser source for the generation of broadband acoustic waves and an optical interferometer for detection. The use of a 900-ps microchip pulsed laser facilitates the generation of acoustic waves with frequencies extending up to 1 GHz which allows for the resolution of micrometer-scale features in a specimen. Furthermore, the combination of optical generation and detection approaches eliminates the use of an ultrasonic coupling medium, and allows for elastic characterization and interior imaging at penetration depths on the order of several hundred micrometers. Experimental results illustrating the use of the laser-based acoustic microscopy technique for imaging micrometer-scale subsurface geometrical features in a 70-μm-thick single-crystal silicon wafer with a (100) orientation are presented.

  10. Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar

    USGS Publications Warehouse

    Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.

    2004-01-01

    We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.

  11. Efficacy of passive acoustic screening: implications for the design of imager and MR-suite.

    PubMed

    Moelker, Adriaan; Vogel, Mika W; Pattynama, Peter M T

    2003-02-01

    To investigate the efficacy of passive acoustic screening in the magnetic resonance (MR) environment by reducing direct and indirect MR-related acoustic noise, both from the patient's and health worker's perspective. Direct acoustic noise refers to sound originating from the inner and outer shrouds of the MR imager, and indirect noise to acoustic reflections from the walls of the MR suite. Sound measurements were obtained inside the magnet bore (patient position) and at the entrance of the MR imager (health worker position). Inner and outer shrouds and walls were lined with thick layers of sound insulation to eliminate the direct and indirect acoustic pathways. Sound pressure levels (SPLs) and octave band frequencies were acquired during various MR imaging sequences at 1.5 T. Inside the magnet bore, direct acoustic noise radiating from the inner shroud was most relevant, with substantial reductions of up to 18.8 dB when using passive screening of the magnetic bore. At the magnet bore entrance, blocking acoustic noise from the outer shroud and reflections showed significant reductions of 4.5 and 2.8 dB, respectively, and 9.4 dB when simultaneously applied. Inner shroud coverage contributed minimally to the overall SPL reduction. Maximum noise reduction by passive acoustic screening can be achieved by reducing direct sound conduction through the inner and outer shrouds. Additional measures to optimize the acoustic properties of the MR suite have only little effect. Copyright 2003 Wiley-Liss, Inc.

  12. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. This progress should be harnessed to improve quality in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the characteristic features of the lecturing scene. While the dynamic lecturing scene is shot with a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and a lecture stick are extracted from the digital video images by pattern-recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content with the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.

  13. Guided filtering for solar image/video processing

    NASA Astrophysics Data System (ADS)

    Xu, Long; Yan, Yihua; Cheng, Jun

    2017-06-01

    A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily pick out the important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures clearly trace the progress of solar flares, prominence eruptions, coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm enhances the visual quality of solar images significantly beyond both the original input and several classical image enhancement algorithms, thus making it easier to pick out interesting solar burst activities from recorded images/movies.
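    The guided-filter smoothing step described above can be sketched in a few lines of NumPy. This is a minimal self-guided grayscale version (guide = input image), not the authors' full pipeline; the radius `r` and regularizer `eps` are illustrative choices.

```python
import numpy as np

def box(a, r):
    """Mean over a (2r+1)x(2r+1) window via integral images (edge-padded)."""
    pad = np.pad(a, r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    n = 2 * r + 1
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def guided_filter(I, p, r=4, eps=0.04):
    """Guided filter; with I == p it smooths noise while keeping edges."""
    mI, mp = box(I, r), box(p, r)
    varI = box(I * I, r) - mI * mI
    covIp = box(I * p, r) - mI * mp
    a = covIp / (varI + eps)          # ~1 near edges, ~0 in flat regions
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

# denoise a noisy step edge, guiding the filter with the image itself
rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
out = guided_filter(noisy, noisy)
```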

  14. Static hand gesture recognition from a video

    NASA Astrophysics Data System (ADS)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning, simultaneously combining hand shapes with the orientation and movement of the hands. Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people, as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system, based on the Kohonen neural network, for recognizing static hand gestures from a video. We propose an algorithm to separate out the key frames, which contain the correct gestures, from a video sequence. We segment hand images from complex and non-uniform backgrounds. Features are extracted by applying the Kohonen network to the key frames, and recognition is then performed.
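    The key-frame separation step can be illustrated with a simple frame-differencing sketch. This is a stand-in for the paper's actual selection algorithm, which the abstract does not specify; `thresh` is an assumed tuning parameter.

```python
import numpy as np

def key_frames(frames, thresh):
    """Keep frame 0, then every frame whose mean absolute difference
    from the last kept frame exceeds thresh (a candidate new gesture)."""
    keys = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[keys[-1]].astype(float))
        if diff.mean() > thresh:
            keys.append(i)
    return keys

# two static "gestures": five dark frames followed by five bright frames
clip = [np.zeros((8, 8), np.uint8)] * 5 + [np.full((8, 8), 100, np.uint8)] * 5
picks = key_frames(clip, thresh=10)
```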

  15. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.
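    A drastically simplified one-dimensional analog of the refocusing idea: time-reversed (delay-and-sum) back-propagation of receiver records adds coherently only at the true source position, recovering its location. All geometry and pulse parameters below are invented for illustration; the paper's TLR method operates on full scattered wavefields.

```python
import numpy as np

c = 343.0                              # sound speed in air (m/s)
fs = 10_000.0                          # sample rate (Hz)
src = 2.0                              # source position to recover (m)
rx = np.array([0.0, 1.0, 5.0, 8.0])    # receiver line array (m)

t = np.arange(512) / fs
pulse = lambda t0: np.exp(-((t - t0) * 2000.0) ** 2)
rec = np.array([pulse(abs(r - src) / c) for r in rx])   # receiver records

# back-propagate (time-reverse) each record onto candidate positions and sum;
# the records only add coherently at the true source location
grid = np.linspace(0.0, 8.0, 161)
img = np.array([sum(np.interp(abs(r - x) / c, t, sig)
                    for r, sig in zip(rx, rec)) for x in grid])
est = grid[int(np.argmax(img))]
```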

  16. Acoustic Imaging of Snowpack Physical Properties

    NASA Astrophysics Data System (ADS)

    Kinar, N. J.; Pomeroy, J. W.

    2011-12-01

    Measurements of snowpack depth, density, structure and temperature have often been conducted by the use of snowpits and invasive measurement devices. Previous research has shown that acoustic waves passing through snow are capable of measuring these properties. An experimental observation device (SAS2, System for the Acoustic Sounding of Snow) was used to autonomously send audible sound waves into the top of the snowpack and to receive and process the waves reflected from the interior and bottom of the snowpack. A loudspeaker and microphone array separated by an offset distance was suspended in the air above the surface of the snowpack. Sound waves produced from a loudspeaker as frequency-swept sequences and maximum length sequences were used as source signals. Up to 24 microphones measured the audible signal from the snowpack. The signal-to-noise ratio was compared between sequences in the presence of environmental noise contributed by wind and reflections from vegetation. Beamforming algorithms were used to reject spurious reflections and to compensate for movement of the sensor assembly during the time of data collection. A custom-designed circuit with digital signal processing hardware implemented an inversion algorithm to relate the reflected sound wave data to snowpack physical properties and to create a two-dimensional image of snowpack stratigraphy. The low-power-consumption circuit was powered by batteries and, through WiFi and Bluetooth interfaces, enabled the display of processed data on a mobile device. Acoustic observations were logged to an SD card after each measurement. The SAS2 system was deployed at remote field locations in the Rocky Mountains of Alberta, Canada. Acoustic snow properties data were compared with data collected from gravimetric sampling, thermocouple arrays, radiometers and snowpit observations of density, stratigraphy and crystal structure. Aspects for further research and limitations of the acoustic sensing system are also discussed.
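    The core of acoustic sounding, converting a reflected swept-frequency signal into a distance, can be sketched with a matched filter. The sampling rate, sweep, and delay below are illustrative; the SAS2 inversion for density and stratigraphy is far more involved.

```python
import numpy as np

fs = 48_000                     # sample rate (Hz)
c_air = 343.0                   # sound speed in air (m/s)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms source sweep
chirp = np.sin(2 * np.pi * (500.0 + 5e5 * t) * t)   # swept-frequency source

# simulated microphone record: the sweep returns after 200 samples
rec = np.zeros(2048)
delay = 200
rec[delay:delay + len(chirp)] += 0.5 * chirp

# matched filter: the correlation peak gives the two-way travel time
xc = np.correlate(rec, chirp, mode="valid")
lag = int(np.argmax(xc))
rng_m = c_air * lag / fs / 2.0   # one-way distance to the reflector (m)
```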

  17. Acoustical holographic recording with coherent optical read-out and image processing

    NASA Astrophysics Data System (ADS)

    Liu, H. K.

    1980-10-01

    New acoustic holographic wave memory devices have been designed for real-time in-situ recording applications. The basic operating principles of these devices and experimental results obtained with several prototypes are presented. Recording media used in the devices include thermoplastic resin, Crisco vegetable oil, and Wilson corn oil. In addition, nonlinear coherent optical image processing techniques, including equidensitometry, A-D conversion, and pseudo-color, all based on the new contact screen technique, are discussed with regard to enhancing the normally poorly resolved acoustical holographic images.

  18. Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story

    PubMed Central

    Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491

  19. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also include other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction in the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
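    The registration-plus-stacking idea can be sketched as follows. This toy version handles only global integer translations, estimated by FFT cross-correlation, and median-stacks the aligned frames; the paper's approach additionally warps frames and stacks around moving objects.

```python
import numpy as np

def register_shift(ref, frame):
    """Integer (dy, dx) that rolls frame back into alignment with ref,
    found as the peak of the circular FFT cross-correlation."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    return np.unravel_index(np.argmax(xc), xc.shape)

def stack(frames):
    """Median-stack frames after aligning each one to the first frame."""
    ref = frames[0]
    aligned = []
    for f in frames:
        dy, dx = register_shift(ref, f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.median(aligned, axis=0)

# three circularly shifted copies of a reference frame stack back to it
ref = np.arange(64.0).reshape(8, 8)
frames = [ref,
          np.roll(ref, (2, 3), axis=(0, 1)),
          np.roll(ref, (-1, 4), axis=(0, 1))]
result = stack(frames)
```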

  20. Automatic and quantitative measurement of laryngeal video stroboscopic images.

    PubMed

    Kuo, Chung-Feng Jeffrey; Kuo, Joseph; Hsiao, Shang-Wun; Lee, Chi-Lung; Lee, Jih-Chin; Ke, Bo-Han

    2017-01-01

    The laryngeal video stroboscope is an important instrument for physicians to analyze abnormalities and diseases in the glottal area, and it has been widely used around the world. However, without quantized indices, physicians can only make subjective judgments on glottal images. We designed a new laser projection marking module and applied it to the laryngeal video stroboscope to provide scale conversion reference parameters for glottal imaging and to quantify the physiological parameters of the glottis. Image processing technology was used to segment the important image regions of interest. Information on the glottis was quantified, and a vocal fold image segmentation system was completed to assist clinical diagnosis and increase accuracy. Regarding image processing, histogram equalization was used to enhance glottal image contrast, and a center-weighted median filter was applied to remove image noise while retaining the texture of the glottal image. Statistical threshold determination was used for automatic segmentation of the glottal image. As the glottis image contains saliva and light spots, which are classified as image noise, this noise was eliminated by erosion, dilation, opening, and closing operations to highlight the vocal fold area. We also used image processing to automatically identify the vocal fold region in order to quantify information from the glottal image, such as glottal area, vocal fold perimeter, vocal fold length, glottal width, and vocal fold angle. A quantized glottis image database was created to assist physicians in diagnosing glottal diseases more objectively.
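    The enhancement-threshold-morphology pipeline described above can be sketched with NumPy alone. This is a schematic reimplementation on a synthetic frame, not the authors' system; the image sizes and gray levels are invented.

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    lo = cdf[hist > 0][0]                    # CDF at the first occupied bin
    lut = np.round(255.0 * (cdf - lo) / (cdf[-1] - lo + 1e-12))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

def otsu(img):
    """Otsu threshold: maximize the between-class variance."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    w = p.cumsum()
    mu = (np.arange(256) * p).cumsum()
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1 - w))
    return int(np.nanargmax(sigma_b))

def erode(m):
    out = m.copy()
    for ax, s in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        out &= np.roll(m, s, axis=ax)        # 4-neighbour min (wraps at borders)
    return out

def dilate(m):
    out = m.copy()
    for ax, s in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        out |= np.roll(m, s, axis=ax)        # 4-neighbour max
    return out

# synthetic "glottal" frame: a bright region of interest plus a bright speck
img = np.full((32, 32), 30, np.uint8)
img[8:24, 8:24] = 220                        # region of interest
img[2, 2] = 220                              # isolated noise speck
eq = equalize(img)
mask = eq > otsu(eq)
clean = dilate(erode(mask))                  # opening removes the speck
```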

  1. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    NASA Astrophysics Data System (ADS)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high-level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well-understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To produce the final summary of the video, we provide a group of selected frames and an accompanying text description, allowing a user to perform a quick exploration of large unlabeled video databases.
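    The divergence-based segmentation step can be sketched directly from the abstract's description: treat hidden activations as unnormalised log probabilities, apply a softmax, and cut wherever the symmetrised Kullback-Leibler divergence between consecutive frames exceeds a threshold (the threshold value here is an assumption).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sym_kl(p, q, eps=1e-12):
    """Symmetrised Kullback-Leibler divergence between two distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def segment(hidden, thresh=1.0):
    """Treat each hidden vector as unnormalised log-probabilities and cut
    the video wherever consecutive frames' distributions diverge strongly."""
    dists = [softmax(h) for h in hidden]
    return [i + 1 for i in range(len(dists) - 1)
            if sym_kl(dists[i], dists[i + 1]) > thresh]

# two synthetic "contexts": activations dominated by unit 0, then by unit 1
ctx_a = [np.array([5.0 + d, 0.0, 0.0, 0.0]) for d in (0.0, 0.1, -0.1)]
ctx_b = [np.array([0.0, 5.0 + d, 0.0, 0.0]) for d in (0.0, 0.1, -0.1)]
cuts = segment(ctx_a + ctx_b)
```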

  2. Acoustic levitator for structure measurements on low temperature liquid droplets.

    PubMed

    Weber, J K R; Rey, C A; Neuefeind, J; Benmore, C J

    2009-08-01

    A single-axis acoustic levitator was constructed and used to levitate liquid and solid drops of 1-3 mm in diameter at temperatures in the range -40 to +40 degrees C. The levitator comprised (i) two acoustic transducers mounted on a rigid vertical support that was bolted to an optical breadboard, (ii) an acoustic power supply that controlled acoustic intensity, relative phase of the drive to the transducers, and could modulate the acoustic forces at frequencies up to 1 kHz, (iii) a video camera, and (iv) a system for providing a stream of controlled temperature gas flow over the sample. The acoustic transducers were operated at their resonant frequency of approximately 22 kHz and could produce sound pressure levels of up to 160 dB. The force applied by the acoustic field could be modulated to excite oscillations in the sample. Sample temperature was controlled using a modified Cryostream Plus and measured using thermocouples and an infrared thermal imager. The levitator was installed at x-ray beamline 11 ID-C at the Advanced Photon Source and used to investigate the structure of supercooled liquids.

  3. Acoustic levitator for structure measurements on low temperature liquid droplets

    NASA Astrophysics Data System (ADS)

    Weber, J. K. R.; Rey, C. A.; Neuefeind, J.; Benmore, C. J.

    2009-08-01

    A single-axis acoustic levitator was constructed and used to levitate liquid and solid drops of 1-3 mm in diameter at temperatures in the range -40 to +40 °C. The levitator comprised (i) two acoustic transducers mounted on a rigid vertical support that was bolted to an optical breadboard, (ii) an acoustic power supply that controlled acoustic intensity, relative phase of the drive to the transducers, and could modulate the acoustic forces at frequencies up to 1 kHz, (iii) a video camera, and (iv) a system for providing a stream of controlled temperature gas flow over the sample. The acoustic transducers were operated at their resonant frequency of ˜22 kHz and could produce sound pressure levels of up to 160 dB. The force applied by the acoustic field could be modulated to excite oscillations in the sample. Sample temperature was controlled using a modified Cryostream Plus and measured using thermocouples and an infrared thermal imager. The levitator was installed at x-ray beamline 11 ID-C at the Advanced Photon Source and used to investigate the structure of supercooled liquids.

  4. Laser Imaging of Airborne Acoustic Emission by Nonlinear Defects

    NASA Astrophysics Data System (ADS)

    Solodov, Igor; Döring, Daniel; Busse, Gerd

    2008-06-01

    Strongly nonlinear vibrations of near-surface fractured defects driven by an elastic wave radiate acoustic energy into adjacent air in a wide frequency range. The variations of pressure in the emitted airborne waves change the refractive index of air thus providing an acoustooptic interaction with a collimated laser beam. Such an air-coupled vibrometry (ACV) is proposed for detecting and imaging of acoustic radiation of nonlinear spectral components by cracked defects. The photoelastic relation in air is used to derive induced phase modulation of laser light in the heterodyne interferometer setup. The sensitivity of the scanning ACV to different spatial components of the acoustic radiation is analyzed. The animated airborne emission patterns are visualized for the higher harmonic and frequency mixing fields radiated by planar defects. The results confirm a high localization of the nonlinear acoustic emission around the defects and complicated directivity patterns appreciably different from those observed for fundamental frequencies.

  5. What do we do with all this video? Better understanding public engagement for image and video annotation

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions are being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  6. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    The exclusive charge-coupled device (CCD) camera system for the endoscope and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and electronic fiberscopy system are at least US Dollars 10,000 and US Dollars 30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US Dollars 1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  7. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    PubMed

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
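    Maximizing the correlation between two feature sets is the classical canonical correlation objective; a minimal CCA sketch (a simplification of the paper's joint subspace learning, with synthetic data standing in for the audiovisual and fMRI-derived features) looks like this.

```python
import numpy as np

def cca_first(X, Y, reg=1e-8):
    """First canonical pair: directions a, b maximizing corr(X a, Y b)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T   # whitened cross-cov
    U, S, Vt = np.linalg.svd(K)
    a = np.linalg.solve(Lx.T, U[:, 0])
    b = np.linalg.solve(Ly.T, Vt[0])
    return a, b, S[0]          # S[0] is the first canonical correlation

# two feature sets sharing one latent signal z
rng = np.random.default_rng(1)
z = rng.standard_normal(500)
X = np.column_stack([z + 0.05 * rng.standard_normal(500),
                     rng.standard_normal(500),
                     rng.standard_normal(500)])
Y = np.column_stack([z + 0.05 * rng.standard_normal(500),
                     rng.standard_normal(500)])
a, b, rho = cca_first(X, Y)
```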

  8. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed a thermal image at first was presented to the observer in the eye piece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market output standards changed to digital formats a decade ago with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs and a slower growth of pixel number of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is not sufficient any longer. The paper discusses the state-of-the-art compression and streaming solutions for TIs.

  9. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the sixth quarter of this research project the research team developed a method and the experimental procedures for acquiring the data needed for ultrasonic tomography of rock core samples under triaxial stress conditions as outlined in Task 10. Traditional triaxial compression experiments, where compressional and shear wave velocities are measured, provide little or no information about the internal spatial distribution of mechanical damage within the sample. The velocities measured from platen to platen or sensor to sensor reflect an averaging of all the velocities occurring along that particular raypath across the boundaries of the rock. The research team is attempting to develop and refine a laboratory equivalent of seismic tomography for use on rock samples deformed under triaxial stress conditions. Seismic tomography, utilized for example in crosswell tomography, allows an imaging of the velocities within a discrete zone within the rock. Ultrasonic or acoustic tomography is essentially the extension of that field technology applied to rock samples deforming in the laboratory at high pressures. This report outlines the technical steps and procedures for developing this technology for use on weak, soft chalk samples. Laboratory tests indicate that the chalk samples exhibit major changes in compressional and shear wave velocities during compaction. Since chalk is the rock type responsible for the severe subsidence and compaction in the North Sea it was selected for the first efforts at tomographic imaging of soft rocks. Field evidence from the North Sea suggests that compaction, which has resulted in over 30 feet of subsidence to date, is heterogeneously distributed within the reservoir. The research team will attempt to image this very process in chalk samples. The initial tomographic studies (Scott et al., 1994a,b; 1998) were accomplished on well cemented, competent rocks such as Berea sandstone. The extension of the technology to weaker
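    The forward and inverse sides of travel-time tomography can be sketched as a linear system: travel times are path-length-weighted sums of cell slownesses. The toy row/column ray geometry below is deliberately underdetermined (a real crosswell or core setup uses many oblique raypaths), so only the travel-time fit is checked.

```python
import numpy as np

n, cell = 4, 0.01                       # 4x4 grid of 1 cm cells
s_true = np.full((n, n), 1 / 2500.0)    # background slowness (s/m)
s_true[1:3, 1:3] = 1 / 2000.0           # slower, "damaged" interior zone

# one straight ray per row and per column; entries are path length per cell
rays = []
for i in range(n):
    r = np.zeros((n, n)); r[i, :] = cell; rays.append(r.ravel())
for j in range(n):
    r = np.zeros((n, n)); r[:, j] = cell; rays.append(r.ravel())
L = np.array(rays)

t_obs = L @ s_true.ravel()              # synthetic travel times

# least-squares slowness image (minimum-norm: 8 rays, 16 unknowns)
s_est, *_ = np.linalg.lstsq(L, t_obs, rcond=None)
```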

  10. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    NASA Astrophysics Data System (ADS)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which builds on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, the deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously attain a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods in low-bitrate transmission settings.
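    The two building blocks, patch clustering and a linear autoencoder, can be sketched with NumPy. The optimal linear autoencoder is given in closed form by PCA, which with a full-rank code reconstructs exactly, mirroring the zero-reconstruction-error property claimed for the DLA; all data below are synthetic.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm, deterministically seeded from the first k points."""
    C = X[:k].astype(float).copy()
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return lab, C

def linear_autoencoder(X, dim):
    """Optimal linear AE = PCA: encoder rows are top right singular vectors."""
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:dim]                       # encoder; the decoder is W.T
    code = (X - mu) @ W.T              # 1-D code vector per patch
    recon = code @ W + mu
    return code, recon

# two well-separated groups of 2-D "patches" for the clustering stage
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
lab, C = kmeans(X, 2)

# full-rank linear autoencoder on random 8-D patch vectors: exact reconstruction
P = np.random.default_rng(0).standard_normal((20, 8))
code, recon = linear_autoencoder(P, dim=8)
```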

  11. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameters measurements allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. 
This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self

  12. Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinton, Gianmarco

    Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that can reduce the physics to systems that have a low computational cost. Here a maximalist approach is taken and the full three dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations for the nonlinear acoustic wave equation can be used to generate physically realistic two and three dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is simulated for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation. It is shown that even though simulating the full propagation physics is computationally
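    A minimal finite-difference sketch conveys the flavor of such simulations: a second-order scheme for the linear 1-D wave equation (the paper's solver is fully three-dimensional and nonlinear, with attenuation and heterogeneity). With zero initial velocity the pulse splits into two halves, and the right-going half covers c·nt·dt grid cells.

```python
import numpy as np

c, dx = 1500.0, 1e-3            # sound speed (m/s), grid step (m)
dt = 0.5 * dx / c               # time step at Courant number 0.5
nx, nt = 400, 600
r2 = (c * dt / dx) ** 2         # = 0.25

x = np.arange(nx) * dx
p = np.exp(-((x - 50 * dx) / (5 * dx)) ** 2)   # Gaussian pulse at node 50
p_prev = p.copy()               # zero initial velocity: pulse splits in two

# leapfrog update p_next = 2p - p_prev + r2 * laplacian(p)
for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]
    p_next = 2 * p - p_prev + r2 * lap
    p_prev, p = p, p_next

# the right-going half travels c*nt*dt = 300 grid cells: node 50 -> ~350
peak = int(np.argmax(p))
```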

  13. Phase Time and Envelope Time in Time-Distance Analysis and Acoustic Imaging

    NASA Technical Reports Server (NTRS)

    Chou, Dean-Yi; Duvall, Thomas L.; Sun, Ming-Tsung; Chang, Hsiang-Kuang; Jimenez, Antonio; Rabello-Soares, Maria Cristina; Ai, Guoxiang; Wang, Gwo-Ping; Goode, Philip; Marquette, William

    1999-01-01

    Time-distance analysis and acoustic imaging are two related techniques for probing the local properties of the solar interior. In this study, we discuss the relation of phase time and envelope time between the two techniques. The location of the envelope peak of the cross-correlation function in time-distance analysis is identified as the travel time of the wave packet formed by modes with the same ω/ℓ. The phase time of the cross-correlation function provides information about the phase change accumulated along the wave path, including the phase change at the boundaries of the mode cavity. The acoustic signals constructed with the technique of acoustic imaging contain both phase and intensity information. The phase of the constructed signals can be studied by computing the cross-correlation function between time series constructed from ingoing and outgoing waves. In this study, we use the data taken with the Taiwan Oscillation Network (TON) instrument and the Michelson Doppler Imager (MDI) instrument. The analysis is carried out for the quiet Sun. We use the relation of envelope time versus distance measured in time-distance analysis to construct the acoustic signals in the acoustic imaging analysis. The phase time of the cross-correlation function of the constructed ingoing and outgoing time series is twice the difference between the phase time and envelope time in time-distance analysis, as predicted. The envelope peak of the cross-correlation function between the constructed ingoing and outgoing time series is located at zero time, as predicted, for one-bounce results at 3 mHz for all four data sets and for two-bounce results at 3 mHz for two TON data sets; for the other cases it deviates from zero. The cause of this deviation of the envelope peak from zero is not known.
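    The distinction between envelope time and phase time of a cross-correlation function can be sketched with synthetic signals. The wave packets, sampling, and 400 s delay below are illustrative stand-ins, not the TON/MDI data; the envelope is obtained from the analytic signal via an FFT-based Hilbert transform.

```python
import numpy as np

# Toy illustration: envelope (group) time vs. phase time of a
# cross-correlation between two Gaussian wave packets with a 3 mHz carrier.
fs = 1.0                         # one sample per second
t = np.arange(0.0, 3000.0, 1.0 / fs)
f0 = 0.003                       # 3 mHz carrier frequency
outgoing = np.exp(-((t - 800) / 300) ** 2) * np.cos(2 * np.pi * f0 * t)
ingoing = np.exp(-((t - 1200) / 300) ** 2) * np.cos(2 * np.pi * f0 * (t - 400))

cc = np.correlate(ingoing, outgoing, mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs

# envelope of the cross-correlation via the analytic signal (FFT Hilbert)
spec = np.fft.fft(cc)
h = np.zeros(len(cc))
h[0] = 1.0
h[1:(len(cc) + 1) // 2] = 2.0    # keep positive frequencies only
envelope = np.abs(np.fft.ifft(spec * h))

t_envelope = lags[np.argmax(envelope)]  # group travel time of the packet
t_phase = lags[np.argmax(cc)]           # phase time (carrier alignment)
print(t_envelope, t_phase)              # both near the 400 s delay
```

    With noise-free synthetic packets both times coincide; in the solar data discussed above their difference carries the accumulated phase information.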

  14. Acoustic dipole radiation based conductivity image reconstruction for magnetoacoustic tomography with magnetic induction

    NASA Astrophysics Data System (ADS)

    Sun, Xiaodong; Zhang, Feng; Ma, Qingyu; Tu, Juan; Zhang, Dong

    2012-01-01

    Based on the acoustic dipole radiation theory, a tomographic conductivity image reconstruction algorithm is developed for magnetoacoustic tomography with magnetic induction (MAT-MI) in a cylindrical measurement configuration. It has been experimentally proved with a tissue-like phantom that not only the configuration but also the inner conductivity distribution can be reconstructed without any borderline stripe. Furthermore, the spatial resolution can also be improved without the limitation of acoustic vibration. The favorable results provide solid verification of the feasibility of conductivity image reconstruction and suggest potential applications of MAT-MI in the area of medical electrical impedance imaging.

  15. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with its laser profiler and digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the scene as a point cloud. It is widely used by many transportation agencies to survey street views and roadside transportation infrastructure, such as traffic signs and guardrails. Although much literature on traffic sign detection is available, existing methods focus on either the Lidar or the imagery data of traffic signs alone. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of the overhead and roadside traffic signs can be obtained according to the setup specifications of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the rate of detecting
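    The RANSAC plane-fitting step for sign candidates can be sketched as follows. The synthetic point cloud, iteration count, and inlier tolerance are illustrative, not the paper's settings.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=None):
    """Fit a plane to 3-D points with RANSAC; returns (normal, d), inliers.

    Each iteration fits a plane through 3 random points and counts the
    points within `tol` of it; the plane with the most inliers wins.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal = normal / norm
        d = -normal @ p0                  # plane: normal . x + d = 0
        inliers = np.abs(points @ normal + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# synthetic "sign" plane at z = 2 m plus random clutter points
rng = np.random.default_rng(0)
sign = np.column_stack([rng.uniform(-0.5, 0.5, 300),
                        rng.uniform(-0.5, 0.5, 300),
                        2.0 + rng.normal(0.0, 0.01, 300)])
clutter = rng.uniform(-3.0, 3.0, (100, 3))
(normal, d), inliers = ransac_plane(np.vstack([sign, clutter]), seed=1)
print(inliers.sum())  # roughly the 300 plane points are recovered
```

    In the pipeline described above, the recovered plane would then be projected into the video log image to form a sign ROI.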

  16. Intravascular ultrasound catheter to enhance microbubble-based drug delivery via acoustic radiation force.

    PubMed

    Kilroy, Joseph P; Klibanov, Alexander L; Wamhoff, Brian R; Hossack, John A

    2012-10-01

    Previous research has demonstrated that acoustic radiation force enhances intravascular microbubble adhesion to blood vessels in the presence of flow for molecular-targeted ultrasound imaging and drug delivery. A prototype acoustic radiation force intravascular ultrasound (ARFIVUS) catheter was designed and fabricated to displace a microbubble contrast agent in flow representative of conditions encountered in the human carotid artery. The prototype ARFIVUS transducer was designed to match the resonance frequency of 1.4- to 2.6-μm-diameter microbubbles modeled by an experimentally verified 1-D microbubble acoustic radiation force translation model. The transducer element was an elongated Navy Type I (hard) lead zirconate titanate (PZT) ceramic designed to operate at 3 MHz. Fabricated devices operated with center frequencies of 3.3 and 3.6 MHz with -6-dB fractional bandwidths of 55% and 50%, respectively. Microbubble translation velocities as high as 0.86 m/s were measured using a high-speed streak camera when insonating with the ARFIVUS transducer. Finally, the prototype was used to displace microbubbles in a flow phantom while imaging with a commercial 45-MHz IVUS imaging transducer. A sustained increase of 31 dB in average video intensity was measured following insonation with the ARFIVUS, indicating microbubble accumulation resulting from the application of acoustic radiation force.

  17. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array combined with a digital camera. Computer vision and acoustic time-of-arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. The technique was applied to a spherical microphone array, and a mean difference of 3 mm was found between the coordinates given by this calibration technique and those measured using a precision mechanical method.
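    The time-of-arrival (TOA) part of such a self-calibration can be sketched as a small least-squares problem: recover a microphone's coordinates from TOAs of sound events at known positions. The event geometry, microphone position, and noise-free TOAs below are illustrative; the paper additionally fuses computer vision.

```python
import numpy as np

# Recover a microphone position from acoustic time-of-arrival measurements
# to sound events at known coordinates (illustrative multilateration sketch).
c = 343.0                                  # speed of sound in air, m/s
events = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                   [1, 1, 0], [1, 0, 1]], dtype=float)
mic_true = np.array([0.3, 0.5, 0.2])
toa = np.linalg.norm(events - mic_true, axis=1) / c   # noise-free TOAs

# |e_i - m|^2 = (c t_i)^2; subtracting the first equation cancels |m|^2
# and leaves a linear system A m = b in the microphone coordinates m.
r2 = (c * toa) ** 2
A = -2.0 * (events[1:] - events[0])
b = r2[1:] - r2[0] - (np.sum(events[1:] ** 2, axis=1) - np.sum(events[0] ** 2))
mic_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(mic_est)  # recovers (0.3, 0.5, 0.2)
```

    With noisy TOAs the same least-squares solve simply returns the best-fitting position rather than the exact one.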

  18. Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging

    PubMed Central

    Izquierdo, Alberto; Villacorta, Juan José; del Val Puente, Lara; Suárez, Luis

    2016-01-01

    This paper proposes a scalable and multi-platform framework for signal acquisition and processing which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of the MEMS sensors was performed, and the beam patterns of a module based on an 8 × 8 planar array, and of several clusters of modules, were obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms they share. Finally, a set of acoustic images obtained from sound reflected from a person is presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. PMID:27727174
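    The beamforming at the heart of such a system can be sketched for a single frequency bin: delay-and-sum reduces to phase-weighting the microphone signals. The 8 mm pitch, 8 kHz bin, and scan over elevation only are assumptions for illustration; the actual system beamforms wideband signals across FFT bins.

```python
import numpy as np

# Narrowband delay-and-sum sketch for an 8 x 8 planar microphone array.
c = 343.0                  # speed of sound in air, m/s
f = 8000.0                 # analysis frequency (one FFT bin), Hz
pitch = 0.008              # assumed 8 mm microphone spacing
gx, gy = np.meshgrid(np.arange(8) * pitch, np.arange(8) * pitch)
pos = np.column_stack([gx.ravel(), gy.ravel()])   # 64 microphone positions

def steering_vector(theta):
    """Far-field phase weights for a source at elevation theta (azimuth 0)."""
    k = 2.0 * np.pi * f / c
    u = np.array([np.sin(theta), 0.0])            # direction cosines in-plane
    return np.exp(1j * k * (pos @ u))

# a plane wave arriving from theta = 0.3 rad, scanned against candidate angles
incoming = steering_vector(0.3)
angles = np.linspace(0.0, 0.6, 61)
power = np.array([np.abs(np.conj(steering_vector(a)) @ incoming)
                  for a in angles])
print(angles[np.argmax(power)])  # beam power peaks at the source angle, 0.3
```

    Scanning both azimuth and elevation over a grid of steering directions, one power value per direction, yields the acoustic image.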

  19. Chirped or time modulated excitation compared to short pulses for photoacoustic imaging in acoustic attenuating media

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Motz, C.; Lang, O.; Berer, T.; Huemer, M.

    2018-02-01

    In photoacoustic imaging, optically generated acoustic waves transport information about embedded structures to the sample surface. Usually, short laser pulses are used for the acoustic excitation. Acoustic attenuation increases at higher frequencies, which reduces the bandwidth and limits the spatial resolution. One could think of more efficient waveforms than single short pulses, such as pseudo-noise codes, chirped, or harmonic excitation, which could enable a higher information transfer from the sample's interior to its surface by acoustic waves. We used a linear state-space model to discretize the wave equation, here Stokes' equation, but the method could be used for any other linear wave equation. Linear estimators and a non-linear function inversion were applied to the measured surface data for one-dimensional image reconstruction. The proposed estimation method allows optimizing the temporal modulation of the excitation laser such that the accuracy and spatial resolution of the reconstructed image are maximized. We have restricted ourselves to one-dimensional models, as for higher dimensions the one-dimensional reconstruction, which corresponds to the acoustic wave without attenuation, can be used as input for any ultrasound imaging method, such as back-projection or time-reversal methods.

  20. Display Considerations For Intravascular Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Gessert, James M.; Krinke, Charlie; Mallery, John A.; Zalesky, Paul J.

    1989-08-01

    A display has been developed for intravascular ultrasonic imaging. The primary design goal of this display is to provide guidance information for therapeutic interventions such as balloons, lasers, and atherectomy devices. Design considerations include catheter configuration, anatomy, acoustic properties of normal and diseased tissue, the catheterization laboratory and operating room environment, acoustic and electrical safety, acoustic data sampling issues, and logistical support such as image measurement, storage, and retrieval. Intravascular imaging is in an early stage of development, so design flexibility and expandability are very important. The display which has been developed is capable of acquisition and display of grey scale images at rates varying from static B-scans to 30 frames per second. It stores images in a 640 × 480 × 8-bit format and is capable of black-and-white as well as color display in multiple video formats. The design is based on the industry-standard PC-AT architecture and consists of two AT-style circuit cards, one for high-speed sampling and the other for scan conversion, graphics, and video generation.

  1. Vector Acoustics, Vector Sensors, and 3D Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Lindwall, D.

    2007-12-01

    Vector acoustic data has two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. The vector acoustic sensor measures the particle motion due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom, and sides. Without resorting to the usual methods of seismic imaging, which in this case would be only two-dimensional and rely entirely on a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
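    The "simple trigonometric calculation" mentioned above can be written in closed form: given the arrival direction measured by the vector sensor and the total travel time, the scatterer position follows directly. The geometry and sound speed below are illustrative, and the "measurements" are synthesized from a known target for the demonstration.

```python
import numpy as np

# Locating a scatterer from a vector sensor: the sensor gives the arrival
# direction u of the scattered wave; with known source/sensor positions and
# total travel time T, the reflection point follows in closed form.
c = 1482.0                         # sound speed in the tank, m/s
src = np.array([0.0, 0.0, 0.0])    # source position (assumed)
rcv = np.array([1.0, 0.0, 0.0])    # vector sensor / hydrophone position

# synthesize the measurements from a known target (for the demo only)
target = np.array([0.8, 0.6, 0.0])
T = (np.linalg.norm(target - src) + np.linalg.norm(target - rcv)) / c
u = (target - rcv) / np.linalg.norm(target - rcv)  # measured direction

# the scatterer lies at P = rcv + d*u with |src - P| = c*T - d; expanding
# |w + d*u|^2 = (L - d)^2 with w = rcv - src and L = c*T gives d directly
L = c * T
w = rcv - src
d = (L ** 2 - w @ w) / (2.0 * (L + u @ w))
print(rcv + d * u)  # recovers the target at (0.8, 0.6, 0.0)
```

    The collocated hydrophone resolves the sign ambiguity in u; after that, each echo yields one reflection point by this formula.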

  2. Extracting Maximum Total Water Levels from Video "Brightest" Images

    NASA Astrophysics Data System (ADS)

    Brown, J. A.; Holman, R. A.; Stockdon, H. F.; Plant, N. G.; Long, J.; Brodie, K.

    2016-02-01

    An important parameter for predicting storm-induced coastal change is the maximum total water level (TWL). Most studies estimate the TWL as the sum of slowly varying water levels, including tides and storm surge, and the extreme runup parameter R2%, which includes wave setup and swash motions over minutes to seconds. Typically, R2% is measured using video remote sensing data, where cross-shore timestacks of pixel intensity are digitized to extract the horizontal runup timeseries. However, this technique must be repeated at multiple alongshore locations to resolve alongshore variability, and can be tedious and time consuming. We seek an efficient, video-based approach that yields a synoptic estimate of TWL that accounts for alongshore variability and can be applied during storms. In this work, the use of a video product termed the "brightest" image is tested; this represents the highest intensity of each pixel captured during a 10-minute collection period. Image filtering and edge detection techniques are applied to automatically determine the shoreward edge of the brightest region (i.e., the swash zone) at each alongshore pixel. The edge represents the horizontal position of the maximum TWL along the beach during the collection period, and is converted to vertical elevations using measured beach topography. This technique is evaluated using video and topographic data collected every half-hour at Duck, NC, during differing hydrodynamic conditions. Relationships between the maximum TWL estimates from the brightest images and various runup statistics computed using concurrent runup timestacks are examined, and errors associated with mapping the horizontal results to elevations are discussed. This technique is invaluable, as it can be used to routinely estimate maximum TWLs along a coastline from a single brightest image product, and provides a means for examining alongshore variability of TWLs at high alongshore resolution. These advantages will be useful in
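    The per-column edge extraction on the "brightest" image can be sketched with a synthetic example: for each alongshore column, take the shoreward-most pixel whose time-maximum intensity exceeds a swash-brightness threshold. The image, threshold, and edge shape below are illustrative stand-ins for the filtering and edge detection used in the study.

```python
import numpy as np

# Synthetic "brightest" image: bright swash region up to a sinusoidally
# varying edge, dark dry beach beyond it (cross-shore rows x alongshore cols).
rng = np.random.default_rng(3)
rows, cols = 100, 50
brightest = rng.uniform(0.0, 0.3, (rows, cols))          # dark dry beach
edge_true = 60 + (10 * np.sin(np.linspace(0, 3, cols))).astype(int)
for j in range(cols):
    brightest[:edge_true[j], j] += 0.6                   # bright swash zone

# shoreward edge = last above-threshold pixel in each alongshore column
threshold = 0.5
edge = np.array([np.nonzero(brightest[:, j] > threshold)[0].max()
                 for j in range(cols)])
print(np.array_equal(edge, edge_true - 1))  # True: edge recovered exactly
```

    Mapping these pixel rows through the camera geometry and measured beach topography then converts the edge to maximum-TWL elevations.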

  3. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In non-blind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling both memory capacity and communications bandwidth requirements. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
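    The Arnold-transform scrambling step mentioned above can be sketched for a square watermark. The 4 × 4 watermark and iteration counts are illustrative; the key property used for decryption is that the cat map is periodic on a fixed grid size.

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold (cat-map) scrambling of a square image: each pixel (x, y)
    moves to ((x + y) mod n, (x + 2y) mod n), a bijection on the grid."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# the cat map is periodic: on a 4 x 4 grid its period is 3, so scrambling
# once and then applying two further iterations restores the watermark
wm = np.arange(16).reshape(4, 4)
scrambled = arnold(wm, 1)        # "encrypted" watermark
recovered = arnold(scrambled, 2) # "decryption" completes the period
print(np.array_equal(recovered, wm))  # True
```

    In a watermarking pipeline, the scrambled watermark is what gets embedded into the DWT coefficients, so a cropped or attacked extraction spreads errors evenly after descrambling.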

  4. Acoustic noise reduction in T1- and proton-density-weighted turbo spin-echo imaging.

    PubMed

    Ott, Martin; Blaimer, Martin; Breuer, Felix; Grodzki, David; Heismann, Björn; Jakob, Peter

    2016-02-01

    The goal was to reduce acoustic noise levels in T1-weighted and proton-density-weighted turbo spin-echo (TSE) sequences, which typically reach acoustic noise levels up to 100 dB(A) in clinical practice. Five acoustic noise reduction strategies were combined: (1) gradient ramps and shapes were changed from trapezoidal to triangular, (2) variable-encoding-time imaging was implemented to relax the phase-encoding gradient timing, (3) RF pulses were adapted to avoid the need for reversing the polarity of the slice-rewinding gradient, (4) readout bandwidth was increased to provide more time for gradient activity on other axes, (5) the number of slices per TR was reduced to limit the total gradient activity per unit time. We evaluated the influence of each measure on the acoustic noise level, and conducted in vivo measurements on a healthy volunteer. Sound recordings were taken for comparison. An overall acoustic noise reduction of up to 16.8 dB(A) was obtained by the proposed strategies (1-4) and the acquisition of half the number of slices per TR only. Image quality in terms of SNR and CNR was found to be preserved. The proposed measures in this study allowed a threefold reduction in the acoustic perception of T1-weighted and proton-density-weighted TSE sequences compared to a standard TSE acquisition. This could be achieved without visible degradation of image quality, showing the potential to improve patient comfort and scan acceptability.

  5. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation-an exemplar-based clustering algorithm-achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.

  6. Improvements to video imaging detection for dilemma zone protection.

    DOT National Transportation Integrated Search

    2009-02-01

    The use of video imaging vehicle detection systems (VIVDS) at signalized intersections in Texas has increased significantly, due primarily to safety issues and costs. Installing non-intrusive detectors at intersections is almost always safer than ...

  7. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc., a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  8. Acoustical standards in engineering acoustics

    NASA Astrophysics Data System (ADS)

    Burkhard, Mahlon D.

    2004-05-01

    The Engineering Acoustics Technical Committee is concerned with the evolution and improvement of acoustical techniques and apparatus, and with the promotion of new applications of acoustics. As cited in the Membership Directory and Handbook (2002), the interest areas include transducers and arrays; underwater acoustic systems; acoustical instrumentation and monitoring; applied sonics, promotion of useful effects, information gathering and transmission; audio engineering; acoustic holography and acoustic imaging; acoustic signal processing (equipment and techniques); and ultrasound and infrasound. Evident connections between engineering and standards are the needs for calibration, consistent terminology, uniform presentation of data, reference levels, and design targets for product development. Thus, for the acoustical engineer, standards are a tool for practice, for communication, and for comparing one's efforts with those of others. Development of many standards depends on knowledge of the way products are put together for the marketplace, and acoustical engineers provide important input to the development of standards. Acoustical engineers and members of the Engineering Acoustics arm of the Society both benefit from and contribute to the Acoustical Standards of the Acoustical Society.

  9. Identifying Vulnerable Plaques with Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Doherty, Joshua Ryan

    The rupture of arterial plaques is the most common cause of ischemic complications including stroke, the fourth leading cause of death and the number one cause of long-term disability in the United States. Unfortunately, because conventional diagnostic tools fail to identify plaques that confer the highest risk, often a disabling stroke and/or sudden death is the first sign of disease. A diagnostic method capable of characterizing plaque vulnerability would likely enhance the predictive ability and ultimately the treatment of stroke before the onset of clinical events. This dissertation evaluates the hypothesis that Acoustic Radiation Force Impulse (ARFI) imaging can noninvasively identify lipid regions, which have been shown to increase a plaque's propensity to rupture, within carotid artery plaques in vivo. The work detailed herein describes development efforts and results from simulations and experiments that were performed to evaluate this hypothesis. To first demonstrate feasibility and evaluate potential safety concerns, finite-element method simulations are used to model the response of carotid artery plaques to an acoustic radiation force excitation. Lipid pool visualization is shown to vary as a function of lipid pool geometry and stiffness. A comparison of the resulting Von Mises stresses indicates that stresses induced by an ARFI excitation are three orders of magnitude lower than those induced by blood pressure. This thesis also presents the development of a novel pulse inversion harmonic tracking method to reduce clutter-imposed errors in ultrasound-based tissue displacement estimates. This method is validated in phantoms and was found to reduce bias and jitter displacement errors for a marked improvement in image quality in vivo. 
Lastly, this dissertation presents results from a preliminary in vivo study that compares ARFI imaging derived plaque stiffness with spatially registered composition determined by a Magnetic Resonance Imaging (MRI) gold standard

  10. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of both the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.

  11. Acoustic imaging and mirage effects with high transmittance in a periodically perforated metal slab

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng-Dong; Wang, Yue-Sheng; Zhang, Chuanzeng

    2016-11-01

    In this paper, we present a high-quality superlens to focus acoustic waves using a periodically perforated metallic structure which is made of zinc and immersed in water. By changing a geometrical parameter gradually, a kind of gradient-index phononic crystal lens is designed to attain the mirage effects. The acoustic waves can propagate along an arc-shaped trajectory which is precisely controlled by the angle and frequency of the incident waves. The negative refraction imaging effect depends delicately on the transmittance of the solid structure. The acoustic impedance matching between the solid and the liquid proposed in this article, which is determined by the effective density and group velocity of the unit-cell, is significant for overcoming the inefficiency problem of acoustic devices. This study focuses on how to obtain the high transmittance imaging and mirage effects based on the adequate material selection and geometrical design.

  12. Photo-induced ultrasound microscopy for photo-acoustic imaging of non-absorbing specimens

    NASA Astrophysics Data System (ADS)

    Tcarenkova, Elena; Koho, Sami V.; Hänninen, Pekka E.

    2017-08-01

    Photo-Acoustic Microscopy (PAM) has attracted great interest for in vivo imaging due to its ability to preserve the near-diffraction-limited spatial resolution of optical microscopes whilst extending the penetration depth to the mm range. Another advantage of PAM is that it is a label-free technique: any substance that absorbs the PAM excitation laser light can be viewed. However, not all sample structures of interest absorb sufficiently to provide contrast for imaging. This work describes a novel imaging method that makes it possible to visualize optically transparent samples that lack intrinsic photo-acoustic contrast, without the addition of contrast agents. A thin, strongly light-absorbing layer next to the sample is used to generate a strong ultrasonic signal. This signal, when recorded from the opposite side, contains ultrasonic transmission information about the sample, and thus the method can be used to obtain an ultrasound transmission image with any PAM.

  13. Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects.

    PubMed

    Amaro, Edson; Williams, Steve C R; Shergill, Sukhi S; Fu, Cynthia H Y; MacSweeney, Mairead; Picchioni, Marco M; Brammer, Michael J; McGuire, Philip K

    2002-11-01

    Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem in the study of the auditory pathway and of language generally. The scanner's acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with the stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the acoustic noise at its source would be ideal, substantial hardware modifications to the current base of installed MRI systems would be required. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisitions: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate a higher blood oxygen level-dependent (BOLD) response to a simple auditory cue when compared to a conventional image acquisition. Copyright 2002 Wiley-Liss, Inc.

  14. Wideband acoustic records of explosive volcanic eruptions at Stromboli: New insights on the explosive process and the acoustic source

    NASA Astrophysics Data System (ADS)

    Goto, A.; Ripepe, M.; Lacanna, G.

    2014-06-01

    Wideband acoustic waves, both inaudible infrasound (<20 Hz) and audible component (>20 Hz), generated by strombolian eruptions were recorded at 5 kHz and correlated with video images. The high sample rate revealed that in addition to the known initial infrasound, the acoustic signal includes an energetic high-frequency (typically >100 Hz) coda. This audible signal starts before the positive infrasound onset goes negative. We suggest that the infrasonic onset is due to magma doming at the free surface, whereas the immediate high-frequency signal reflects the following explosive discharge flow. During strong gas-rich eruptions, positively skewed shockwave-like components with sharp compression and gradual depression appeared. We suggest that successive bursting of overpressurized small bubbles and the resultant volcanic jets sustain the highly gas-rich explosions and emit the audible sound. When the jet is supersonic, microexplosions of ambient air entrained in the hot jet emit the skewed waveforms.

  15. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of video collections growing to thousands of hours, technology is needed to browse segments effectively in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, with compaction as high as 20:1, and yet retains the essential content of the original segment.

  16. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2000-12-01

    The worldwide research efforts in the area of image and video retrieval have so far concentrated on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content, such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or the 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  17. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have so far concentrated on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content, such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or the 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  18. Imaging of transient surface acoustic waves by full-field photorefractive interferometry.

    PubMed

    Xiong, Jichuan; Xu, Xiaodong; Glorieux, Christ; Matsuda, Osamu; Cheng, Liping

    2015-05-01

    A stroboscopic full-field imaging technique based on photorefractive interferometry for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera is presented. The photorefractive buildup of the space charge field during and after probe laser pulses is simulated numerically. The resulting anisotropic diffraction upon the refractive index grating and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam are modeled theoretically. The method is experimentally demonstrated by full-field imaging of the propagation of photoacoustically generated surface acoustic waves (SAWs) with a temporal resolution of nanoseconds. The surface acoustic wave propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser yields the capability of imaging SAWs with frequencies up to 60 MHz.
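    The two quoted resolution figures follow from simple arithmetic on the numbers in the abstract; the 1/(2τ) frequency bound is a rough rule of thumb on my part, not the authors' derivation:

```python
def pixel_pitch_um(fov_mm, n_pixels):
    """Field of view divided by pixel count along one axis."""
    return fov_mm * 1000.0 / n_pixels

def max_resolvable_freq_mhz(pulse_ns):
    """Rough rule of thumb: a probe pulse of duration tau blurs surface
    oscillations faster than about 1/(2*tau)."""
    return 1.0 / (2.0 * pulse_ns * 1e-9) / 1e6

pitch = pixel_pitch_um(23.0, 696)     # ~33 um, as reported
fmax = max_resolvable_freq_mhz(8.0)   # ~62 MHz, consistent with the ~60 MHz limit
```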

  19. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    PubMed Central

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique biometric feature perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches and analyzed the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image; it can provide a useful reference in the literature on video sensor-based gait representation approaches. PMID:25574935
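    The best-known member of this family is the Gait Energy Image (GEI): a pixel-wise average of aligned binary silhouettes over one gait cycle. A minimal sketch with toy silhouettes (not a real gait dataset):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image (GEI): the pixel-wise mean of aligned binary
    silhouettes over one gait cycle."""
    return np.asarray(silhouettes, dtype=float).mean(axis=0)

# toy 4x4 silhouettes: a pixel present in every frame gets energy 1.0,
# a pixel present in half the frames gets 0.5
s1 = np.zeros((4, 4)); s1[1:3, 1:3] = 1.0
s2 = np.zeros((4, 4)); s2[1:3, 1:2] = 1.0
gei = gait_energy_image([s1, s2])
```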

  20. An echolocation model for the restoration of an acoustic image from a single-emission echo

    NASA Astrophysics Data System (ADS)

    Matsuo, Ikuo; Yano, Masafumi

    2004-12-01

    Bats can form a fine acoustic image of an object using frequency-modulated echolocation sounds. The acoustic image is an impulse response, known as a reflected-intensity distribution, which is composed of amplitude and phase spectra over a range of frequencies. However, bats detect only the amplitude spectrum, due to the low temporal resolution of their peripheral auditory system, and the frequency range of the emission is restricted. It is therefore necessary to restore the acoustic image from limited information. The amplitude spectrum varies with changes in the configuration of the reflected-intensity distribution, while the phase spectrum varies with changes in both its configuration and its location. Here, by introducing some reasonable constraints, a method is proposed for restoring an acoustic image from the echo. The configuration is extrapolated from the amplitude spectrum of the restricted frequency range by using the continuity condition of the amplitude spectrum at the minimum frequency of the emission and the minimum phase condition. Determining the location requires extracting the amplitude spectra that vary with location. For this purpose, Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates were used. The location is estimated from the temporal changes of the amplitude spectra.

  1. Photo-Acoustic Ultrasound Imaging to Distinguish Benign from Malignant Prostate Cancer

    DTIC Science & Technology

    2016-09-01

    from the inside out. Ultrasound imaging provides a basic view of the structure of the prostate while photoacoustic contrast is predicted to enhance...University Page 2 of 13 1. INTRODUCTION: Ultrasound imaging uses sound waves at frequencies above the human hearing range to image organs within the body...An ultrasound transducer delivers a pulse of acoustic energy into the area of interest and listens for the echoes which return as the sound waves

  2. Acoustic dipole radiation based electrical impedance contrast imaging approach of magnetoacoustic tomography with magnetic induction.

    PubMed

    Sun, Xiaodong; Fang, Dawei; Zhang, Dong; Ma, Qingyu

    2013-05-01

    Different from the theory of acoustic monopole spherical radiation, the acoustic dipole radiation based theory introduces the radiation pattern of Lorentz force induced dipole sources to describe the principle of magnetoacoustic tomography with magnetic induction (MAT-MI). Although two-dimensional (2D) simulations have been studied for cylindrical phantom models, layer effects of the dipole sources within the entire object along the z direction still need to be investigated to evaluate the performance of MAT-MI for different geometric specifications. The purpose of this work is to further verify the validity and generality of the acoustic dipole radiation based theory for MAT-MI with two new models of different shapes, dimensions, and conductivities. Based on the theory of acoustic dipole radiation, the principles of MAT-MI were analyzed with derived analytic formulae. 2D and 3D numerical studies for two new models, an aluminum foil and a cooked egg, were conducted to simulate acoustic pressures and the corresponding waveforms, and 2D images of the scanned layers were reconstructed with a simplified back projection algorithm from the waveforms collected around the models. The spatial resolution for conductivity boundary differentiation was also analyzed for different foil thicknesses. For comparison, two experimental measurements were conducted on a cylindrical aluminum foil phantom and a shell-peeled cooked egg. The collected waveforms and the reconstructed images of the scanned layers were obtained to verify the validity of the acoustic dipole radiation based theory for MAT-MI. Despite the difference between the 2D and 3D simulated pressures, the good consistency of the collected waveforms proves that wave clusters are generated by the abrupt pressure changes with bipolar vibration phases, representing the opposite polarities of the conductivity changes along the measurement direction. The configuration of the scanned layer can be reconstructed in terms of shape and size, and

  3. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  4. Object detection and imaging with acoustic time reversal mirrors

    NASA Astrophysics Data System (ADS)

    Fink, Mathias

    1993-11-01

    Focusing an acoustic wave on an object of unknown shape through an inhomogeneous medium of arbitrary geometry is a challenge in underground detection. Optimal detection and imaging of objects requires the development of such focusing techniques. The use of a time reversal mirror (TRM) represents an original solution to this problem. It realizes in real time a focusing process matched to the object shape, to the geometry of the acoustic interfaces, and to the geometry of the mirror. It is a self-adaptive technique which compensates for any geometrical distortions of the mirror structure as well as for diffraction and refraction effects through the interfaces. Two real-time 64- and 128-channel prototypes have been built in our laboratory, and TRM experiments demonstrating the TRM performance through inhomogeneous solid and liquid media are presented. Applications to medical therapy (kidney stone detection and destruction) and to nondestructive testing of metallurgical samples of different geometries are described. Extension of this study to underground detection and imaging will be discussed.

  5. High resolution and deep tissue imaging using a near infrared acoustic resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Moothanchery, Mohesh; Sharma, Arunima; Periyasamy, Vijitha; Pramanik, Manojit

    2018-02-01

    It is always a great challenge for purely optical techniques to maintain good resolution and imaging depth at the same time. Photoacoustic imaging is an emerging technique which can overcome this limitation through pulsed light illumination and acoustic detection. Here, we report a Near Infrared Acoustic-Resolution Photoacoustic Microscopy (NIR-AR-PAM) system with a 30 MHz transducer and 1064 nm illumination, which can achieve a lateral resolution of around 88 μm and an imaging depth of 9.2 mm. Compared to visible light, an NIR beam can penetrate deeper into biological tissue due to weaker optical attenuation. In this work, we also demonstrated the in vivo imaging capability of NIR-AR-PAM by near-infrared detection of the sentinel lymph node (SLN) with black ink as an exogenous photoacoustic contrast agent in a rodent model.
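    The reported 88 μm lateral resolution is consistent with the usual diffraction-limited estimate for acoustic-resolution PAM, ≈ 0.71·λ/NA, assuming a soft-tissue sound speed of 1540 m/s and a transducer numerical aperture around 0.41 (both are my assumptions; neither appears in the abstract):

```python
def acoustic_wavelength_um(freq_mhz, c=1540.0):
    """Acoustic wavelength in soft tissue (sound speed ~1540 m/s)."""
    return c / (freq_mhz * 1e6) * 1e6

def lateral_resolution_um(freq_mhz, na):
    """Diffraction-limited lateral resolution estimate, 0.71 * lambda / NA."""
    return 0.71 * acoustic_wavelength_um(freq_mhz) / na

wl = acoustic_wavelength_um(30.0)        # ~51 um at 30 MHz
res = lateral_resolution_um(30.0, 0.41)  # ~89 um, close to the reported 88 um
```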

  6. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article presents insights into the design, creation, and testing of a genuine metric designed for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, setting its core feature and functionality to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a service provider's strict evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines over a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  7. Transcranial fluorescence imaging of auditory cortical plasticity regulated by acoustic environments in mice.

    PubMed

    Takahashi, Kuniyuki; Hishida, Ryuichi; Kubota, Yamato; Kudoh, Masaharu; Takahashi, Sugata; Shibuki, Katsuei

    2006-03-01

    Functional brain imaging using the endogenous fluorescence of mitochondrial flavoprotein is useful for investigating mouse cortical activities through the intact skull, which is thin and sufficiently transparent in mice. We applied this method to investigate auditory cortical plasticity regulated by acoustic environments. Normal mice of the C57BL/6 strain, reared in various acoustic environments for at least 4 weeks after birth, were anaesthetized with urethane (1.7 g/kg, i.p.). Auditory cortical images of endogenous green fluorescence under blue light were recorded by a cooled CCD camera through the intact skull. Cortical responses elicited by tonal stimuli (5, 10 and 20 kHz) exhibited mirror-symmetrical tonotopic maps in the primary auditory cortex (AI) and the anterior auditory field (AAF). Depression of auditory cortical responses, in terms of response duration, was observed in sound-deprived mice compared with naïve mice reared in a normal acoustic environment. When mice were exposed to an environmental tonal stimulus at 10 kHz for more than 4 weeks after birth, the cortical responses were potentiated in a frequency-specific manner with respect to the peak amplitude of the responses in AI, but not the size of the responsive areas. Changes in AAF were less clear than those in AI. To determine which synapses are modified by acoustic environments, neural responses in cortical slices were investigated with endogenous fluorescence imaging. The vertical thickness of responsive areas after supragranular electrical stimulation was significantly reduced in slices obtained from sound-deprived mice. These results suggest that acoustic environments regulate the development of vertical intracortical circuits in the mouse auditory cortex.

  8. Standoff passive video imaging at 350 GHz with 251 superconducting detectors

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Gentry, Cale; Smirnov, Ilya; Ade, Peter; Beall, James; Cho, Hsiao-Mei; Dicker, Simon; Duncan, William; Halpern, Mark; Hilton, Gene; Irwin, Kent; Li, Dale; Paulter, Nicholas; Reintsema, Carl; Schwall, Robert; Tucker, Carole

    2014-06-01

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bomb belts and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) detectors makes them ideal for passive imaging of thermal signals at these wavelengths. We have built a 350 GHz video-rate imaging system using a large-format array of feedhorn-coupled TES bolometers. The system operates at a standoff distance of 16 m to 28 m with a spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector subarray, and will be expanded to contain four subarrays for a total of 1004 detectors. The system has been used to take video images which reveal the presence of weapons concealed beneath a shirt in an indoor setting. We present a summary of this work.
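    The quoted centimeter-scale resolution at 17 m standoff is consistent with a diffraction-limited optic of roughly 1.3 m aperture (the aperture is my assumption; it is not given in the abstract):

```python
def spot_size_cm(freq_ghz, range_m, aperture_m):
    """Diffraction-limited spot at standoff range: ~1.22 * lambda * R / D."""
    wavelength_m = 3.0e8 / (freq_ghz * 1e9)
    return 1.22 * wavelength_m * range_m / aperture_m * 100.0

# an assumed ~1.3 m aperture at 350 GHz and 17 m range gives a ~1.4 cm spot
spot = spot_size_cm(350.0, 17.0, 1.3)
```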

  9. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This space shuttle orbiter payload bay (PLB) video image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). The image is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  10. Echo planar imaging at 4 Tesla with minimum acoustic noise.

    PubMed

    Tomasi, Dardo G; Ernst, Thomas

    2003-07-01

    To minimize the acoustic sound pressure levels of single-shot echo planar imaging (EPI) acquisitions on high magnetic field MRI scanners. The resonance frequencies of gradient coil vibrations, which depend on the coil length and the elastic properties of the materials in the coil assembly, were measured using piezoelectric transducers. The frequency of the EPI-readout train was adjusted to avoid the frequency ranges of mechanical resonances. Our MRI system exhibited two sharp mechanical resonances (at 720 and 1220 Hz) that can increase vibrational amplitudes up to six-fold. A small adjustment of the EPI-readout frequency made it possible to reduce the sound pressure level of EPI-based perfusion and functional MRI scans by 12 dB. Normal vibrational modes of MRI gradient coils can dramatically increase the sound pressure levels during echo planar imaging (EPI) scans. To minimize acoustic noise, the frequency of EPI-readout trains and the resonance frequencies of gradient coil vibrations need to be different. Copyright 2003 Wiley-Liss, Inc.
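    The adjustment described above amounts to keeping the EPI readout fundamental (and its strong harmonics) away from the measured mechanical resonances. A sketch of that check, with an assumed guard band (the abstract states no specific value):

```python
RESONANCES_HZ = (720.0, 1220.0)  # mechanical resonances reported in the abstract
GUARD_HZ = 60.0                  # assumed guard band, not from the paper

def readout_is_quiet(f_readout_hz, n_harmonics=2):
    """True if the EPI readout fundamental and its first harmonics stay at
    least GUARD_HZ away from every gradient-coil mechanical resonance."""
    for k in range(1, n_harmonics + 1):
        f = k * f_readout_hz
        if any(abs(f - r) < GUARD_HZ for r in RESONANCES_HZ):
            return False
    return True
```

For example, a 900 Hz readout clears both resonances, while 730 Hz sits on the 720 Hz peak and 610 Hz places its second harmonic directly on 1220 Hz.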

  11. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transport of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from a change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the two sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have demonstrated that the proposed system is useful for internet telepresence.

  12. Mission planning optimization of video satellite for ground multi-object staring imaging

    NASA Astrophysics Data System (ADS)

    Cui, Kaikai; Xiang, Junhua; Zhang, Yulin

    2018-03-01

    This study investigates the emergency scheduling problem of ground multi-object staring imaging for a single video satellite. In the proposed mission scenario, the ground objects require a specified duration of staring imaging by the video satellite. The planning horizon is not long, i.e., it is usually shorter than one orbit period. A binary decision variable and the imaging order are used as the design variables, and the total observation revenue combined with the influence of the total attitude maneuvering time is regarded as the optimization objective. Based on the constraints of the observation time windows, satellite attitude adjustment time, and satellite maneuverability, a constraint satisfaction mission planning model is established for ground object staring imaging by a single video satellite. Further, a modified ant colony optimization algorithm with tabu lists (Tabu-ACO) is designed to solve this problem. The proposed algorithm can fully exploit the intelligence and local search ability of ACO. Based on full consideration of the mission characteristics, the design of the tabu lists can reduce the search range of ACO and improve the algorithm efficiency significantly. The simulation results show that the proposed algorithm outperforms the conventional algorithm in terms of optimization performance, and it can obtain satisfactory scheduling results for the mission planning problem.
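    A toy version of the Tabu-ACO idea: each ant builds an observation order under time-window and slew constraints, keeping a tabu list of targets already scheduled or no longer reachable in their windows, which prunes the search range exactly as the abstract describes. Everything below (targets, windows, slew time, parameters, and the simplified count-of-observations objective) is invented for illustration:

```python
import random

# Toy mission: each ground object needs `stare` seconds of staring imaging
# inside its visibility window [open, close]; slewing between objects costs
# a fixed time. All numbers are invented.
TARGETS = {
    "A": dict(open=0.0,   close=200.0, stare=60.0),
    "B": dict(open=50.0,  close=300.0, stare=60.0),
    "C": dict(open=150.0, close=400.0, stare=60.0),
    "D": dict(open=0.0,   close=120.0, stare=60.0),
}
SLEW = 30.0

def construct(pher, rng):
    """One ant builds an observation order. Its tabu list holds objects
    already scheduled or whose windows can no longer be met."""
    t, order, tabu = 0.0, [], set()
    while len(tabu) < len(TARGETS):
        cand = []
        for name, tg in TARGETS.items():
            if name in tabu:
                continue
            start = max(t + (SLEW if order else 0.0), tg["open"])
            if start + tg["stare"] <= tg["close"]:
                cand.append((name, start))
            else:
                tabu.add(name)  # window already missed: prune from the search
        if not cand:
            break
        name, start = rng.choices(cand, weights=[pher[n] for n, _ in cand])[0]
        order.append(name)
        tabu.add(name)
        t = start + TARGETS[name]["stare"]
    return order

def tabu_aco(n_ants=30, n_iter=40, rho=0.1, seed=1):
    """Maximize the number of observed objects (a stand-in for the paper's
    revenue-minus-maneuver-time objective)."""
    rng = random.Random(seed)
    pher = {n: 1.0 for n in TARGETS}
    best = []
    for _ in range(n_iter):
        for _ in range(n_ants):
            order = construct(pher, rng)
            if len(order) > len(best):
                best = order
        for n in pher:          # evaporation
            pher[n] *= 1.0 - rho
        for n in best:          # reinforce the best-so-far solution
            pher[n] += 1.0
    return best
```

With these invented windows, the only order observing all four targets is D, A, B, C, and the ants find it reliably.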

  13. Effects of orientation on acoustic scattering from Antarctic krill at 120 kHz

    NASA Astrophysics Data System (ADS)

    McGehee, D. E.; O'Driscoll, R. L.; Traykovski, L. V. Martin

    Backscattering measurements of 14 live individual Antarctic krill ( Euphausia superba) were made at a frequency of 120 kHz in a chilled insulated tank at the Long Marine Laboratory in Santa Cruz, CA. Individual animals were suspended in front of the transducers, were only loosely constrained, had substantial freedom to move, and showed more or less random orientation. One thousand echoes were collected per animal. Orientation data were recorded on video. The acoustic data were analyzed and target strengths determined from each echo. A method was developed for estimating the three-dimensional orientation of the krill based on the video images and was applied to five of them, giving their target strengths as functions of orientation. Scattering models based on a simplified distorted-wave Born approximation (DWBA) method were developed for five animals and compared with the measurements. Both measured and modeled scattering patterns showed that 120 kHz acoustic scattering levels are highly dependent on animal orientation. Use of these scattering patterns with orientation data from shipboard studies of E. superba gave mean scattering levels approximately 12 dB lower than peak levels. These results underscore the need for better in situ behavioral data to properly interpret acoustic survey results. A generic E. superba DWBA scattering model is proposed that is scalable by animal length. With good orientation information, this model could significantly improve the precision and accuracy of krill acoustic surveys.
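    The gap between mean and peak scattering levels arises because averaging over orientations must be done on linear backscattering cross-sections, not on dB values. A toy orientation pattern (all numbers invented, and the resulting gap is smaller than the ~12 dB reported for E. superba) shows the effect:

```python
import math

def mean_ts_db(ts_values_db):
    """Average target strength over orientations: average the linear
    backscattering cross-sections, then convert back to decibels."""
    sigma = [10.0 ** (ts / 10.0) for ts in ts_values_db]
    return 10.0 * math.log10(sum(sigma) / len(sigma))

# invented pattern: one strong broadside peak, low levels at other tilts
pattern = [-70.0] * 9 + [-58.0]
avg = mean_ts_db(pattern)    # ~ -66 dB
drop = max(pattern) - avg    # mean sits ~8 dB below the peak
```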

  14. Assessing the Content of YouTube Videos in Educating Patients Regarding Common Imaging Examinations.

    PubMed

    Rosenkrantz, Andrew B; Won, Eugene; Doshi, Ankur M

    2016-12-01

    To assess the content of currently available YouTube videos seeking to educate patients regarding commonly performed imaging examinations. After initial testing of possible search terms, the first two pages of YouTube search results for "CT scan," "MRI," "ultrasound patient," "PET scan," and "mammogram" were reviewed to identify educational patient videos created by health organizations. Sixty-three included videos were viewed and assessed for a range of features. Average views per video were highest for MRI (293,362) and mammography (151,664). Twenty-seven percent of videos used a nontraditional format (eg, animation, song, humor). All videos (100.0%) depicted a patient undergoing the examination, 84.1% a technologist, and 20.6% a radiologist; 69.8% mentioned examination lengths, 65.1% potential pain/discomfort, 41.3% potential radiation, 36.5% a radiology report/results, 27.0% the radiologist's role in interpretation, and 13.3% laboratory work. For CT, 68.8% mentioned intravenous contrast and 37.5% mentioned contrast safety. For MRI, 93.8% mentioned claustrophobia, 87.5% noise, 75.0% need to sit still, 68.8% metal safety, 50.0% intravenous contrast, and 0.0% contrast safety. For ultrasound, 85.7% mentioned use of gel. For PET, 92.3% mentioned radiotracer injection, 61.5% fasting, and 46.2% diabetic precautions. For mammography, unrobing, avoiding deodorant, and possible additional images were all mentioned by 63.6%; dense breasts were mentioned by 0.0%. Educational patient videos on YouTube regarding common imaging examinations received high public interest and may provide a valuable patient resource. Videos most consistently provided information detailing the examination experience and less consistently provided safety information or described the presence and role of the radiologist. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  15. High Resolution X-Ray Phase Contrast Imaging with Acoustic Tissue-Selective Contrast Enhancement

    DTIC Science & Technology

    2005-06-01

    Ultrasonics Symp 1319 (1999). 17. Sarvazyan, A. P. Shear Wave Elasticity Imaging: A New Ultrasonic Technology of Medical Diagnostics. Ultrasound in...samples using acoustically modulated X-ray phase contrast imaging. 15. SUBJECT TERMS x-ray, ultrasound, phase contrast, imaging, elastography 16...x-rays, phase contrast imaging is based on phase changes as x-rays traverse a body resulting in wave interference that result in intensity changes in

  16. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    NASA Astrophysics Data System (ADS)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. 
We observed correlation between the RCM videos/video

  17. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to the display of images and with respect to image analysis techniques. Regarding display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
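    The illumination invariance of a logarithmic response follows directly from the math: scaling scene irradiance by a factor k only adds b·ln(k) to every pixel, leaving local contrasts untouched. A sketch with made-up sensor constants (a, b are not the paper's values):

```python
import math

def log_response(irradiance, a=40.0, b=12.0):
    """Toy logarithmic pixel response, out = a + b*ln(E).
    (a and b are invented sensor constants.)"""
    return a + b * math.log(irradiance)

scene = [10.0, 80.0, 25.0]         # relative pixel irradiances
bright = [4.0 * e for e in scene]  # same scene under 4x the illumination

v1 = [log_response(e) for e in scene]
v2 = [log_response(e) for e in bright]
# a global illumination change adds the same offset b*ln(4) to every pixel,
# so pixel-to-pixel differences (local contrast) are unchanged
diffs1 = [v1[i + 1] - v1[i] for i in range(2)]
diffs2 = [v2[i + 1] - v2[i] for i in range(2)]
```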

  18. Multifunctional single beam acoustic tweezer for non-invasive cell/organism manipulation and tissue imaging

    NASA Astrophysics Data System (ADS)

    Lam, Kwok Ho; Li, Ying; Li, Yang; Lim, Hae Gyun; Zhou, Qifa; Shung, Koping Kirk

    2016-11-01

    Non-contact precise manipulation of single microparticles, cells, and organisms has attracted considerable interest in biophysics and biomedical engineering. Like optical tweezers, acoustic tweezers have been proposed as being capable of manipulating microparticles and even cells. Although there have been concerted efforts to develop tools for non-contact manipulation, no alternative to the complex, unifunctional tweezer has yet been found. Here we report a simple, low-cost, multifunctional single beam acoustic tweezer (SBAT) that is capable of manipulating an individual micrometer-scale non-spherical cell in the Rayleigh regime and even a single millimeter-scale organism in the Mie regime, and of imaging tissue as well. We experimentally demonstrate that the SBAT, with an ultralow f-number (f# = focal length/aperture size), can manipulate an individual red blood cell and a single 1.6 mm-diameter fertilized zebrafish egg, respectively. In addition, in vitro rat aorta images were collected successfully at dynamic foci, in which the lumen and the outer surface of the aorta could be clearly seen. With the ultralow f-number, the SBAT offers the combination of a large acoustic radiation force and a narrow beam width, leading to strong trapping and high-resolution imaging capabilities. These attributes enable the feasibility of using a single acoustic device to perform non-invasive multiple functions simultaneously for biomedical and biophysical applications.
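    The link between an ultralow f-number and a narrow beam can be made concrete with the common estimate that the -6 dB beamwidth of a focused transducer scales as λ·f#; the geometry and frequency below are invented for illustration, not taken from the paper:

```python
def f_number(focal_length_mm, aperture_mm):
    """f# = focal length / aperture size, as defined in the abstract."""
    return focal_length_mm / aperture_mm

def beam_width_um(freq_mhz, f_num, c=1540.0):
    """Common -6 dB beamwidth estimate for a focused transducer, ~lambda*f#."""
    wavelength_um = c / (freq_mhz * 1e6) * 1e6
    return wavelength_um * f_num

fn = f_number(3.0, 4.0)      # aperture wider than the focal length: f# < 1
w = beam_width_um(30.0, fn)  # a correspondingly narrow beam at 30 MHz
```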

  19. Opto-acoustic image fusion technology for diagnostic breast imaging in a feasibility study

    NASA Astrophysics Data System (ADS)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Ulissey, Michael; Stavros, A. T.; Oraevsky, Alexander; Lavin, Philip; Kist, Kenneth; Dornbluth, N. C.; Otto, Pamela

    2015-03-01

    Functional opto-acoustic (OA) imaging was fused with gray-scale ultrasound acquired using a specialized duplex handheld probe. Feasibility Study findings indicated the potential to characterize breast masses for cancer more accurately than conventional diagnostic ultrasound (CDU). The Feasibility Study included OA imagery of 74 breast masses collected using the investigational Imagio® breast imaging system. Superior specificity and equal sensitivity relative to CDU were demonstrated, suggesting that OA fusion imaging may obviate the need for negative biopsies, without missing cancers, in a certain percentage of breast masses. Preliminary results from a 100-subject Pilot Study are also discussed. A larger Pivotal Study (n=2,097 subjects) is underway to confirm the Feasibility Study and Pilot Study findings.

  20. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney, and thyroid specimens, each several centimeters in size, with millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field, and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.

  1. A video event trigger for high frame rate, high resolution video technology

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1991-12-01

    When video replaces film, the digitized video data accumulate very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage, coupled with newly available fuzzy logic devices, permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  2. A video event trigger for high frame rate, high resolution video technology

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1991-01-01

    When video replaces film, the digitized video data accumulate very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage, coupled with newly available fuzzy logic devices, permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  3. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope.

    PubMed

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T C

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  4. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  5. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    PubMed

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2018-03-01

    To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. To evaluate whether additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered to be painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text information (CMR-video/n = 49) or standard text information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the cardiovascular magnetic resonance imaging-standard group. Anxiety was evaluated before, immediately after the procedure, and 1 week later. Five questionnaires were used: Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers, blinded to the information given. Data were collected between April 2015 and April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the cardiovascular magnetic resonance imaging-standard group in the factor Relaxation (p = .039) but not in the factor Anxiety. Anxiety levels were lower during scintigraphic examinations compared to the CMR-standard group (p < .001). No difference was found regarding motion artefacts between CMR-video and CMR-standard. Patient ability to relax during cardiovascular magnetic resonance imaging

  6. Listeners' expectation of room acoustical parameters based on visual cues

    NASA Astrophysics Data System (ADS)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have examined audio-visual interaction in room perception in recent years, but they rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters, early-to-late reverberant energy ratio and reverberation time, of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues changed how the acoustic environment was perceived. These included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer

  7. A real-time remote video streaming platform for ultrasound imaging.

    PubMed

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience; however, few skilled sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones, and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system are evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  8. Wide field video-rate two-photon imaging by using spinning disk beam scanner

    NASA Astrophysics Data System (ADS)

    Maeda, Yasuhiro; Kurokawa, Kazuo; Ito, Yoko; Wada, Satoshi; Nakano, Akihiko

    2018-02-01

    Microscope technologies with a wider field of view, deeper penetration depth, higher spatial resolution, and higher imaging speed are required to investigate the intercellular dynamics or interactions of molecules and organelles in cells or tissue in more detail. Two-photon microscopy with a near-infrared (NIR) femtosecond laser is one technique for improving penetration depth and spatial resolution. However, video-rate or high-speed imaging with a wide field of view is difficult with a conventional two-photon microscope, because it relies on point-by-point scanning. In this study, we developed a two-photon microscope with a spinning-disk beam scanner and a femtosecond NIR fiber laser with around 10 W average power to meet the above requirements. The laser consists of an oscillator based on a mode-locked Yb fiber laser, a two-stage pre-amplifier, a main amplifier based on a Yb-doped photonic crystal fiber (PCF), and a pulse compressor with a pair of gratings. The laser generates a beam with up to 10 W average power, 300 fs pulse width, and 72 MHz repetition rate. The beam is incident on a spinning-disk beam scanner (Yokogawa Electric) optimized for two-photon imaging. Using this system, we obtained 3D images with over 1 mm penetration depth and video-rate images with a 350 x 350 um field of view from the root of Arabidopsis thaliana.

  9. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept behind super-resolution image reconstruction is to recover a high-resolution image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. The algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers; by computing pixel-wise medians over the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on computationally expensive iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
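
    The coarse-to-fine strategy described above can be sketched in a few lines: assuming the frames have already been registered to the reference, each frame is upsampled and a pixel-wise median over the stack yields the refined estimate. In this NumPy sketch, nearest-neighbor upsampling stands in for the paper's bicubic interpolation, so it is an illustrative simplification, not the authors' implementation.

    ```python
    import numpy as np

    def median_superres(frames, scale=2):
        """Coarse-to-fine SR sketch: upsample each (already-registered)
        frame, then take the pixel-wise median across the stack. The
        median rejects outlier pixels, which is what makes the second
        stage robust. Subpixel registration is assumed to be done
        upstream, and nearest-neighbor replaces bicubic for brevity."""
        up = [np.kron(f, np.ones((scale, scale))) for f in frames]
        return np.median(np.stack(up), axis=0)

    rng = np.random.default_rng(0)
    truth = rng.random((8, 8))
    # Ten noisy low-resolution observations of the same scene.
    frames = [truth + rng.normal(0, 0.05, truth.shape) for _ in range(10)]
    sr = median_superres(frames)
    ```

    Because the median over the stack is computed once per output pixel, the whole reconstruction is noniterative, which is the efficiency argument made in the abstract.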

  10. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  11. Experimental results for a prototype 3-D acoustic imaging system using an ultra-sparse planar array

    NASA Astrophysics Data System (ADS)

    Impagliazzo, John M.; Chiang, Alice M.; Broadstone, Steven R.

    2002-11-01

    A handheld high-resolution sonar has been under development to provide Navy divers with a 3-D acoustic imaging system for mine reconnaissance. An ultra-sparse planar array, consisting of 121 elements (each 1 mm x 1 mm, operating at 2 MHz), was fabricated to provide 3-D acoustic images. The array measured 10 cm x 10 cm. A full array at this frequency with elements at half-wavelength spacing would consist of 16384 elements. The first phase of testing of the planar array was completed in September 2001 with the characterization of the array in the NUWC Acoustic Test Facility (ATF). The center frequency was 2 MHz with a 667 kHz bandwidth. A system-level technology demonstration will be conducted in July 2002 with a real-time beamformer and near real-time 3-D imaging software. The demonstration phase consists of imaging simple targets at a range of 3 m in the ATF. Experimental results obtained will be reported. [Work supported by the Defense Advanced Research Projects Agency, Advanced Technology Office, Dr. Theo Kooij, Program Manager.]

  12. Video image processing greatly enhances contrast, quality, and speed in polarization-based microscopy

    PubMed Central

    1981-01-01

    Video cameras with contrast and black level controls can yield polarized light and differential interference contrast microscope images with unprecedented image quality, resolution, and recording speed. The theoretical basis and practical aspects of video polarization and differential interference contrast microscopy are discussed and several applications in cell biology are illustrated. These include: birefringence of cortical structures and beating cilia in Stentor, birefringence of rotating flagella on a single bacterium, growth and morphogenesis of echinoderm skeletal spicules in culture, ciliary and electrical activity in a balancing organ of a nudibranch snail, and acrosomal reaction in activated sperm. PMID:6788777

  13. Interferometric imaging of acoustical phenomena using high-speed polarization camera and 4-step parallel phase-shifting technique

    NASA Astrophysics Data System (ADS)

    Ishikawa, K.; Yatabe, K.; Ikeda, Y.; Oikawa, Y.; Onuma, T.; Niwa, H.; Yoshii, M.

    2017-02-01

    Imaging of sound aids the understanding of acoustical phenomena such as propagation, reflection, and diffraction, which is strongly required for various acoustical applications. Imaging of sound is commonly done using a microphone array, whereas optical methods have recently attracted interest due to their contactless nature. Optical measurement of sound exploits the phase modulation of light caused by sound: since light propagating through a sound field changes its phase in proportion to the sound pressure, optical phase measurement techniques can be used for sound measurement. Several methods, including laser Doppler vibrometry and the Schlieren method, have been proposed for this purpose. However, their sensitivities decrease as the frequency of the sound decreases. In contrast, since the sensitivity of the phase-shifting technique does not depend on the frequency of the sound, it is suitable for imaging sounds in the low-frequency range. The principle of imaging sound using parallel phase-shifting interferometry was reported by the authors (K. Ishikawa et al., Optics Express, 2016). The measurement system consists of a high-speed polarization camera made by Photron Ltd. and a polarization interferometer. This paper reviews the principle briefly and demonstrates high-speed imaging of acoustical phenomena. The results suggest that the proposed system can be applied to various industrial problems in acoustical engineering.
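
    The 4-step phase-shifting recovery itself is compact: with four interferograms whose reference phase is shifted by pi/2 each, the optical phase follows from an arctangent of intensity differences. The sketch below shows the textbook formula on synthetic fringes; the parallel variant in the paper records all four shifted images simultaneously with the polarization camera, which this sequential sketch does not model.

    ```python
    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Recover the optical phase from four interferograms shifted by
        pi/2 each, I_k = A + B*cos(phi + k*pi/2). Then
        i3 - i1 = 2B*sin(phi) and i0 - i2 = 2B*cos(phi), so the
        quadrant-aware arctangent returns phi directly, independent of
        the background A and the fringe contrast B."""
        return np.arctan2(i3 - i1, i0 - i2)

    # Synthetic fringe field: phase varies linearly across the image line.
    phi = np.linspace(-np.pi / 2, np.pi / 2, 100)
    A, B = 1.0, 0.5
    frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
    recovered = four_step_phase(*frames)
    ```

    Because A and B cancel out of the ratio, the sensitivity of this recovery does not depend on the sound frequency, which is the advantage the abstract highlights for low-frequency imaging.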

  14. Video-rate scanning two-photon excitation fluorescence microscopy and ratio imaging with cameleons.

    PubMed Central

    Fan, G Y; Fujisaki, H; Miyawaki, A; Tsay, R K; Tsien, R Y; Ellisman, M H

    1999-01-01

    A video-rate (30 frames/s) scanning two-photon excitation microscope has been successfully tested. The microscope, based on a Nikon RCM 8000, incorporates a femtosecond pulsed laser with wavelength tunable from 690 to 1050 nm, prechirper optics for laser pulse-width compression, a resonant galvanometer for video-rate point scanning, and a pair of nonconfocal detectors for fast emission ratioing. An increase in fluorescent emission of 1.75-fold is consistently obtained with the use of the prechirper optics. The nonconfocal detectors provide another 2.25-fold increase in detection efficiency. Ratio imaging and optical sectioning can therefore be performed more efficiently without confocal optics. Faster frame rates, at 60, 120, and 240 frames/s, can be achieved with proportionally reduced scan lines per frame. Useful two-photon images can be acquired at video rate with a laser power as low as 2.7 mW at the specimen with the genetically modified green fluorescent proteins. Preliminary results obtained using this system confirm that the yellow "cameleons" exhibit optical properties similar to those under one-photon excitation conditions. Dynamic two-photon images of cardiac myocytes and ratio images of yellow cameleon-2.1, -3.1, and -3.1nu are also presented. PMID:10233058

  15. Super-resolution image reconstruction from UAS surveillance video through affine invariant interest point-based motion estimation

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Wang, Yi; Camargo, Aldo; Martel, Florent

    2008-01-01

    In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critically important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points include those corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and present more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points are found correctly. In addition, for the same super-resolution problem, we can use many fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.
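
    For illustration, the second moment matrix mentioned above also underlies the plain Harris corner response, sketched below in NumPy. The affine-adapted interest points of the paper involve further iteration over scale and shape, so this box-filtered Harris detector is only a generic stand-in; the 3x3 window and the constant k are conventional choices, not values from the paper.

    ```python
    import numpy as np

    def second_moment_response(img, k=0.04):
        """Corner response from the second moment (structure) matrix
        M = [[Ix^2, Ix*Iy], [Ix*Iy, Iy^2]] summed over a window: the
        classic Harris measure det(M) - k*trace(M)^2. Large responses
        mark points where the image gradient varies in two directions."""
        iy, ix = np.gradient(img.astype(float))

        def box(a):
            # 3x3 box sum via zero padding and shifted slices.
            p = np.pad(a, 1)
            return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                       for i in range(3) for j in range(3))

        sxx, syy, sxy = box(ix * ix), box(iy * iy), box(ix * iy)
        det = sxx * syy - sxy ** 2
        trace = sxx + syy
        return det - k * trace ** 2

    # A white square on black: its corners give the strongest response.
    img = np.zeros((16, 16))
    img[4:12, 4:12] = 1.0
    R = second_moment_response(img)
    ```

    Affine adaptation then reshapes the window according to M until the local structure looks isotropic, which is what makes the resulting points invariant to affine transformations.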

  16. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the seventh quarter of the project, the research team analyzed some of the acoustic velocity data and rock deformation data. The goal is to create a series of ''deformation-velocity maps'' that outline the types of rock deformational mechanisms that can occur at high pressures and then associate those with specific compressional or shear wave velocity signatures. During this quarter, we began to analyze both the acoustical and deformational properties of the various rock types. Some of the preliminary velocity data from the Danian chalk are presented in this report. This rock type was selected for the initial efforts as it will be used in the tomographic imaging study outlined in Task 10. It is one of the more important rock types in the study, as the Danian chalk is thought to represent an excellent analog to the Ekofisk chalk that has caused so many problems in the North Sea. Some of the preliminary acoustic velocity data obtained during this phase of the project indicate that during pore collapse and compaction of this chalk, the acoustic velocities can change by as much as 200 m/s. Theoretically, this significant velocity change should be detectable in repeated successive 3-D seismic images. In addition, research continues with an analysis of the unconsolidated sand samples at high confining pressures obtained in Task 9. The analysis of the results indicates that sands with a 10% volume of fines can undergo liquefaction at lower stress conditions than sand samples without added fines. This liquefaction and/or sand flow is similar to the ''shallow water'' flows observed during drilling in the offshore Gulf of Mexico.

  17. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real-time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion-compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper but also more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.

  18. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves

    PubMed Central

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J. R.; Krenner, Hubert J.; Wixforth, Achim; Salditt, Tim

    2014-01-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length). PMID:25294979

  19. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves.

    PubMed

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J R; Krenner, Hubert J; Wixforth, Achim; Salditt, Tim

    2014-10-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length).

  20. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper, we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
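    A minimal sketch of the directed change masks described above, using gradient magnitude as a stand-in for the paper's corner features (the threshold and toy frames are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def feature_strength(img):
    """Local feature strength: gradient magnitude (a simple stand-in
    for the corner detectors used in the paper)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def directed_change_masks(prev, curr, thresh=1.0):
    """Return the two 'directed' masks described above.

    new_mask:      strong feature in current, weak or absent in previous
    vanished_mask: strong feature in previous, weak or absent in current
    """
    fp, fc = feature_strength(prev), feature_strength(curr)
    new_mask = (fc > thresh) & (fp <= thresh)
    vanished_mask = (fp > thresh) & (fc <= thresh)
    return new_mask, vanished_mask

# Toy example: a bright "vehicle" appears in the current frame.
prev = np.zeros((32, 32))
curr = np.zeros((32, 32))
curr[10:16, 10:16] = 10.0        # newly parked object
new, vanished = directed_change_masks(prev, curr)
print(new.any(), vanished.any())  # edges of the new object are flagged
```

    Merging `new_mask` and `vanished_mask` with the "undirected" mask from image differencing, as the paper does, then yields the combined color mask for visual inspection.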

  1. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.

  2. Measures in 2015 Using a DSLR and Video Lucky Imaging

    NASA Astrophysics Data System (ADS)

    Cotterell, David

    2017-10-01

    Measures of 31 pairs taken in 2015 are reported. A 202 mm, f/15 Maksutov-Cassegrain and a DSLR in video crop mode were used for the acquisition of “lucky images”. Calibration was via essentially stationary wider pairs, as analyzed and discussed.

  3. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. By mixing the different streams of video input from all the devices in use in the operating room and applying filters and effects, a final, professional end-product is obtained. Recording on a DVD provides an inexpensive, portable and easy-to-use medium on which footage can be stored, re-edited or copied to tape at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  4. Passive Imaging in Nondiffuse Acoustic Wavefields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulargia, Francesco; Castellaro, Silvia

    2008-05-30

    A main property of diffuse acoustic wavefields is that, taken any two points, each of them can be seen as the source of waves and the other as the recording station. This property is shown to follow simply from array azimuthal selectivity and Huygens principle in a locally isotropic wavefield. Without time reversal, this property holds approximately also in anisotropic azimuthally uniform wavefields, implying much looser constraints for undistorted passive imaging than those required by a diffuse field. A notable example is the seismic noise field, which is generally nondiffuse, but is found to be compatible with a finite aperture anisotropic uniform wavefield. The theoretical predictions were confirmed by an experiment on seismic noise in the mainland of Venice, Italy.

  5. Acoustic emission linear pulse holography

    DOEpatents

    Collins, H.D.; Busse, L.J.; Lemon, D.K.

    1983-10-25

    This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.

  6. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.

  7. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  8. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of the reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. Also, we discuss the potential use of the single-pixel imaging system for quantum applications.

  9. Video-mosaicking of in vivo reflectance confocal microscopy images for noninvasive examination of skin lesion (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kose, Kivanc; Gou, Mengran; Yelamos, Oriol; Cordova, Miguel A.; Rossi, Anthony; Nehal, Kishwer S.; Camps, Octavia I.; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind

    2017-02-01

    In this report we describe a computer vision based pipeline to convert in-vivo reflectance confocal microscopy (RCM) videos collected with a handheld system into large field of view (FOV) mosaics. For many applications such as imaging of hard to access lesions, intraoperative assessment of Mohs margins, or delineation of lesion margins beyond clinical borders, raster scan based mosaicing techniques have clinically significant limitations. In such cases, clinicians often capture RCM videos by freely moving a handheld microscope over the area of interest, but the resulting videos lose large-scale spatial relationships. Videomosaicking is a standard computational imaging technique to register and stitch together consecutive frames of videos into large FOV high resolution mosaics. However, mosaicing RCM videos collected in-vivo has unique challenges: (i) tissue may deform or warp due to physical contact with the microscope objective lens, (ii) discontinuities or "jumps" between consecutive images and motion blur artifacts may occur due to manual operation of the microscope, and (iii) optical sectioning and resolution may vary between consecutive images due to scattering and aberrations induced by changes in imaging depth and tissue morphology. We addressed these challenges by adapting or developing new algorithmic methods for videomosaicking, specifically by modeling non-rigid deformations, followed by automatically detecting discontinuities (cut locations) and, finally, applying a data-driven image stitching approach that fully preserves resolution and tissue morphologic detail without imposing arbitrary pre-defined boundaries. We will present example mosaics obtained by clinical imaging of both melanoma and non-melanoma skin cancers. The ability to combine freehand mosaicing for handheld microscopes with preserved cellular resolution will have high impact application in diverse clinical settings, including low-resource healthcare systems.
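    The core registration step of videomosaicking can be sketched with phase correlation, assuming a pure integer translation between consecutive frames (the paper's pipeline additionally handles non-rigid deformation, cuts, and blur):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) with b ≈ np.roll(a, (dy, dx), (0, 1))."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12          # keep only the phase difference
    corr = np.real(np.fft.ifft2(R))  # delta peak at the translation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:         # wrap to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# Two overlapping "frames": the second is the first moved by (5, -3).
rng = np.random.default_rng(1)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (5, -3), axis=(0, 1))
print(phase_correlation_shift(frame0, frame1))  # (5, -3)
```

    In a full pipeline, each consecutive pair is registered this way (with sub-pixel refinement and deformation models) and the frames are accumulated into one large-FOV canvas.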

  10. Ocean acoustic reverberation tomography.

    PubMed

    Dunn, Robert A

    2015-12-01

    Seismic wide-angle imaging using ship-towed acoustic sources and networks of ocean bottom seismographs is a common technique for exploring earth structure beneath the oceans. In these studies, the recorded data are dominated by acoustic waves propagating as reverberations in the water column. For surveys with a small receiver spacing (e.g., <10 km), the acoustic wave field densely samples properties of the water column over the width of the receiver array. A method, referred to as ocean acoustic reverberation tomography, is developed that uses the travel times of direct and reflected waves to image ocean acoustic structure. Reverberation tomography offers an alternative approach for determining the structure of the oceans and advancing the understanding of ocean heat content and mixing processes. The technique has the potential for revealing small-scale ocean thermal structure over the entire vertical height of the water column and along long survey profiles or across three-dimensional volumes of the ocean. For realistic experimental geometries and data noise levels, the method can produce images of ocean sound speed on a smaller scale than traditional acoustic tomography.
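    As a deliberately simplified illustration of travel-time inversion, consider direct water-wave arrivals t = x/c at several offsets and a least-squares fit for a uniform sound speed (the actual method inverts for spatially varying structure); all numbers are made up:

```python
import numpy as np

# Toy 1-D version of the travel-time inversion: direct arrivals
# t = x / c observed at several receiver offsets, solved for an
# assumed uniform sound speed c by least squares.
offsets = np.array([1000.0, 2000.0, 4000.0, 8000.0])    # m
c_true = 1502.0                                          # m/s
rng = np.random.default_rng(0)
times = offsets / c_true + rng.normal(0, 1e-3, offsets.size)  # s, noisy

# t = s * x with slowness s = 1/c  ->  least squares: s = (x·t)/(x·x)
s_hat = offsets @ times / (offsets @ offsets)
c_hat = 1.0 / s_hat
print(round(c_hat, 1))   # close to 1502 m/s
```

    The real tomography replaces the single slowness with a gridded slowness field and the line fit with a regularized linear inversion over many ray paths.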

  11. TU-F-CAMPUS-I-04: Head-Only Asymmetric Gradient System Evaluation: ACR Image Quality and Acoustic Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weavers, P; Shu, Y; Tao, S

    Purpose: A high-performance head-only magnetic resonance imaging gradient system with an acquisition volume of 26 cm, employing an asymmetric design for the transverse coils, has been developed. It is able to reach a magnitude of 85 mT/m at a slew rate of 700 T/m/s, but was operated at 80 mT/m and 500 T/m/s for this test. A challenge resulting from this asymmetric design is that the gradient nonlinearity exhibits both odd- and even-order terms, and as the full imaging field of view is often used, the nonlinearity is pronounced. The purpose of this work is to show that the system can produce clinically useful images after an on-site gradient nonlinearity calibration and correction, and to show that acoustic noise levels fall within non-significant risk (NSR) limits for standard clinical pulse sequences. Methods: The head-only gradient system was inserted into a standard 3T wide-bore scanner without acoustic damping. The ACR phantom was scanned in an 8-channel receive-only head coil and the standard American College of Radiology (ACR) MRI quality control (QC) test was performed. Acoustic noise levels were measured for several standard pulse sequences. Results: Images acquired with the head-only gradient system passed all ACR MR image quality tests; both even- and odd-order gradient distortion correction terms were required for the asymmetric gradients to pass. Acoustic noise measurements were within the FDA NSR guidelines of 99 dBA A-weighted (assuming 20 dBA of hearing protection) and 140 dB peak for all but one sequence. Note that the gradient system was installed without any shroud or acoustic batting; we expect final system integration to greatly reduce the noise experienced by the patient. Conclusion: A high-performance head-only asymmetric gradient system operating at 80 mT/m and 500 T/m/s conforms to FDA acoustic noise limits in all but one case, and passes all the ACR MR image quality control tests. This work was supported in part by the NIH grant 5R01EB010065.

  12. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    NASA Astrophysics Data System (ADS)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was varied among 100, 150 and 200 subjects. The proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rates to decrease with increasing numbers of avatars. The same is true for Secondary Task Reaction Time, while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend toward a u-shaped correlation between the number of markings and Secondary Task Reaction Time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.

  13. Eustachian Tube Mucosal Inflammation Scale Validation Based on Digital Video Images.

    PubMed

    Kivekäs, Ilkka; Pöyhönen, Leena; Aarnisalo, Antti; Rautiainen, Markus; Poe, Dennis

    2015-12-01

    The most common cause of Eustachian tube dilatory dysfunction is mucosal inflammation. The aim of this study was to validate a scale for Eustachian tube mucosal inflammation based on digital video clips obtained during diagnostic rigid endoscopy. A previously described four-step scale for grading the degree of inflammation of the mucosa of the Eustachian tube lumen was used for this validation study. A tutorial for use of the scale, including static images and 10-second video clips, was presented to 26 clinicians with various levels of experience. Each clinician then reviewed 35 short digital video samples of Eustachian tubes from patients and rated the degree of inflammation. A subset of the clinicians performed a second rating of the same video clips at a subsequent time. Statistical analysis of the ratings provided inter- and intrarater reliability scores. The 26 clinicians rated a total of 35 videos; thirteen clinicians rated the videos twice. The overall correlation coefficient for the rating of inflammation severity was relatively good (0.74; 95% confidence interval, 0.72-0.76). The intraclass correlation coefficient for intrarater reliability was high (0.86). For those who rated the videos twice, the intraclass correlation coefficient improved after the first rating (from 0.73 to 0.76), but the improvement was not statistically significant. The inflammation scale used for Eustachian tube mucosal inflammation is reliable and can be used with a high level of consistency by clinicians with various levels of experience.

  14. Multi-crack imaging using nonclassical nonlinear acoustic method

    NASA Astrophysics Data System (ADS)

    Zhang, Lue; Zhang, Ying; Liu, Xiao-Zhou; Gong, Xiu-Fen

    2014-10-01

    Solid materials with cracks exhibit nonclassical nonlinear acoustic behavior. The micro-defects in solid materials can be detected by the nonlinear elastic wave spectroscopy (NEWS) method with a time-reversal (TR) mirror. When defects lie in a viscoelastic solid material at different distances from one another, the nonlinear and hysteretic stress-strain relation in the crack zone is established with the Preisach-Mayergoyz (PM) model. Pulse inversion (PI) and TR methods are used in numerical simulation, and defect locations can be determined from images obtained by the maximum value. Since false-positive defects might appear and degrade the imaging when the defects are located quite closely, maximum-value imaging with a time window is introduced to analyze how defects affect each other and how the false ones occur. Furthermore, the NEWS-TR-NEWS method is put forward to improve the NEWS-TR scheme, with another forward propagation (NEWS) added to the existing phases (NEWS and TR). In the added phase, scanner locations are determined by the locations of all defects imaged in previous phases, so that whether an imaged defect is real can be deduced. The NEWS-TR-NEWS method is proved to be effective in distinguishing real defects from false-positive ones. Moreover, it is also helpful for detecting cracks that are weaker than others during the imaging procedure.

  15. Video Analytics for Indexing, Summarization and Searching of Video Archives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold E.; Trease, Lynn L.

    This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

  16. Video conference quality assessment based on cooperative sensing of video and audio

    NASA Astrophysics Data System (ADS)

    Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu

    2015-12-01

    This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess video frame quality. Each video frame is divided into a noise image and a filtered image by bilateral filtering, which is similar to the human visual system in that it acts as a low-pass filter. The audio frames are evaluated by the PEAQ algorithm. The two results are integrated to evaluate the overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with MOS, from which we conclude that the proposed method is effective in assessing video conference quality.

  17. Comparison of sonochemiluminescence images using image analysis techniques and identification of acoustic pressure fields via simulation.

    PubMed

    Tiong, T Joyce; Chandesa, Tissa; Yap, Yeow Hong

    2017-05-01

    One common method to determine the existence of cavitational activity in power ultrasonic systems is to capture images of sonoluminescence (SL) or sonochemiluminescence (SCL) in a dark environment. Conventionally, the light emitted from SL or SCL was detected based on the number of photons. Though this method is effective, it cannot identify the sonochemical zones of an ultrasonic system. SL/SCL images, on the other hand, enable identification of 'active' sonochemical zones. However, these images often provide only qualitative data, as harvesting light-intensity data from the images is tedious and requires high-resolution images. In this work, we propose a new image analysis technique using pseudo-coloured images to quantify the SCL zones based on the intensities of the SCL images, followed by comparison of the active SCL zones with COMSOL-simulated acoustic pressure zones. Copyright © 2016 Elsevier B.V. All rights reserved.
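    A rough sketch of the proposed intensity-binning (pseudo-colouring) step, with a synthetic image standing in for a real SCL photograph and assumed threshold values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SCL image: dim noisy background plus a bright "active zone"
# (a stand-in for a real long-exposure photograph).
img = rng.poisson(2.0, size=(64, 64)).astype(float)
img[20:40, 24:40] += 30.0

# Pseudo-colouring: bin intensities into labelled zones.
edges = [5.0, 15.0, 25.0]                # assumed intensity thresholds
zones = np.digitize(img, edges)          # 0 = inactive ... 3 = most active
area = np.bincount(zones.ravel(), minlength=4)

for level, npix in enumerate(area):
    print(f"zone {level}: {npix} px")    # pixel area of each SCL zone
```

    Mapping each zone label to a colour gives the pseudo-coloured map, and the per-zone pixel counts quantify the active regions for comparison with simulated acoustic pressure fields.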

  18. Strategic Acoustic Control of a Hummingbird Courtship Dive.

    PubMed

    Clark, Christopher J; Mistick, Emily A

    2018-04-23

    Male hummingbirds court females with a high-speed dive in which they "sing" with their tail feathers. The male's choice of trajectory gives him strategic control over the acoustic frequency and pressure levels heard by the female. Unlike related species, male Costa's hummingbirds (Calypte costae) choose to place their dives to the side of females. Here we show that this minimizes an audible Doppler curve in their dive sound, thereby depriving females of an acoustic indicator that would otherwise reveal male dive speed. Wind-tunnel experiments indicate that the sounds produced by their feathers are directional; thus, males should aim their tail toward females. High-speed video of dives reveals that males twist half of their tail vertically during the dive, and acoustic-camera video shows that this effectively aims the sound sideways, toward the female. Our results demonstrate that male animals can strategically modulate female perception of dynamic aspects of athletic motor displays, such as their speed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Phase change events of volatile liquid perfluorocarbon contrast agents produce unique acoustic signatures

    PubMed Central

    Sheeran, Paul S.; Matsunaga, Terry O.; Dayton, Paul A.

    2015-01-01

    Phase-change contrast agents (PCCAs) provide a dynamic platform to approach problems in medical ultrasound (US). Upon US-mediated activation, the liquid core vaporizes and expands to produce a gas bubble ideal for US imaging and therapy. In this study, we demonstrate through high-speed video microscopy and US interrogation that PCCAs composed of highly volatile perfluorocarbons (PFCs) exhibit unique acoustic behavior that can be detected and differentiated from standard microbubble contrast agents. Experimental results show that when activated with short pulses PCCAs will over-expand and undergo unforced radial oscillation while settling to a final bubble diameter. The size-dependent oscillation phenomenon generates a unique acoustic signal that can be passively detected in both time and frequency domain using confocal piston transducers with an ‘activate high’ (8 MHz, 2 cycles), ‘listen low’ (1 MHz) scheme. Results show that the magnitude of the acoustic ‘signature’ increases as PFC boiling point decreases. By using a band-limited spectral processing technique, the droplet signals can be isolated from controls and used to build experimental relationships between concentration and vaporization pressure. The techniques shown here may be useful for physical studies as well as development of droplet-specific imaging techniques. PMID:24351961
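    The 'activate high, listen low' detection scheme can be mimicked on a synthetic received signal; the waveform model and every parameter below are illustrative assumptions, not the authors' data:

```python
import numpy as np

fs = 100e6                       # assumed sample rate, 100 MHz
t = np.arange(0, 20e-6, 1 / fs)  # 20 µs record

def band_energy(sig, f_lo, f_hi):
    """Energy in [f_lo, f_hi] from the one-sided spectrum."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[(f >= f_lo) & (f <= f_hi)].sum()

# Control: only the 8 MHz activation pulse scattered back.
control = np.sin(2 * np.pi * 8e6 * t) * np.exp(-t / 5e-6)

# Droplet: same echo plus a decaying ~1 MHz ring-down modelling the
# over-expansion / radial oscillation of the newly formed bubble.
droplet = control + 0.3 * np.sin(2 * np.pi * 1e6 * t) * np.exp(-t / 5e-6)

# 'Listen low': energy in a band around 1 MHz flags the phase change.
e_control = band_energy(control, 0.5e6, 1.5e6)
e_droplet = band_energy(droplet, 0.5e6, 1.5e6)
print(e_droplet > 5 * e_control)  # True
```

    In practice the low-frequency receiver is a separate confocal transducer, but the band-limited spectral processing is conceptually the same as this FFT-based energy comparison.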

  20. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    PubMed

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
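    The synthetic-LR generation step can be sketched as below, with block-averaging as an assumed stand-in for the fibre-bundle degradation (the paper uses a more realistic simulation):

```python
import numpy as np

def synthetic_lr(hr, factor=4, noise_sigma=0.02, rng=None):
    """Degrade an HR image into a synthetic LR counterpart: block-average
    (each block loosely mimics one fibre acting as a single-pixel
    detector), then add acquisition noise."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = hr.shape
    lr = hr[:h - h % factor, :w - w % factor]
    lr = lr.reshape(h // factor, factor, w // factor, factor).mean((1, 3))
    return np.clip(lr + rng.normal(0, noise_sigma, lr.shape), 0, 1)

# Build (LR, HR) training pairs from registration-estimated HR images
# (random arrays stand in for the estimated HR images here).
rng = np.random.default_rng(0)
hr_images = [rng.random((64, 64)) for _ in range(3)]
pairs = [(synthetic_lr(hr, rng=rng), hr) for hr in hr_images]
print([p[0].shape for p in pairs])  # [(16, 16), (16, 16), (16, 16)]
```

    A super-resolution DNN is then trained to map each synthetic LR input back to its HR target, so that at test time it can enhance genuinely low-resolution pCLE frames.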

  1. Acoustic Inversion in Optoacoustic Tomography: A Review

    PubMed Central

    Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel

    2013-01-01

    Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
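    A minimal time-domain (delay-and-sum back-projection) reconstruction for a circular detection geometry; the idealized Gaussian pulse and all geometry parameters are assumptions for the demo:

```python
import numpy as np

c = 1500.0                      # assumed speed of sound, m/s
fs = 20e6                       # sample rate
nt = 1200
t = np.arange(nt) / fs

# Ring of detectors around a 20 mm imaging region.
n_det = 64
ang = 2 * np.pi * np.arange(n_det) / n_det
det = 0.02 * np.stack([np.cos(ang), np.sin(ang)], axis=1)   # (64, 2)

# Idealized forward model: a point absorber emits a short pulse;
# each detector records it delayed by distance / c.
src = np.array([0.004, -0.003])
sinogram = np.zeros((n_det, nt))
for i in range(n_det):
    delay = np.linalg.norm(det[i] - src) / c
    sinogram[i] = np.exp(-((t - delay) / 5e-8) ** 2)   # Gaussian pulse

# Delay-and-sum back-projection onto a pixel grid.
x = np.linspace(-0.01, 0.01, 81)
grid = np.stack(np.meshgrid(x, x, indexing='ij'), axis=-1)  # (81, 81, 2)
image = np.zeros((81, 81))
for i in range(n_det):
    d = np.linalg.norm(grid - det[i], axis=-1)
    idx = np.clip(np.round(d / c * fs).astype(int), 0, nt - 1)
    image += sinogram[i][idx]

peak = np.unravel_index(np.argmax(image), image.shape)
print(x[peak[0]], x[peak[1]])   # ≈ (0.004, -0.003), the source position
```

    Full back-projection formulae additionally weight and filter the signals (e.g., using the time derivative) and account for the detector aperture; this sketch only shows the geometric delay-and-sum core.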

  2. Pore Formation and Mobility Investigation video images

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Video images sent to the ground allow scientists to watch the behavior of the bubbles as they control the melting and freezing of the material during the Pore Formation and Mobility Investigation (PFMI) in the Microgravity Science Glovebox aboard the International Space Station. The investigation studies the way that metals behave at the microscopic scale on Earth and how voids form; the experiment uses a transparent material called succinonitrile that behaves like a metal to study this problem. Because the bubbles do not float to the top of the material in microgravity, scientists can study their interactions.

  3. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  4. Reduced acoustic noise in diffusion tensor imaging on a compact MRI system.

    PubMed

    Tan, Ek T; Hardy, Christopher J; Shu, Yunhong; In, Myung-Ho; Guidon, Arnaud; Huston, John; Bernstein, Matt A; K F Foo, Thomas

    2018-06-01

    To investigate the feasibility of substantially reducing acoustic noise while performing diffusion tensor imaging (DTI) on a compact 3T (C3T) MRI scanner equipped with a 42-cm inner-diameter asymmetric gradient. A-weighted acoustic measurements were made using 10 mT/m-amplitude sinusoidal waveforms, corresponding to echo-planar imaging (EPI) echo spacing of 0.25 to 5.0 ms, on a conventional, whole-body 3T MRI and on the C3T. Acoustic measurements of DTI with trapezoidal EPI waveforms were then made at peak gradient performance on the C3T (80 mT/m amplitude, 700 T/m/s slew rate) and at derated performance (33 mT/m, 10 to 50 T/m/s) for acoustic noise reduction. DTI was acquired in two different phantoms and in seven human subjects, with and without gradient-derating corresponding to multi- and single-shot acquisitions, respectively. Sinusoidal waveforms on the C3T were quieter by 8.5 to 15.6 A-weighted decibels (dBA) on average as compared to the whole-body MRI. The derated multishot DTI acquisition noise level was only 8.7 dBA (at 13 T/m/s slew rate) above ambient, and was quieter than non-derated, single-shot DTI by 22.3 dBA; however, the scan time was almost quadrupled. Although derating resulted in negligible diffusivity differences in the phantoms, small biases in diffusivity measurements were observed in human subjects (apparent diffusion coefficient = +9.3 ± 8.8%, fractional anisotropy = +3.2 ± 11.2%, radial diffusivity = +9.4 ± 16.8%, parallel diffusivity = +10.3 ± 8.4%). The feasibility of achieving reduced acoustic noise levels with whole-brain DTI on the C3T MRI was demonstrated. Magn Reson Med 79:2902-2911, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  5. In situ quantitative characterisation of the ocean water column using acoustic multibeam backscatter data

    NASA Astrophysics Data System (ADS)

    Lamarche, G.; Le Gonidec, Y.; Lucieer, V.; Lurton, X.; Greinert, J.; Dupré, S.; Nau, A.; Heffron, E.; Roche, M.; Ladroit, Y.; Urban, P.

    2017-12-01

    Detecting liquid, solid or gaseous features in the ocean is generating considerable interest in the geoscience community, because of their potentially high economic values (oil & gas, mining), their significance for environmental management (oil/gas leakage, biodiversity mapping, greenhouse gas monitoring) as well as their potential cultural and traditional values (food, freshwater). Enhancing people's capability to quantify and manage the natural capital present in the ocean water goes hand in hand with the development of marine acoustic technology, as marine echosounders provide the most reliable and technologically advanced means to develop quantitative studies of water column backscatter data. This capability is not developed to its full potential because of (i) the complexity of the physics involved in relation to the constantly changing marine environment, and (ii) the rapid technological evolution of high-resolution multibeam echosounder (MBES) water-column imaging systems. The Water Column Imaging Working Group is working on a series of MBES water column datasets acquired in a variety of environments, using a range of frequencies, and imaging a number of water-column features such as gas seeps, oil leaks, suspended particulate matter, vegetation and freshwater springs. Access to data from different acoustic frequencies and ocean dynamics enables us to discuss and test multifrequency approaches, which are the most promising means to develop a quantitative analysis of the physical properties of acoustic scatterers, providing rigorous cross-calibration of the acoustic devices. In addition, the high redundancy of multibeam data, such as is available for some datasets, will allow us to develop data processing techniques leading to quantitative estimates of water column gas seeps. Each of the datasets has supporting ground-truthing data (underwater videos and photos, physical oceanography measurements) which provide information on the origin and

  6. Multilocation Video Conference By Optical Fiber

    NASA Astrophysics Data System (ADS)

    Gray, Donald J.

    1982-10-01

    An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.

  7. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. A linear inverse problem is then defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
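    The generalized cross-correlation step named in this abstract is not detailed there; as a minimal sketch of the idea, GCC with PHAT weighting (a common variant, assumed here rather than taken from the paper) recovers the time delay between two microphone signals, which the array then converts into a source map:

    ```python
    import numpy as np

    def gcc_phat(sig, ref, fs):
        """Time delay of `sig` relative to `ref` via generalized cross-correlation
        with PHAT weighting (whitened cross-spectrum keeps phase information only)."""
        n = len(sig) + len(ref)
        R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        R /= np.abs(R) + 1e-12                      # PHAT weighting
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs

    # Broadband noise burst and a copy delayed by 25 samples at fs = 8 kHz
    fs = 8000
    x = np.random.default_rng(0).standard_normal(1024)
    y = np.concatenate((np.zeros(25), x))[:1024]    # y lags x by 25 samples
    print(gcc_phat(y, x, fs))                       # ≈ 0.003125 s (25 samples)
    ```

    Repeating this delay estimate over all microphone pairs and mapping delays to candidate positions yields the initial source map that the sparse inverse problem then sharpens.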

  8. Assessment of Molecular Acoustic Angiography for Combined Microvascular and Molecular Imaging in Preclinical Tumor Models

    PubMed Central

    Lindsey, Brooks D.; Shelton, Sarah E.; Foster, F. Stuart; Dayton, Paul A.

    2017-01-01

    Purpose To evaluate a new ultrasound molecular imaging approach in its ability to image a preclinical tumor model and to investigate the capacity to visualize and quantify co-registered microvascular and molecular imaging volumes. Procedures Molecular imaging using the new technique was compared with a conventional ultrasound molecular imaging technique (multi-pulse imaging) by varying the injected microbubble dose and scanning each animal using both techniques. Each of the 14 animals was randomly assigned one of three doses; bolus dose was varied, and the animals were imaged for three consecutive days so that each animal received every dose. A microvascular scan was also acquired for each animal by administering an infusion of non-targeted microbubbles. These scans were paired with co-registered molecular images (VEGFR2-targeted microbubbles), the vessels were segmented, and the spatial relationships between vessels and VEGFR2 targeting locations were analyzed. In 5 animals, an additional scan was performed in which the animal received a bolus of microbubbles targeted to E- and P-selectin. Vessel tortuosity as a function of distance from VEGF and selectin targeting was analyzed in these animals. Results Although resulting differences in image intensity due to varying microbubble dose were not significant between the two lowest doses, superharmonic imaging had significantly higher contrast-to-tissue ratio (CTR) than multi-pulse imaging (mean across all doses: 13.98 dB for molecular acoustic angiography vs. 0.53 dB for multi-pulse imaging; p = 4.9 × 10^-10). Analysis of registered microvascular and molecular imaging volumes indicated that vessel tortuosity decreases with increasing distance from both VEGFR2 and selectin targeting sites. Conclusions Molecular acoustic angiography (superharmonic molecular imaging) exhibited a significant increase in CTR at all doses tested due to superior rejection of tissue artifact signals. Due to the high resolution of acoustic
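    The contrast-to-tissue ratio reported above is a power ratio expressed in decibels; a minimal sketch of such an estimator follows (the ROI mean-square averaging is an assumption for illustration, not the authors' exact computation):

    ```python
    import numpy as np

    def ctr_db(contrast_roi, tissue_roi):
        """Contrast-to-tissue ratio in dB: ratio of mean-square envelope
        amplitudes in a contrast ROI vs. a tissue ROI (illustrative estimator)."""
        num = np.mean(np.square(contrast_roi))
        den = np.mean(np.square(tissue_roi))
        return 10 * np.log10(num / den)

    # Envelope samples 10x stronger in the contrast region than in tissue
    print(ctr_db([10.0, 10.0], [1.0, 1.0]))         # → 20.0 dB
    ```

    On this scale, the reported 13.98 dB vs. 0.53 dB difference corresponds to roughly a 25-fold advantage in tissue-signal rejection for the superharmonic technique.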

  9. Acoustically modulated magnetic resonance imaging of gas-filled protein nanostructures

    NASA Astrophysics Data System (ADS)

    Lu, George J.; Farhadi, Arash; Szablowski, Jerzy O.; Lee-Gosselin, Audrey; Barnes, Samuel R.; Lakshmanan, Anupama; Bourdeau, Raymond W.; Shapiro, Mikhail G.

    2018-05-01

    Non-invasive biological imaging requires materials capable of interacting with deeply penetrant forms of energy such as magnetic fields and sound waves. Here, we show that gas vesicles (GVs), a unique class of gas-filled protein nanostructures with differential magnetic susceptibility relative to water, can produce robust contrast in magnetic resonance imaging (MRI) at sub-nanomolar concentrations, and that this contrast can be inactivated with ultrasound in situ to enable background-free imaging. We demonstrate this capability in vitro, in cells expressing these nanostructures as genetically encoded reporters, and in three model in vivo scenarios. Genetic variants of GVs, differing in their magnetic or mechanical phenotypes, allow multiplexed imaging using parametric MRI and differential acoustic sensitivity. Additionally, clustering-induced changes in MRI contrast enable the design of dynamic molecular sensors. By coupling the complementary physics of MRI and ultrasound, this nanomaterial gives rise to a distinct modality for molecular imaging with unique advantages and capabilities.

  10. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.M.; Zander, M.E.; Brown, S.K.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  11. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.M.; Zander, M.E.; Brown, S.K.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  12. Breaking the acoustic diffraction limit via nonlinear effect and thermal confinement for potential deep-tissue high-resolution imaging

    PubMed Central

    Yuan, Baohong; Pei, Yanbo; Kandukuri, Jayanth

    2013-01-01

    Our recently developed ultrasound-switchable fluorescence (USF) imaging technique showed that it was feasible to conduct high-resolution fluorescence imaging in a centimeter-deep turbid medium. Because the spatial resolution of this technique depends strongly on the ultrasound-induced temperature focal size (UTFS), minimization of UTFS becomes important for further improving the spatial resolution of the USF technique. In this study, we found that UTFS can be significantly reduced below the diffraction-limited acoustic intensity focal size via nonlinear acoustic effects and thermal confinement by appropriately controlling ultrasound power and exposure time, which can potentially be used for deep-tissue high-resolution imaging. PMID:23479498

  13. Operational prediction of rip currents using numerical model and nearshore bathymetry from video images

    NASA Astrophysics Data System (ADS)

    Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.

    2017-07-01

    Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be utilized to provide forecasts of nearshore waves and currents that may endanger beachgoers. In this paper, an operational model for rip current prediction utilizing nearshore bathymetry obtained from video imagery is demonstrated. For the nearshore-scale model, XBeach [1] is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained using the video-based technique cBathy [2]. The system is tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper tests the applicability of video-derived bathymetry as input to the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground-truth observations. This bathymetry validation is followed by an example of an operational forecast simulation predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.
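    Video-based depth estimation of this kind rests on the linear dispersion relation ω² = g·k·tanh(k·h): wave frequency and wavenumber observed in the imagery constrain the local depth. A sketch of that inversion principle (not the cBathy implementation itself, and with made-up wave numbers):

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def depth_from_dispersion(omega, k):
        """Invert the linear dispersion relation w^2 = g*k*tanh(k*h) for depth h.
        Valid when omega**2 / (g*k) < 1, i.e. the wave feels the bottom."""
        return np.arctanh(omega**2 / (G * k)) / k

    # A 10 s swell (omega = 2*pi/10 rad/s) observed with an 80 m wavelength
    omega = 2 * np.pi / 10.0
    k = 2 * np.pi / 80.0
    print(depth_from_dispersion(omega, k))          # ≈ 7.2 m
    ```

    Repeating this over many image pixels and frequency bands yields the up-to-date bathymetric map that the flow model then uses.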

  14. Teaching acoustics online

    NASA Astrophysics Data System (ADS)

    Morrison, Andrew; Rossing, Thomas D.

    2003-10-01

    We teach an introductory course in musical acoustics online using Blackboard. Students in this course can access audio and video materials as well as printed materials on our course website. All homework is submitted online, as are tests and examinations. The students also have the opportunity to use synchronous and asynchronous chat rooms to discuss the course with each other or with the instructors.

  15. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    PubMed

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  16. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
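    The 2D-PMC scheme above extends the one-dimensional Lempel-Ziv idea of encoding data as references to earlier matches. As a hedged illustration of that underlying 1D mechanism (a toy encoder for exposition, not the paper's approximate 2D algorithm):

    ```python
    def lz77_compress(data: bytes, window: int = 255):
        """Toy LZ77 encoder emitting (offset, length, next_byte) triples -- the
        1D Lempel-Ziv matching idea that 2D-PMC generalizes to image blocks."""
        i, out = 0, []
        while i < len(data):
            best_len, best_off = 0, 0
            for j in range(max(0, i - window), i):
                l = 0
                while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_off = l, i - j
            out.append((best_off, best_len, data[i + best_len]))
            i += best_len + 1
        return out

    def lz77_decompress(triples):
        buf = bytearray()
        for off, length, nxt in triples:
            for _ in range(length):
                buf.append(buf[-off])    # byte-by-byte copy allows overlapping matches
            buf.append(nxt)
        return bytes(buf)

    msg = b"abababababcab"
    assert lz77_decompress(lz77_compress(msg)) == msg
    ```

    The lossy 2D extension replaces exact byte equality with an approximate block match under a maximum distortion level, which is where the compression gains for images come from.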

  17. Wavefront modulation and subwavelength diffractive acoustics with an acoustic metasurface.

    PubMed

    Xie, Yangbo; Wang, Wenqi; Chen, Huanyang; Konneker, Adam; Popa, Bogdan-Ioan; Cummer, Steven A

    2014-11-24

    Metasurfaces are a family of novel wavefront-shaping devices with planar profile and subwavelength thickness. Acoustic metasurfaces with ultralow profile yet extraordinary wave manipulating properties would be highly desirable for improving the performance of many acoustic wave-based applications. However, designing acoustic metasurfaces with similar functionality to their electromagnetic counterparts remains challenging with traditional metamaterial design approaches. Here we present a design and realization of an acoustic metasurface based on tapered labyrinthine metamaterials. The demonstrated metasurface can not only steer an acoustic beam as expected from the generalized Snell's law, but also exhibits various unique properties such as conversion from propagating wave to surface mode, extraordinary beam-steering and apparent negative refraction through higher-order diffraction. Such designer acoustic metasurfaces provide a new design methodology for acoustic signal modulation devices and may be useful for applications such as acoustic imaging, beam steering, ultrasound lens design and acoustic surface wave-based applications.
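    The generalized Snell's law invoked above relates the refracted angle to the phase gradient imparted by the metasurface. In its standard form (quoted here for context; the abstract does not state it) for a wave of wavelength λ crossing a surface with phase profile φ(x):

    ```latex
    \sin\theta_t - \sin\theta_i = \frac{\lambda}{2\pi}\,\frac{\mathrm{d}\phi}{\mathrm{d}x}
    ```

    A linear phase gradient thus steers the transmitted beam, while gradients steep enough that the right-hand side exceeds unity have no propagating solution, coupling the incident wave into surface modes or higher-order diffraction as the abstract describes.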

  18. Measurements of Low-Frequency Acoustic Attenuation in Soils.

    DTIC Science & Technology

    1994-10-13

    Engineering Research Laboratory to design an acoustic subsurface imaging system, a set of experiments was conducted in which the attenuation and the velocity...support of the U.S. Army Construction Engineering Research Laboratory’s efforts to design an acoustic subsurface imaging system which would ideally be...of acoustic waves such as those generated by a subsurface imaging system. An experiment reported in the literature characterized the acoustic

  19. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  20. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  1. Comparison of Inter-Observer Variability and Diagnostic Performance of the Fifth Edition of BI-RADS for Breast Ultrasound of Static versus Video Images.

    PubMed

    Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung

    2016-09-01

    Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for all descriptors, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images were similar to those of static images on breast ultrasonography according to the new edition of BI-RADS. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
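    The κ statistic used in this study measures chance-corrected agreement between raters. A minimal sketch for the two-rater case (the labels below are made-up illustrative data, not the study's):

    ```python
    import numpy as np

    def cohens_kappa(r1, r2):
        """Cohen's kappa for two raters' categorical labels:
        (observed agreement - chance agreement) / (1 - chance agreement)."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        po = np.mean(r1 == r2)                          # observed agreement
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c)    # chance agreement
                 for c in np.union1d(r1, r2))
        return (po - pe) / (1 - pe)

    # Two raters classifying 10 masses as benign (0) / suspicious (1)
    a = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    b = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
    print(round(cohens_kappa(a, b), 3))                 # → 0.6
    ```

    Multi-rater studies such as this one typically use Fleiss' kappa, a generalization of the same idea to more than two observers.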

  2. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This video image is of the STS-2 Columbia, Orbiter Vehicle (OV) 102, payload bay (PLB) showing the Office of Space Terrestrial Applications 1 (OSTA-1) pallet (Shuttle Imaging Radar A (SIR-A) antenna (left) and SIR-A recorder, Shuttle Multispectral Infrared Radiometer (SMIRR), Feature Identification Location Experiment (FILE), Measurement of Air Pollution for Satellites (MAPS) (right)). The image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). It is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  3. Schlieren imaging of the standing wave field in an ultrasonic acoustic levitator

    NASA Astrophysics Data System (ADS)

    Rendon, Pablo Luis; Boullosa, Ricardo R.; Echeverria, Carlos; Porta, David

    2015-11-01

    We consider a model of a single-axis acoustic levitator consisting of two cylinders immersed in air and aligned along the same axis. The first cylinder has a flat termination and functions as a sound emitter; the second cylinder, which is simply a reflector, has the side facing the first cylinder cut out by a spherical surface. By making the first cylinder vibrate at ultrasonic frequencies, a standing wave is produced in the air between the cylinders, which makes it possible, by means of the acoustic radiation pressure, to levitate one or several small objects of different shapes, such as spheres or disks. We use schlieren imaging to observe the acoustic field resulting from the levitation of one or several objects, and compare these results to previous numerical approximations of the field obtained using a finite element method. The authors acknowledge financial support from DGAPA-UNAM through project PAPIIT IN109214.

  4. Interfacial Dynamics of Condensing Vapor Bubbles in an Ultrasonic Acoustic Field

    NASA Astrophysics Data System (ADS)

    Boziuk, Thomas; Smith, Marc; Glezer, Ari

    2016-11-01

    Enhancement of vapor condensation in quiescent subcooled liquid using ultrasonic actuation is investigated experimentally. The vapor bubbles are formed by direct injection from a pressurized steam reservoir through nozzles of varying characteristic diameters, and are advected within an acoustic field of programmable intensity. While kHz-range acoustic actuation typically couples to capillary instability of the vapor-liquid interface, ultrasonic (MHz-range) actuation leads to the formation of a liquid spout that penetrates into the vapor bubble and significantly increases its surface area and therefore its condensation rate. Focusing of the ultrasonic beam along the spout leads to ejection of small-scale droplets that are propelled toward the vapor-liquid interface and result in localized acceleration of the condensation. High-speed video of schlieren images is used to investigate the effects of the ultrasonic actuation on the thermal boundary layer on the liquid side of the vapor-liquid interface and its effect on the condensation rate, and the liquid motion during condensation is investigated using high-magnification PIV measurements. High-speed image processing is used to assess the effect of the actuation on the dynamics and temporal variation in characteristic scale (and condensation rate) of the vapor bubbles.

  5. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking, and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of interior orientation, and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching using the adaptive least-squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least-squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time step and the temporal correspondences between subsequent frames are determined with a least-squares matching algorithm. The result of the tracking process is the coordinates of a point in the three images through the sequence; the 3-D trajectory is then determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points
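    The forward ray intersection step described above can be sketched as a small least-squares problem: find the 3-D point minimizing its summed squared distance to the rays from each camera. The camera geometry below is invented for illustration:

    ```python
    import numpy as np

    def intersect_rays(origins, directions):
        """Least-squares forward ray intersection: the point p minimizing distance
        to rays o_i + t*d_i solves (sum_i P_i) p = sum_i P_i o_i, where
        P_i = I - d_i d_i^T projects onto the plane normal to ray i."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = np.asarray(d, float) / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)
            A += P
            b += P @ np.asarray(o, float)
        return np.linalg.solve(A, b)

    # Hypothetical geometry: three cameras whose rays all pass through (1, 2, 3)
    target = np.array([1.0, 2.0, 3.0])
    origins = [np.zeros(3), np.array([5.0, 0.0, 0.0]), np.array([0.0, 5.0, 1.0])]
    dirs = [target - o for o in origins]
    print(intersect_rays(origins, dirs))            # ≈ [1. 2. 3.]
    ```

    With noisy matched image points the rays no longer meet exactly, and this least-squares solution returns the point closest to all of them.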

  6. Towards non-contact photo-acoustic endoscopy using speckle pattern analysis

    NASA Astrophysics Data System (ADS)

    Lengenfelder, Benjamin; Mehari, Fanuel; Tang, Yuqi; Klämpfl, Florian; Zalevsky, Zeev; Schmidt, Michael

    2017-03-01

    Photoacoustic tomography combines the advantages of optical and acoustic imaging, as it makes use of the high optical contrast of tissue and the high resolution of ultrasound. Furthermore, high penetration depths in tissue, on the order of several centimeters, can be achieved by the combination of these modalities. Extensive research is being done on the miniaturization of photoacoustic devices, as photoacoustic imaging could be of significant benefit to the physician during endoscopic interventions. All existing miniature systems are based on contact transducers for signal detection that are placed at the distal end of an endoscopic device. This makes the manufacturing process difficult and makes impedance matching to the inspected surface a requirement. The requirement for contact also limits the view of the physician during the intervention. Consequently, a fiber-based non-contact optical sensing technique would be highly beneficial for the development of miniaturized photoacoustic endoscopic devices. This work demonstrates the feasibility of surface displacement detection using remote speckle sensing with a high-speed camera and an imaging fiber bundle of the kind used in commercially available video endoscopes. The feasibility of displacement sensing is demonstrated by analysis of phantom vibrations induced by loudspeaker membrane oscillations. Since the usability of remote speckle sensing for photo-acoustic signal detection has already been demonstrated, the fiber bundle approach shows the potential for non-contact photoacoustic detection during endoscopy.

  7. Video-assisted segmentation of speech and audio track

    NASA Astrophysics Data System (ADS)

    Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.

    1999-08-01

    Video database research is commonly concerned with the storage and retrieval of visual information, involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.

  8. Differential phase acoustic microscope for micro-NDE

    NASA Technical Reports Server (NTRS)

    Waters, David D.; Pusateri, T. L.; Huang, S. R.

    1992-01-01

    A differential phase scanning acoustic microscope (DP-SAM) was developed, fabricated, and tested in this project. This includes the acoustic lens and transducers, driving and receiving electronics, scanning stage, scanning software, and display software. The DP-SAM can produce mechanically raster-scanned acoustic microscopic images of differential phase, differential amplitude, or amplitude of the time-gated returned echoes of the samples. The differential phase and differential amplitude images provide better image contrast than conventional amplitude images. A specially designed miniature dual-beam lens was used to form two foci to obtain the differential phase and amplitude information of the echoes. High image resolution (1 micron) was achieved by applying high frequency (around 1 GHz) acoustic signals to the samples and placing the two foci close to each other (1 micron). Tone bursts were used in this system to obtain a good estimate of the phase differences between echoes from the two adjacent foci. The system can also be used to extract the V(z) acoustic signature. Since two acoustic beams and four receiving modes are available, there are 12 possible combinations to produce an image or a V(z) scan. This is a unique feature of this system that none of the existing acoustic microscopic systems can provide for micro-nondestructive evaluation applications. The entire system, including the lens, electronics, and scanning control software, makes a competitive industrial product for nondestructive material inspection and evaluation and has attracted interest from existing acoustic microscope manufacturers.

  9. Temperature-dependent differences in the nonlinear acoustic behavior of ultrasound contrast agents revealed by high-speed imaging and bulk acoustics.

    PubMed

    Mulvana, Helen; Stride, Eleanor; Tang, Mengxing; Hajnal, Jo V; Eckersley, Robert

    2011-09-01

    Previous work by the authors has established that increasing the temperature of the suspending liquid from 20°C to body temperature has a significant impact on the bulk acoustic properties and stability of an ultrasound contrast agent suspension (SonoVue, Bracco Suisse SA, Manno, Lugano, Switzerland). In this paper the influence of temperature on the nonlinear behavior of microbubbles is investigated, because this is one of the most important parameters in the context of diagnostic imaging. High-speed imaging showed that raising the temperature significantly influences the dynamic behavior of individual microbubbles. At body temperature, microbubbles exhibit greater radial excursion and oscillate less spherically, with a greater incidence of jetting and gas expulsion, and therefore collapse, than they do at room temperature. Bulk acoustics revealed an associated increase in the harmonic content of the scattered signals. These findings emphasize the importance of conducting laboratory studies at body temperature if the results are to be interpreted for in vivo applications. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  10. Formulating an image matching strategy for terrestrial stem data collection using a multisensor video system

    Treesearch

    Neil A. Clark

    2001-01-01

    A multisensor video system has been developed incorporating a CCD video camera, a 3-axis magnetometer, and a laser-rangefinding device, for the purpose of measuring individual tree stems. While preliminary results show promise, some changes are needed to improve the accuracy and efficiency of the system. Image matching is needed to improve the accuracy of length...

  11. Video indexing based on image and sound

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed. It should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  12. Acoustic imaging of the Mediterranean water outflowing through the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Biescas Gorriz, Berta; Carniel, Sandro; Sallarès, Valentí; Rodriguez Ranero, Cesar

    2016-04-01

    Acoustic imaging of the Mediterranean water outflowing through the Strait of Gibraltar Berta Biescas (1), Sandro Carniel (2), Valentí Sallarès (3) and Cesar R. Ranero (3) (1) Istituto di Scienze Marine, CNR, Bologna, Italy (2) Istituto di Scienze Marine, CNR, Venice, Italy (3) Institut de Ciències del Mar, CSIC, Barcelona, Spain. Acoustic reflectivity acquired with multichannel seismic reflection (MCS) systems allows the thermohaline structure of the ocean to be detected and explored with vertical and lateral resolutions on the order of 10 m, covering hundreds of kilometers in the lateral dimension and the full-depth water column. In this work we present a 2D MCS profile that crosses the Strait of Gibraltar, from the Alboran Sea to the inner Gulf of Cadiz (NE Atlantic Ocean). The MCS data were acquired during the Topomed-Gassis cruise (European Science Foundation TopoEurope), carried out on board the Spanish R/V Sarmiento de Gamboa in October 2011. The strong thermohaline contrast between the Mediterranean water and the Atlantic water characterizes this area and allows us to visualize, with unprecedented resolution, the acoustic reflectivity associated with the dense flow of Mediterranean water outflowing over the prominent slope of the Strait of Gibraltar. During the first kilometers, the dense flow descends attached to the continental slope until it reaches its buoyancy depth at 700 m. It then detaches from the sea floor and continues flowing towards the Atlantic Ocean, occupying the layer at 700-1500 m depth and developing clear staircase layers. The reflectivity images display near-seabed reflections that could well correspond to turbidity layers. The XBT data acquired coincident in time and space with the MCS data will help in the interpretation and analysis of the acoustic data.

  13. Diffraction of dust acoustic waves by a circular cylinder

    NASA Astrophysics Data System (ADS)

    Kim, S.-H.; Heinrich, J. R.; Merlino, R. L.

    2008-09-01

    The diffraction of dust acoustic (DA) waves around a long dielectric rod is observed using video imaging methods. The DA waves are spontaneously excited in a dusty plasma produced in a direct current glow discharge plasma. The rod acquires a negative charge that produces a coaxial dust void around it. The diameter of the void is the effective size of the "obstacle" encountered by the waves. The wavelength of the DA waves is approximately the size of the void. The observations are considered in relation to the classical problem of the diffraction of sound waves from a circular cylinder, a problem first analyzed by Lord Rayleigh [Theory of Sound, 2nd ed. (MacMillan, London, 1896)].

  14. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
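    The topic-based publish/subscribe routing described for the messaging layer can be illustrated with a minimal in-process sketch. This is not the paper's actual API; `MessageBus` and its method names are hypothetical, and a real messaging layer would add threading and serialization.

    ```python
    from collections import defaultdict

    class MessageBus:
        """Minimal topic-based publish/subscribe router (illustrative only)."""

        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            """Register a callback for messages published to `topic`."""
            self._subscribers[topic].append(callback)

        def publish(self, topic, message):
            # Route the message only to callbacks registered for this topic.
            for callback in self._subscribers[topic]:
                callback(message)
    ```

    A subscriber interested only in, say, acquisition frames registers for that topic and never sees messages published elsewhere, which is the filtering behavior the layer relies on.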

  15. Acoustic Characterization and Enhanced Ultrasound Imaging of Long-Circulating Lipid-Coated Microbubbles.

    PubMed

    Li, Hongbo; Yang, Yanye; Zhang, Meimei; Yin, Liping; Tu, Juan; Guo, Xiasheng; Zhang, Dong

    2018-05-01

    A long-circulating lipid-coated ultrasound (US) contrast agent was fabricated to achieve a longer wash-out time and gain more resistance against higher-mechanical-index sonication. Systematic physical, acoustic, and in vivo imaging experiments were performed to better understand the underlying mechanism enabling the improvement of contrast agent performance by adjusting the physical and acoustic properties of contrast agent microbubbles. By simply altering the gas core, a US contrast agent microbubble was synthesized with a lipid-coating shell similar to that of SonoVue microbubbles (Bracco SpA, Milan, Italy) to achieve a longer wash-out time and a higher inertial cavitation threshold. To bridge the structure-performance relationship of the synthesized microbubbles, their imaging performance was assessed in vivo with SonoVue as a control group. The size distribution and inertial cavitation threshold of the synthesized microbubbles were characterized, and the shell parameters of the microbubbles were determined by acoustic attenuation measurements. All of the measurements were compared with SonoVue microbubbles. The synthesized microbubbles had a spherical shape, a smooth, consistent membrane, and a uniform distribution, with an average diameter of 1.484 μm. According to the measured attenuation curve, the synthesized microbubbles resonated at around 2.8 MHz. Although the bubble's shell elasticity (0.2 ± 0.09 N/m) was comparable with SonoVue, it had relatively greater viscosity and inertial cavitation threshold because of the different gas core. Imaging studies showed that the synthesized microbubbles had a longer circulation time and a better chance of resisting rapid collapse than SonoVue. Nano/micrometer long-circulating lipid-coated microbubbles could be fabricated by simply altering the core composition of SonoVue microbubbles with a higher-molecular-weight gas. The smaller diameter and higher inertial cavitation threshold of the

  16. A multicolor imaging pyrometer

    NASA Technical Reports Server (NTRS)

    Frish, Michael B.; Frank, Jonathan H.

    1989-01-01

    A multicolor imaging pyrometer was designed for accurately and precisely measuring the temperature distribution histories of small moving samples. The device projects six different color images of the sample onto a single charge coupled device array that provides an RS-170 video signal to a computerized frame grabber. The computer automatically selects which one of the six images provides useful data, and converts that information to a temperature map. By measuring the temperature of molten aluminum heated in a kiln, a breadboard version of the device was shown to provide high accuracy in difficult measurement situations. It is expected that this pyrometer will ultimately find application in measuring the temperature of materials undergoing radiant heating in a microgravity acoustic levitation furnace.

  17. A multicolor imaging pyrometer

    NASA Astrophysics Data System (ADS)

    Frish, Michael B.; Frank, Jonathan H.

    1989-06-01

    A multicolor imaging pyrometer was designed for accurately and precisely measuring the temperature distribution histories of small moving samples. The device projects six different color images of the sample onto a single charge coupled device array that provides an RS-170 video signal to a computerized frame grabber. The computer automatically selects which one of the six images provides useful data, and converts that information to a temperature map. By measuring the temperature of molten aluminum heated in a kiln, a breadboard version of the device was shown to provide high accuracy in difficult measurement situations. It is expected that this pyrometer will ultimately find application in measuring the temperature of materials undergoing radiant heating in a microgravity acoustic levitation furnace.

  18. Video as Character: The Use of Video Technology in Theatrical Productions.

    ERIC Educational Resources Information Center

    Trimble, Frank P.

    The use of video images, tempered with good judgment and some restraint, can serve a stage play as opposed to stealing its thunder. An experienced director of university theater productions decided to try to incorporate video images into his production of "Joseph and the Amazing Technicolor Dreamcoat." The production drew from the works…

  19. Bragg Coherent Diffractive Imaging of Zinc Oxide Acoustic Phonons at Picosecond Timescales

    DOE PAGES

    Ulvestad, A.; Cherukara, M. J.; Harder, R.; ...

    2017-08-29

    Mesoscale thermal transport is of fundamental interest and practical importance in materials such as thermoelectrics. Coherent lattice vibrations (acoustic phonons) govern thermal transport in crystalline solids and are affected by the shape, size, and defect density in nanoscale materials. The advent of hard x-ray free electron lasers (XFELs) capable of producing ultrafast x-ray pulses has significantly impacted the understanding of acoustic phonons by enabling their direct study with x-rays. However, previous studies have reported ensemble-averaged results that cannot distinguish the impact of mesoscale heterogeneity on the phonon dynamics. Here we use Bragg coherent diffractive imaging (BCDI) to resolve the 4D evolution of the acoustic phonons in a single zinc oxide rod with a spatial resolution of 50 nm and a temporal resolution of 25 picoseconds. We observe homogeneous (lattice breathing/rotation) and inhomogeneous (shear) acoustic phonon modes, which are compared to finite element simulations. We investigate the possibility of changing phonon dynamics by altering the crystal through acid etching. Lastly, we find that the acid heterogeneously dissolves the crystal volume, which will significantly impact the phonon dynamics. In general, our results represent the first step towards understanding the effect of structural properties at the individual crystal level on phonon dynamics.

  20. Bragg Coherent Diffractive Imaging of Zinc Oxide Acoustic Phonons at Picosecond Timescales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulvestad, A.; Cherukara, M. J.; Harder, R.

    Mesoscale thermal transport is of fundamental interest and practical importance in materials such as thermoelectrics. Coherent lattice vibrations (acoustic phonons) govern thermal transport in crystalline solids and are affected by the shape, size, and defect density in nanoscale materials. The advent of hard x-ray free electron lasers (XFELs) capable of producing ultrafast x-ray pulses has significantly impacted the understanding of acoustic phonons by enabling their direct study with x-rays. However, previous studies have reported ensemble-averaged results that cannot distinguish the impact of mesoscale heterogeneity on the phonon dynamics. Here we use Bragg coherent diffractive imaging (BCDI) to resolve the 4D evolution of the acoustic phonons in a single zinc oxide rod with a spatial resolution of 50 nm and a temporal resolution of 25 picoseconds. We observe homogeneous (lattice breathing/rotation) and inhomogeneous (shear) acoustic phonon modes, which are compared to finite element simulations. We investigate the possibility of changing phonon dynamics by altering the crystal through acid etching. Lastly, we find that the acid heterogeneously dissolves the crystal volume, which will significantly impact the phonon dynamics. In general, our results represent the first step towards understanding the effect of structural properties at the individual crystal level on phonon dynamics.

  1. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  2. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. The system introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organs' image with a synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. Tracking of the face motion in the video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques with the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing into other languages.
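    Template matching of the kind used here for face tracking can be sketched as an exhaustive normalized cross-correlation search: slide the template over the image and keep the position with the highest correlation score. This is an illustrative toy (brute-force, grayscale, no rotation handling), not the paper's tracker.

    ```python
    import numpy as np

    def match_template(image, template):
        """Return (row, col) of the best normalized cross-correlation match."""
        th, tw = template.shape
        t = template - template.mean()
        tn = np.linalg.norm(t)
        best, best_pos = -np.inf, (0, 0)
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                w = image[r:r + th, c:c + tw]
                wz = w - w.mean()
                denom = np.linalg.norm(wz) * tn
                if denom == 0:
                    continue  # flat window: correlation undefined
                score = float((wz * t).sum() / denom)
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos
    ```

    Real systems use FFT-based correlation (or libraries such as OpenCV's `matchTemplate`) for speed, but the scoring principle is the same.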

  3. Video stereo-laparoscopy system

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

    Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS. The image needs to have good resolution and large magnification and, especially, to provide depth cues, while remaining flicker-free at a suitable brightness. A video stereo-laparoscopy system can meet these demands of surgeons. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth space 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and the time-division stereo-display system are described briefly. The system has a focusing image lens that forms an image on the CCD chip; the optical signal is converted into a video signal and, through A/D conversion in the image processing system, into a digital signal, and the polarized image is then displayed on the monitor screen through liquid crystal shutters. Surgeons wearing polarized glasses can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by surgeons. Compared with the traditional 2D video laparoscopy system, it offers merits such as reduced surgery time, fewer surgical complications, and shorter training time.

  4. Detection and display of acoustic window for guiding and training cardiac ultrasound users

    NASA Astrophysics Data System (ADS)

    Huang, Sheng-Wen; Radulescu, Emil; Wang, Shougang; Thiele, Karl; Prater, David; Maxwell, Douglas; Rafter, Patrick; Dupuy, Clement; Drysdale, Jeremy; Erkamp, Ramon

    2014-03-01

    Successful ultrasound data collection strongly relies on the skills of the operator. Among different scans, echocardiography is especially challenging, as the heart is surrounded by ribs and lung tissue. Less experienced users might acquire compromised images because of suboptimal hand-eye coordination and less awareness of artifacts. Clearly, there is a need for a tool that can guide and train less experienced users to position the probe optimally. We propose to help users with hand-eye coordination by displaying lines overlaid on B-mode images. The lines indicate the edges of blockages (e.g., ribs) and are updated in real time according to movement of the probe relative to the blockages. They provide information about how probe positioning can be improved. To distinguish between blockage and acoustic window, we use coherence, an indicator of channel data similarity after applying focusing delays. Specialized beamforming was developed to estimate coherence. Image processing is applied to coherence maps to detect unblocked beams and the angle of the lines for display. We built a demonstrator based on a Philips iE33 scanner, from which beam-summed RF data and video output are transferred to a workstation for processing. The detected lines are overlaid on B-mode images and fed back to the scanner display to provide users with real-time guidance. Using such information in addition to B-mode images, users will be able to quickly find a suitable acoustic window for optimal image quality, and improve their skill.
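    The coherence described here, channel-data similarity after focusing delays, is commonly quantified by the coherence factor CF = |Σᵢ sᵢ|² / (N Σᵢ |sᵢ|²), which is 1 when all channels agree and near 0 for incoherent noise. A minimal sketch follows, assuming delayed channel data in a NumPy array; this is the standard coherence-factor formula, not necessarily the exact metric used in the demonstrator.

    ```python
    import numpy as np

    def coherence_factor(channel_data):
        """Coherence factor of delayed channel data, per time sample.

        channel_data: (n_channels, n_samples) array after focusing delays.
        Returns values in [0, 1]: 1 for identical channels, ~0 for
        incoherent channels (e.g., behind a rib blockage).
        """
        n = channel_data.shape[0]
        num = np.abs(channel_data.sum(axis=0)) ** 2
        den = n * (np.abs(channel_data) ** 2).sum(axis=0)
        # Guard against all-zero samples to avoid division by zero.
        return np.divide(num, den, out=np.zeros_like(num, dtype=float),
                         where=den > 0)
    ```

    Thresholding such a coherence map along the beam angle is one plausible way to separate blocked beams from the open acoustic window.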

  5. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which correlated information available at the receiver is exploited as side information for decoding. Complex processes in video encoding, such as motion estimation, can therefore be moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process consists only of decimating the channel-coded original data. We provide a performance evaluation of a low-density parity-check (LDPC) coding method in the AWGN channel.
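    The LDPC channel coding mentioned above rests on parity checks over GF(2): a word is a valid codeword exactly when its syndrome H·c mod 2 is all-zero, and the decoder uses side information to resolve the bits the encoder decimated. The toy matrix below is purely illustrative, far smaller and less structured than any real LDPC code.

    ```python
    import numpy as np

    # Toy parity-check matrix (illustrative, not from the paper):
    # each row is one parity constraint over GF(2).
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    def syndrome(word):
        """Parity syndrome H . c mod 2; all-zero iff every check passes."""
        return H @ np.asarray(word) % 2
    ```

    A Wyner-Ziv decoder iteratively flips bits of its side-information estimate until the syndrome matches the (decimated) parity data the capsule transmitted.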

  6. The contrast between alveolar and velar stops with typical speech data: acoustic and articulatory analyses.

    PubMed

    Melo, Roberta Michelon; Mota, Helena Bolli; Berti, Larissa Cristina

    2017-06-08

    This study used acoustic and articulatory analyses to characterize the contrast between alveolar and velar stops with typical speech data, comparing the parameters (acoustic and articulatory) of adults and children with typical speech development. The sample consisted of 20 adults and 15 children with typical speech development. The analyzed corpus was organized through five repetitions of each target word (/'kapə/, /'tapə/, /'galo/ and /'daɾə/). These words were inserted into a carrier phrase and the participant was asked to name them spontaneously. Simultaneous audio and video data were recorded (tongue ultrasound images). The data were submitted to acoustic analyses (voice onset time; spectral peak and burst spectral moments; vowel/consonant transition and relative duration measures) and articulatory analyses (proportion of significant axes of the anterior and posterior tongue regions and description of tongue curves). Acoustic and articulatory parameters were effective in indicating the contrast between alveolar and velar stops, mainly in the adult group. Both speech analyses showed statistically significant differences between the two groups. The acoustic and articulatory parameters provided cues to characterize the phonic contrast of speech. One of the main findings in the comparison between adult and child speech was evidence of articulatory refinement/maturation even after the period of segment acquisition.

  7. 13 point video tape quality guidelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaunt, R.

    1997-05-01

    Until high-definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.

  8. Vocal fold vibrations: high-speed imaging, kymography, and acoustic analysis: a preliminary report.

    PubMed

    Larsson, H; Hertegård, S; Lindestad, P A; Hammarberg, B

    2000-12-01

    To evaluate a new analysis system, High-Speed Tool Box (H. Larsson, custom-made program for image analysis, version 1.1, Department of Logopedics and Phoniatrics, Huddinge University Hospital, Huddinge, Sweden, 1998) for studying vocal fold vibrations using a high-speed camera and to relate findings from these analyses to sound characteristics. A Weinberger Speedcam + 500 system (Weinberger AG, Dietikon, Switzerland) was used with a frame rate of 1,904 frames per second. Images were stored and analyzed digitally. Analysis included automatic glottal edge detection and calculation of glottal area variations, as well as kymography. These signals were compared with acoustic waveforms using the Soundswell program (Hitech Development AB, Stockholm, Sweden). The High-Speed Tool Box was applied on two types of high-speed recordings: a diplophonic phonation and a tremor voice. Relations between glottal vibratory patterns and the sound waveform were analyzed. In the diplophonic phonation, the glottal area waveform, as well as the kymogram, showed a specific pattern of repetitive glottal closures, which was also seen in the acoustic waveform. In the tremor voice, fundamental frequency (F0) fluctuations in the acoustic waveform were reflected in slow variations in amplitude in the glottal area waveform. For studying details of mucosal movements during these kinds of abnormal vibrations, the glottal area waveform was particularly useful. Our results suggest that this combined high-speed acoustic-kymographic analysis package is a promising aid for separating and specifying different voice qualities such as diplophonia and voice tremor. Apart from clinical use, this finding should be of help for specification of the terminology of different voice qualities.

  9. Bond-selective photoacoustic imaging by converting molecular vibration into acoustic waves

    PubMed Central

    Hui, Jie; Li, Rui; Phillips, Evan H.; Goergen, Craig J.; Sturek, Michael; Cheng, Ji-Xin

    2016-01-01

    The quantized vibration of chemical bonds provides a way of detecting specific molecules in a complex tissue environment. Unlike pure optical methods, for which imaging depth is limited to a few hundred micrometers by significant optical scattering, photoacoustic detection of vibrational absorption breaks through the optical diffusion limit by taking advantage of diffused photons and weak acoustic scattering. Key features of this method include both high scalability of imaging depth from a few millimeters to a few centimeters and chemical bond selectivity as a novel contrast mechanism for photoacoustic imaging. Its biomedical applications span detection of white matter loss and regeneration, assessment of breast tumor margins, and diagnosis of vulnerable atherosclerotic plaques. This review provides an overview of the recent advances made in vibration-based photoacoustic imaging and various biomedical applications enabled by this new technology. PMID:27069873

  10. Images of sexual stereotypes in rap videos and the health of African American female adolescents.

    PubMed

    Peterson, Shani H; Wingood, Gina M; DiClemente, Ralph J; Harrington, Kathy; Davies, Susan

    2007-10-01

    This study sought to determine whether perceiving portrayals of sexual stereotypes in rap music videos was associated with adverse health outcomes among African American adolescent females. African American female adolescents (n = 522) were recruited from community venues. Adolescents completed a survey consisting of questions on sociodemographic characteristics, rap music video viewing habits, and a scale that assessed the primary predictor variable, portrayal of sexual stereotypes in rap music videos. Adolescents also completed an interview that assessed the health outcomes and provided urine for a marijuana screen. In logistic regression analyses, adolescents who perceived more portrayals of sexual stereotypes in rap music videos were more likely to engage in binge drinking (OR 3.8, 95% CI 1.32-11.04, p = 0.01), test positive for marijuana (OR 3.4, 95% CI 1.19-9.85, p = 0.02), have multiple sexual partners (OR 1.9, 95% CI 1.01-3.71, p = 0.04), and have a negative body image (OR 1.5, 95% CI 1.02-2.26, p = 0.04). This is one of the first studies quantitatively examining the relationship between cultural images of sexual stereotypes in rap music videos and a spectrum of adverse health outcomes in African American female adolescents. Greater attention to this social issue may improve the health of all adolescent females.
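    The odds ratios and confidence intervals reported above can be illustrated in form (not in value) from a 2×2 contingency table. The counts below are hypothetical, since the abstract does not publish raw data, and the Wald interval shown is the standard unadjusted approximation rather than the study's covariate-adjusted logistic regression:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf formula
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, NOT the study's data:
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
print(f"OR {or_:.1f}, 95% CI {lo:.2f}-{hi:.2f}")
```

    The study's reported ORs come from multivariable logistic regression, which adjusts for covariates; the table-based OR above only illustrates the quantity being estimated.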

  11. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels

    PubMed Central

    2014-01-01

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to the tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape of the tongue and its position are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand some kind of visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters based on acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects has been performed. As the acoustic speech and articulatory parameters of uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production. PMID:25060583

  12. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels.

    PubMed

    Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin

    2014-07-25

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to the tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape of the tongue and its position are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand some kind of visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters based on acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects has been performed. As the acoustic speech and articulatory parameters of uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.

  13. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  14. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  15. Diffusion tensor imaging reveals changes in the adult rat brain following long-term and passive moderate acoustic exposure.

    PubMed

    Abdoli, Sherwin; Ho, Leon C; Zhang, Jevin W; Dong, Celia M; Lau, Condon; Wu, Ed X

    2016-12-01

    This study investigated neuroanatomical changes following long-term acoustic exposure at moderate sound pressure level (SPL) under passive conditions, without coupled behavioral training. The authors utilized diffusion tensor imaging (DTI) to detect morphological changes in white matter. DTIs from adult rats (n = 8) exposed to continuous acoustic exposure at moderate SPL for 2 months were compared with DTIs from rats (n = 8) reared under standard acoustic conditions. Two distinct forms of DTI analysis were applied in a sequential manner. First, DTI images were analyzed using voxel-based statistics, which revealed greater fractional anisotropy (FA) of the pyramidal tract and decreased FA of the tectospinal tract and trigeminothalamic tract of the exposed rats. Region of interest analysis confirmed (p < 0.05) that FA had increased in the pyramidal tract but did not show a statistically significant difference in the FA of the tectospinal or trigeminothalamic tract. The authors' results show that long-term and passive acoustic exposure at moderate SPL increases the organization of white matter in the pyramidal tract.

  16. Video Guidance, Landing, and Imaging system (VGLIS) for space missions

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Flemming, J. C.

    1975-01-01

    The feasibility of an autonomous video guidance system that is capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was demonstrated. The system was breadboarded and "flown" on a physical simulator consisting of a control panel and monitor, a dynamic simulator, and a PDP-9 computer. The breadboard VGLIS consisted of an image dissector camera and the appropriate processing logic. Results are reported.

  17. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing, and retrieving video content. In this paper, a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe within a shot. Experimental results show that the framework is effective and performs well.
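    The shot-boundary step described above can be illustrated with a generic histogram-difference detector. This sketch uses plain grayscale histograms and an assumed threshold, rather than the paper's illumination-invariant chromaticity histograms in the ICA feature space:

```python
import numpy as np

def hist_diff_boundaries(frames, bins=16, thresh=0.4):
    """Flag shot boundaries where the normalized histogram
    distance between consecutive frames exceeds `thresh`.
    `frames` is a list of 2-D grayscale arrays with values in [0, 1]."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0.0, 1.0))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        # Total-variation distance between histograms, in [0, 1]
        d = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()
        if d > thresh:
            cuts.append(i)
    return cuts

# Two dark frames followed by two bright frames -> one cut at index 2
rng = np.random.default_rng(0)
frames = [rng.uniform(0.0, 0.3, (32, 32)) for _ in range(2)] + \
         [rng.uniform(0.7, 1.0, (32, 32)) for _ in range(2)]
print(hist_diff_boundaries(frames))  # expected: [2]
```

    Keyframe selection within each detected shot would then pick, for example, the frame maximizing the chosen complexity metric; that step is omitted here.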

  18. Air-coupled acoustic thermography for in-situ evaluation

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N. (Inventor); Winfree, William P. (Inventor); Yost, William T. (Inventor)

    2010-01-01

    Acoustic thermography uses a housing configured for thermal, acoustic and infrared radiation shielding. For in-situ applications, the housing has an open side adapted to be sealingly coupled to a surface region of a structure such that an enclosed chamber filled with air is defined. One or more acoustic sources are positioned to direct acoustic waves through the air in the enclosed chamber and towards the surface region. To activate and control each acoustic source, a pulsed signal is applied thereto. An infrared imager focused on the surface region detects a thermal image of the surface region. A data capture device records the thermal image in synchronicity with each pulse of the pulsed signal such that a time series of thermal images is generated. For enhanced sensitivity and/or repeatability, sound and/or vibrations at the surface region can be used in feedback control of the pulsed signal applied to the acoustic sources.

  19. Astronomical Video Suites

    NASA Astrophysics Data System (ADS)

    Francisco Salgado, Jose

    2010-01-01

    Astronomer and visual artist Jose Francisco Salgado has directed two astronomical video suites to accompany live performances of classical music works. The suites feature awe-inspiring images, historical illustrations, and visualizations produced by NASA, ESA, and the Adler Planetarium. By the end of 2009, his video suites Gustav Holst's The Planets and Astronomical Pictures at an Exhibition will have been presented more than 40 times in over 10 countries. Lately Salgado, an avid photographer, has been experimenting with high dynamic range imaging, time-lapse, infrared, and fisheye photography, as well as with stereoscopic photography and video to enhance his multimedia works.

  20. NASA's Myriad Uses of Digital Video

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt; George, Sandy

    1999-01-01

    Since its inception, NASA has created many of the most memorable images seen this century. From the fuzzy video of Neil Armstrong taking that first step on the moon, to images of the Mars surface available to all on the internet, NASA has provided images to inspire a generation, all because a scientist or researcher had a requirement to see something unusual. Digital Television technology will give NASA unprecedented new tools for acquiring, analyzing, and distributing video. This paper will explore NASA's DTV future. The agency has a requirement to move video from one NASA Center to another, in real time. Specifics will be provided relating to the NASA video infrastructure, including video from the Space Shuttle and from the various Centers. A comparison of the pros and cons of interlace and progressive scanned images will be presented. Film is a major component of NASA's image acquisition for analysis usage. The future of film within the context of DTV will be explored.

  1. A comparison of traffic estimates of nocturnal flying animals using radar, thermal imaging, and acoustic recording.

    PubMed

    Horton, Kyle G; Shriver, W Gregory; Buler, Jeffrey J

    2015-03-01

    There are several remote-sensing tools readily available for the study of nocturnally flying animals (e.g., migrating birds), each possessing unique measurement biases. We used three tools (weather surveillance radar, thermal infrared camera, and acoustic recorder) to measure temporal and spatial patterns of nocturnal traffic estimates of flying animals during the spring and fall of 2011 and 2012 in Lewes, Delaware, USA. Our objective was to compare measures among different technologies to better understand their animal detection biases. For radar and thermal imaging, the greatest observed traffic rate tended to occur at, or shortly after, evening twilight, whereas for the acoustic recorder, peak bird flight-calling activity was observed just prior to morning twilight. Comparing traffic rates during the night for all seasons, we found that mean nightly correlations between acoustics and the other two tools were weak (thermal infrared camera and acoustics, r = 0.004 ± 0.04 SE, n = 100 nights; radar and acoustics, r = 0.14 ± 0.04 SE, n = 101 nights), but highly variable on an individual nightly basis (range = -0.84 to 0.92, range = -0.73 to 0.94). The mean nightly correlations between traffic rates estimated by radar and by thermal infrared camera during the night were more strongly positive (r = 0.39 ± 0.04 SE, n = 125 nights), but also were highly variable for individual nights (range = -0.76 to 0.98). Through comparison with radar data among numerous height intervals, we determined that flying animal height above the ground influenced thermal imaging positively and flight call detections negatively. Moreover, thermal imaging detections decreased with the presence of cloud cover and increased with mean ground flight speed of animals, whereas acoustic detections showed no relationship with cloud cover presence but did decrease with increased flight speed. We found sampling methods to be positively correlated when comparing mean nightly

  2. Retrospective comparison of measured stone size and posterior acoustic shadow width in clinical ultrasound images.

    PubMed

    Dai, Jessica C; Dunmire, Barbrina; Sternberg, Kevan M; Liu, Ziyue; Larson, Troy; Thiel, Jeff; Chang, Helena C; Harper, Jonathan D; Bailey, Michael R; Sorensen, Mathew D

    2018-05-01

    Posterior acoustic shadow width has been proposed as a more accurate measure of kidney stone size compared to direct measurement of stone width on ultrasound (US). Published data in humans to date have been based on a research US system. Herein, we compared these two measurements in clinical US images. Thirty patient image sets where computed tomography (CT) and US images were captured less than 1 day apart were retrospectively reviewed. Five blinded reviewers independently assessed the largest stone in each image set for shadow presence and size. Shadow size was compared to US and CT stone sizes. Eighty percent of included stones demonstrated an acoustic shadow; 83% of stones without a shadow were ≤ 5 mm on CT. Average stone size was 6.5 ± 4.0 mm on CT, 10.3 ± 4.1 mm on US, and 7.5 ± 4.2 mm by shadow width. On average, US overestimated stone size by 3.8 ± 2.4 mm based on stone width (p < 0.001) and 1.0 ± 1.4 mm based on shadow width (p < 0.0098). Shadow measurements decreased misclassification of stones by 25% among three clinically relevant size categories (≤ 5, 5.1-10, > 10 mm), and by 50% for stones ≤ 5 mm. US overestimates stone size compared to CT. Retrospective measurement of the acoustic shadow from the same clinical US images is a more accurate reflection of true stone size than direct stone measurement. Most stones without a posterior shadow are ≤ 5 mm.
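    The size-category misclassification comparison above can be sketched as follows. The measurements below are hypothetical, chosen only to mimic the reported mean overestimates (about 3.8 mm for direct stone width, about 1.0 mm for shadow width); they are not the study's data:

```python
def size_category(mm):
    """Clinically relevant stone size categories from the abstract."""
    if mm <= 5.0:
        return "<=5 mm"
    elif mm <= 10.0:
        return "5.1-10 mm"
    return ">10 mm"

def misclassification_rate(ct_sizes, us_sizes):
    """Fraction of stones whose US-based category differs from the
    CT (reference) category."""
    wrong = sum(size_category(ct) != size_category(us)
                for ct, us in zip(ct_sizes, us_sizes))
    return wrong / len(ct_sizes)

# Hypothetical measurements in mm (CT reference, direct US width,
# and shadow width), NOT the study's data:
ct      = [3.0, 4.5, 6.0, 8.0, 12.0]
us_wide = [6.8, 8.3, 9.8, 11.8, 15.8]   # ~CT + 3.8 mm
shadow  = [4.0, 5.5, 7.0, 9.0, 13.0]    # ~CT + 1.0 mm
print(misclassification_rate(ct, us_wide),
      misclassification_rate(ct, shadow))
```

    With these toy numbers the direct-width measurements misclassify more stones than the shadow-width measurements, mirroring the direction (though not the magnitude) of the study's finding.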

  3. Qualitative and quantitative assessment of video transmitted by DVTS (digital video transport system) in surgical telemedicine.

    PubMed

    Shima, Yoichiro; Suwa, Akina; Gomi, Yuichiro; Nogawa, Hiroki; Nagata, Hiroshi; Tanaka, Hiroshi

    2007-01-01

    Real-time video pictures can be transmitted inexpensively via a broadband connection using the DVTS (digital video transport system). However, the degradation of video pictures transmitted by DVTS has not been sufficiently evaluated. We examined the application of DVTS to remote consultation by using images of laparoscopic and endoscopic surgeries. A subjective assessment by the double stimulus continuous quality scale (DSCQS) method of the transmitted video pictures was carried out by eight doctors. Three of the four video recordings were assessed as being transmitted with no degradation in quality. None of the doctors noticed any degradation in the images due to encryption by the VPN (virtual private network) system. We also used an automatic picture quality assessment system to make an objective assessment of the same images. The objective DSCQS values were similar to the subjective ones. We conclude that although the quality of video pictures transmitted by the DVTS was slightly reduced, they were useful for clinical purposes. Encryption with a VPN did not degrade image quality.

  4. Flow fields and acoustics in a unilateral scarred vocal fold model.

    PubMed

    Murugappan, Shanmugam; Khosla, Sid; Casper, Keith; Oren, Liran; Gutmark, Ephraim

    2009-01-01

    From prior work in an excised canine larynx model, it has been shown that intraglottal vortices form between the vocal folds during the latter part of closing. It has also been shown that the vortices generate a negative pressure between the folds, producing a suction force that causes sudden, rapid closing of the folds. This rapid closing will produce increased loudness and increased higher harmonics. We used a unilateral scarred excised canine larynx model to determine whether the intraglottal vortices and resulting acoustics were changed, compared to those of normal larynges. Acoustic, flow field, and high-speed imaging measurements from 5 normal and 5 unilaterally scarred canine larynges are presented in this report. Scarring was produced by complete resection of the vocal fold mucosa and superficial layer of the lamina propria on the right vocal fold only. Two months later, these dogs were painlessly sacrificed, and testing was done on the excised larynges during phonation. High-speed video imaging was then used to measure vocal fold displacement during different phases. Particle image velocimetry and acoustic measurements were used to describe possible acoustic effects of the vortices. A higher phonation threshold was required to excite the motion of the vocal fold in scarred larynges. As the subglottal pressure increased, the strength of the vortices and the higher harmonics both consistently increased. However, it was seen that increasing the maximum displacement of the scarred fold did not consistently increase the higher harmonics. The improvements that result from increasing subglottal pressure may be due to a combination of increasing the strength of the intraglottal vortices and increasing the maximum displacement of the vocal fold; however, the data in this study suggest that the vortices play a much more important role. The current study indicates that higher subglottal pressures may excite higher harmonics and improve loudness for patients with

  5. Clinical feasibility study of combined opto-acoustic and ultrasonic imaging modality providing coregistered functional and anatomical maps of breast tumors

    NASA Astrophysics Data System (ADS)

    Zalev, Jason; Clingman, Bryan; Smith, Remie J.; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Ermilov, Sergey; Conjusteau, André; Tsyboulski, Dmitri; Oraevsky, Alexander A.; Kist, Kenneth; Dornbluth, N. C.; Otto, Pamela

    2013-03-01

    We report on findings from the clinical feasibility study of the Imagio™ Breast Imaging System, which acquires two-dimensional opto-acoustic (OA) images co-registered with conventional ultrasound using a specialized duplex hand-held probe. Dual-wavelength opto-acoustic technology is used to generate parametric maps based upon total hemoglobin and its oxygen saturation in breast tissues. This may provide functional diagnostic information pertaining to tumor metabolism and microvasculature, which is complementary to morphological information obtained with conventional gray-scale ultrasound. We present co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical feasibility study. The clinical results illustrate that the technology may have the capability to improve the efficacy of breast tumor diagnosis. In doing so, it may have the potential to reduce biopsies and to characterize cancers that were not seen well with conventional gray-scale ultrasound alone.

  6. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, in contrast to the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; then Euclidean metrics are used to measure the distance between still and video images. Identities of still and video images that group as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  7. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  8. Immediate early gene expression following exposure to acoustic and visual components of courtship in zebra finches.

    PubMed

    Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A

    2005-12-07

    Sensory-driven immediate early gene (IEG) expression has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of Zenk response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of Zenk response that was independent of sex, brain region, or treatment condition, such that Zenk immunoreactivity was consistently higher in the left hemisphere than in the right and the majority of individual birds were left-hemisphere dominant.

  9. A spatiotemporal decomposition strategy for personal home video management

    NASA Astrophysics Data System (ADS)

    Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole

    2007-01-01

    With the advent and proliferation of low-cost and high-performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we have developed a content-based image retrieval system and the benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve the retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give us better representation for video content at semantic object and concept levels than image-only based representation. In this paper we propose a bottom-up framework to combine interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatiotemporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.

  10. Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images

    NASA Astrophysics Data System (ADS)

    Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.

    1982-11-01

    This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. 
This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a

  11. The design of L1-norm visco-acoustic wavefield extrapolators

    NASA Astrophysics Data System (ADS)

    Salam, Syed Abdul; Mousa, Wail A.

    2018-04-01

    Explicit depth frequency-space (f - x) prestack imaging is an attractive mechanism for seismic imaging. To date, this method has mainly been applied to migrating data under the assumption of an acoustic medium; very little work has assumed visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f - x extrapolators. To demonstrate the accuracy and compensation capability of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation results in practically stable images with improved resolution.
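The abstract does not state which solver the authors use, but the flavour of L1-norm design can be sketched with iteratively reweighted least squares (IRLS), a standard way to approximate an L1 fit by a sequence of weighted L2 problems. Everything below (the toy line-fit data, `irls_l1`, its parameters) is illustrative and not taken from the paper:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    """Approximate argmin_x ||A x - b||_1 by iteratively reweighted least
    squares: each pass solves a row-weighted L2 problem whose weights
    satisfy w_i^2 ~ 1/|r_i|, so the squared loss behaves like sum |r_i|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # plain L2 starting point
    for _ in range(iters):
        r = A @ x - b
        w = np.power(r**2 + eps, -0.25)           # w^2 = 1/sqrt(r^2+eps) ~ 1/|r|
        x = np.linalg.lstsq(A * w[:, None], w * b, rcond=None)[0]
    return x

# Toy example: fit a line to clean data containing one gross outlier.
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[10] += 25.0                                     # outlier that skews an L2 fit
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
x_l1 = irls_l1(A, b)
```

The L1 solution essentially ignores the outlier and recovers slope 2 and intercept 1, while the L2 fit is pulled visibly off; that robustness to large, sparse errors is what makes L1 criteria attractive for operator design.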

  12. Direct imaging of delayed magneto-dynamic modes induced by surface acoustic waves.

    PubMed

    Foerster, Michael; Macià, Ferran; Statuto, Nahuel; Finizio, Simone; Hernández-Mínguez, Alberto; Lendínez, Sergi; Santos, Paulo V; Fontcuberta, Josep; Hernàndez, Joan Manel; Kläui, Mathias; Aballe, Lucia

    2017-09-01

    The magnetoelastic effect, the change of magnetic properties caused by the elastic deformation of a magnetic material, has been proposed as an alternative approach to magnetic fields for the low-power control of magnetization states of nanoelements, since it avoids charge currents, which entail ohmic losses. Here, we have studied the effect of dynamic strain accompanying a surface acoustic wave on magnetic nanostructures in thermal equilibrium. We have developed an experimental technique based on stroboscopic X-ray microscopy that provides a pathway to the quantitative study of strain waves and magnetization at the nanoscale. We have simultaneously imaged the evolution of both strain and magnetization dynamics of nanostructures at the picosecond time scale and found that magnetization modes have a delayed response to the strain modes, adjustable by the magnetic domain configuration. Our results provide fundamental insight into magnetoelastic coupling in nanostructures and have implications for the design of strain-controlled magnetostrictive nano-devices. Understanding the effects of local dynamic strain on magnetization may help the development of magnetic devices. Foerster et al. demonstrate stroboscopic imaging that allows the observation of both strain and magnetization dynamics in nickel when surface acoustic waves are driven in the substrate.

  13. Underwater Communications for Video Surveillance Systems at 2.4 GHz

    PubMed Central

    Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J.P.C.

    2016-01-01

    Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem since the material specially designed for underwater environments is very expensive. To transmit images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates but only over very small distances, because light dispersion in water severely penalizes the transmitted signal; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. Where the distance between transmitter and receiver is short, EM waves are an interesting option since they provide data transfer rates high enough to transmit video at high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves’ behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate, and temperature offer better results than others. Our work shows that communication over short distances at high data transfer rates is feasible. PMID:27782095

  14. Underwater Communications for Video Surveillance Systems at 2.4 GHz.

    PubMed

    Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J P C

    2016-10-23

    Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem since the material specially designed for underwater environments is very expensive. To transmit images and videos wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates but only over very small distances, because light dispersion in water severely penalizes the transmitted signal; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. Where the distance between transmitter and receiver is short, EM waves are an interesting option since they provide data transfer rates high enough to transmit video at high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round trip time (RTT) value depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate, and temperature offer better results than others. Our work shows that communication over short distances at high data transfer rates is feasible.

  15. Magneto acoustic tomography with short pulsed magnetic field for in-vivo imaging of magnetic iron oxide nanoparticles.

    PubMed

    Mariappan, Leo; Shao, Qi; Jiang, Chunlan; Yu, Kai; Ashkenazi, Shai; Bischof, John C; He, Bin

    2016-04-01

    Nanoparticles are widely used as contrast and therapeutic agents. As such, imaging modalities that can accurately estimate their distribution in-vivo are actively sought. We present here our method, Magneto Acoustic Tomography (MAT), which uses the magnetomotive force due to a short pulsed magnetic field to induce ultrasound in magnetic-nanoparticle-labeled tissue and estimates an image of the distribution of the nanoparticles in-vivo with ultrasound imaging resolution. In this study, we image the distribution of superparamagnetic iron oxide nanoparticles (IONP) using the MAT method. In-vivo imaging was performed on live, nude mice with IONP injected into LNCaP tumors grown subcutaneously within the hind limb of the mice. Our experimental results indicate that the MAT method is capable of imaging the distribution of IONPs in-vivo. Therefore, MAT could become an imaging modality for high resolution reconstruction of MNP distribution in the body. Many magnetic nanoparticles (MNPs) have been used as contrast agents in magnetic resonance imaging. In this study, the authors investigated the use of ultrasound to detect the presence of MNPs by magneto acoustic tomography. In-vivo experiments confirmed the imaging quality of this new approach, which hopefully will provide an alternative method for accurate tumor detection. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Translational-circular scanning for magneto-acoustic tomography with current injection.

    PubMed

    Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-01-27

    Magneto-acoustic tomography with current injection involves using electrical impedance imaging technology. To explore the potential applications in imaging biological tissue and enhance image quality, a new scan mode for the transducer is proposed that is based on translational and circular scanning to record acoustic signals from sources. An imaging algorithm to analyze these signals is developed in respect to this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of this scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image based on the signals. In simulations, the acoustic-source image is correlated with the conductivity at the boundaries of the sample, but the image results change depending on the distance and angular aspect of the transducer. In general, as angle and distance decrease, the image quality improves. Moreover, experimental data confirmed the correlation. The acoustic-source images resulting from the alternative scanning mode have yielded the outline of a phantom medium. This scan mode enables improvements in the sensitivity of the detecting unit, and a change to a transducer array would further improve the efficiency and accuracy of acoustic-source imaging.

  17. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic, factors “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, particularly in infants, and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  18. Image quality assessment for video stream recognition systems

    NASA Astrophysics Data System (ADS)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.
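One simple realization of the frame-selection idea above is to score each incoming frame with a no-reference sharpness measure and pass only the best frames to recognition. The Laplacian-variance score and the checkerboard demo below are illustrative choices, not the quality metric used by the authors:

```python
import numpy as np

def sharpness(frame):
    """Variance of a 4-neighbour Laplacian: a cheap no-reference focus score.
    Blurred frames have weak second derivatives, hence low variance."""
    f = frame.astype(float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()

def best_frames(frames, k=1):
    """Return the indices of the k sharpest frames, e.g. to gate recognition."""
    scores = [sharpness(f) for f in frames]
    return sorted(np.argsort(scores)[-k:].tolist())

# Demo: a sharp checkerboard versus a locally averaged (blurred) copy of it.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
blurred = (sharp[:-1, :-1] + sharp[1:, :-1] + sharp[:-1, 1:] + sharp[1:, 1:]) / 4
```

Here `best_frames([blurred, sharp])` picks the checkerboard, since the 2 x 2 averaging flattens the pattern and drives its Laplacian variance to zero.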

  19. Coupled High Speed Imaging and Seismo-Acoustic Recordings of Strombolian Explosions at Etna, July 2014: Implications for Source Processes and Signal Inversions.

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Del Bello, E.; Scarlato, P.; Ricci, T.; Andronico, D.; Kueppers, U.; Cannata, A.; Sesterhenn, J.; Spina, L.

    2015-12-01

    Seismic and acoustic surveillance is routinely performed at several persistent activity volcanoes worldwide. However, interpretation of the signals associated with explosive activity is still equivocal, due to both source variability and the intrinsically limited information carried by the waves. Comparison and cross-correlation of the geophysical quantities with other information in general and visual recording in particular is therefore actively sought. At Etna (Italy) in July 2014, short-lived Strombolian explosions ejected bomb- to lapilli-sized, molten pyroclasts at a remarkably repeatable time interval of about two seconds, offering a rare occasion to systematically investigate the seismic and acoustic fields radiated by this common volcanic source. We deployed FAMoUS (FAst, MUltiparametric Setup for the study of explosive activity) at 260 meters from the vents, recording more than 60 explosions in thermal and visible high-speed videos (50 to 500 frames per second) and broadband seismic and acoustic instruments (1 to 10000 Hz for the acoustic and from 0.01 to 30 Hz for the seismic). Analysis of this dataset highlights nonlinear relationships between the exit velocity and mass of ejecta and the amplitude and frequency of the acoustic signals. It also allows comparing different methods to estimate source depth, and to validate existing theory on the coupling of airwaves with ground motion.

  20. Acoustic Sensing and Ultrasonic Drug Delivery in Multimodal Theranostic Capsule Endoscopy

    PubMed Central

    Stewart, Fraser R.; Qiu, Yongqiang; Newton, Ian P.; Cox, Benjamin F.; Al-Rawhani, Mohammed A.; Beeley, James; Liu, Yangminghao; Huang, Zhihong; Cumming, David R. S.; Näthke, Inke

    2017-01-01

    Video capsule endoscopy (VCE) is now a clinically accepted diagnostic modality in which miniaturized technology, an on-board power supply and wireless telemetry stand as technological foundations for other capsule endoscopy (CE) devices. However, VCE does not provide therapeutic functionality, and research towards therapeutic CE (TCE) has been limited. In this paper, a route towards viable TCE is proposed, based on multiple CE devices including important acoustic sensing and drug delivery components. In this approach, an initial multimodal diagnostic device with high-frequency quantitative microultrasound that complements video imaging allows surface and subsurface visualization and computer-assisted diagnosis. Using focused ultrasound (US) to mark sites of pathology with exogenous fluorescent agents permits follow-up with another device to provide therapy. This is based on an US-mediated targeted drug delivery system with fluorescence imaging guidance. An additional device may then be utilized for treatment verification and monitoring, exploiting the minimally invasive nature of CE. While such a theranostic patient pathway for gastrointestinal treatment is presently incomplete, the description in this paper of previous research and work under way to realize further components for the proposed pathway suggests it is feasible and provides a framework around which to structure further work. PMID:28671642

  1. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. To better understand the strengths and weaknesses of outlier algorithms and their applicability to a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. To construct these metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias into the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these schemes by measuring the performance of an existing anomaly detection algorithm.
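As a concrete illustration of the metrics discussed above (not code from the paper), the ROC AUC and a precision/recall operating point can be computed directly from their definitions once each frame has an anomaly score and a ground-truth label; the scores, labels, and threshold below are made up:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank statistic:
    AUC = P(score of a random anomaly > score of a random normal)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count pairwise wins of anomalies over normals; ties count half.
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

def precision_recall_at(scores, labels, thresh):
    """Precision and recall when detections are all scores >= thresh."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    det = scores >= thresh
    tp = (det & (labels == 1)).sum()
    precision = tp / max(det.sum(), 1)
    recall = tp / max((labels == 1).sum(), 1)
    return precision, recall

auc = roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])   # perfect ranking
p, r = precision_recall_at([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0], 0.5)
```

Sweeping `thresh` traces out the precision-recall curve; the AUC, by contrast, summarizes the ranking of scores independently of any threshold, which is one reason the two metrics can rank algorithms differently.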

  2. Imaging of acoustic waves induced by excimer laser ablation of the cornea

    NASA Astrophysics Data System (ADS)

    Rossi, Francesca; Pini, Roberto; Siano, Salvatore; Salimbeni, Renzo

    1996-12-01

    In the present study, a pump-and-probe imaging setup was arranged to image and analyze the evolution of pressure waves induced by ArF ablation of the cornea during their propagation into the eyeball. In vitro experiments simulating the effects of clinical PRK were performed using an artificial model of the human eyeball, composed of a cell filled with hyaluronic acid gel with a sample of freshly excised bovine cornea placed on the gel surface. Laser irradiation was provided at a fluence of 180 mJ/cm2. Irradiation spot diameters were varied in the range 2.0-5.0 mm. Images of the traveling acoustic waves evidenced diffraction effects related to the diameter of the laser spots on the corneal surface.

  3. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software packages to build the video review system.

  4. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.

    PubMed

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir

    2015-09-01

    With recent advancement in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in a sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed better performance than the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered the new standard for image reconstruction in cross-sectional imaging.
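A cost function of the kind described above (data fidelity plus an L1 sparsity term and a total variation term) can be minimized by plain gradient descent once the non-smooth terms are given smooth surrogates. The 1-D denoising toy below, with an identity forward model and made-up parameters, is a sketch of that idea rather than the authors' reconstruction code:

```python
import numpy as np

def tv_l1_denoise(b, lam_l1=0.01, lam_tv=0.5, iters=2000, step=0.05, eps=1e-2):
    """Gradient descent on 0.5*||x-b||^2 + lam_l1*||x||_1 + lam_tv*TV(x),
    using the smooth surrogate sqrt(t^2 + eps) for |t| throughout."""
    x = b.astype(float).copy()
    for _ in range(iters):
        g = x - b                                # data-fidelity gradient
        g += lam_l1 * x / np.sqrt(x**2 + eps)    # smoothed L1 gradient
        d = np.diff(x)
        sd = d / np.sqrt(d**2 + eps)             # smoothed sign of the jumps
        g[:-1] -= lam_tv * sd                    # TV gradient (discrete
        g[1:] += lam_tv * sd                     # divergence of the sign)
        x -= step * g
    return x

# Piecewise-constant truth plus noise: the setting where TV shines.
rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(40), np.full(40, 2.0)])
noisy = truth + 0.3 * rng.standard_normal(truth.size)
clean = tv_l1_denoise(noisy)
```

The TV term flattens the noise within each plateau while keeping the step edge sharp, which is the behaviour the abstract credits for sharper images and weaker streak artifacts.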

  5. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  6. Linked color imaging improves the visibility of colorectal polyps: a video study

    PubMed Central

    Yoshida, Naohisa; Naito, Yuji; Murakami, Takaaki; Hirose, Ryohei; Ogiso, Kiyoshi; Inada, Yutaka; Dohi, Osamu; Kamada, Kazuhiro; Uchiyama, Kazuhiko; Handa, Osamu; Konishi, Hideyuki; Siah, Kewin Tien Ho; Yagi, Nobuaki; Fujita, Yasuko; Kishimoto, Mitsuo; Yanagisawa, Akio; Itoh, Yoshito

    2017-01-01

    Background/study aim Linked color imaging (LCI) by a laser endoscope (Fujifilm Co, Tokyo, Japan) is a novel narrow-band light observation. In this study, we aimed to investigate whether LCI could improve the visibility of colorectal polyps using endoscopic videos. Patients and methods We prospectively recorded videos of consecutive polyps 2 - 20 mm in size diagnosed as neoplastic polyps. Three videos, white light (WL), blue laser imaging (BLI)-bright, and LCI, were recorded for each polyp by one expert. After excluding inappropriate videos, all videos were evaluated in random order by two experts and two non-experts according to a published polyp visibility score from four (excellent visibility) to one (poor visibility). Additionally, the relationship between polyp visibility scores in LCI and various clinical characteristics including location, size, histology, morphology, and preparation was analyzed compared to WL and BLI-bright. Results We analyzed 101 colorectal polyps (94 neoplastic) in 66 patients (303 videos). The mean polyp size was 9.0 ± 8.1 mm and 54 polyps were non-polypoid. The mean polyp visibility scores for LCI (2.86 ± 1.08) were significantly higher than for WL and BLI-bright (2.53 ± 1.15, P < 0.001; 2.73 ± 1.47, P < 0.041). The ratio of poor visibility (scores 1 and 2) was significantly lower in LCI for experts and non-experts (35.6 %, 33.6 %) compared with WL (49.6 %, P = 0.015; 50.5 %, P = 0.046). The polyp visibility scores for LCI were significantly higher than those for WL for all of the factors. In the comparison between BLI-bright and WL, the polyp visibility scores for BLI-bright were not higher than those for WL for right-sided location, < 10 mm size, sessile serrated adenoma/polyp (SSA/P) histology, and poor preparation. For those characteristics, LCI improved the lesions with right-sided location, SSA/P histology, and poor preparation significantly better than BLI

  7. Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography

    NASA Astrophysics Data System (ADS)

    Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori

    2014-02-01

    In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.

  8. Innovative Video Diagnostic Equipment for Material Science

    NASA Technical Reports Server (NTRS)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels at 60 fps or high-frame-rate video images up to about 1000 fps at 512 × 512 pixels.

  9. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras, and an electron-beam-tube-based video camera. Measurements were made of signal and noise, and consequently of signal-to-noise ratio per pixel, as a function of exposure. All systems have a linear response with respect to exposure, and with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio higher than that observed with either CCD camera or with the electron-beam-tube-based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2) respectively, while the electron-beam-tube-based video camera permits detection of only a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems in order to remove the direct hits: camera responses to scattered x-ray photons that interact directly with the CCD and generate salt-and-pepper noise, which interferes severely with attempts to obtain accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).
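Rank-order filtering of the kind mentioned above is typically a small-window median. A minimal sketch (the 3 x 3 window and the toy image are assumptions, not the authors' processing chain):

```python
import numpy as np

def median3x3(img):
    """3x3 rank-order (median) filter: suppresses isolated 'direct hit'
    outliers while preserving edges better than linear smoothing."""
    padded = np.pad(img, 1, mode="edge")          # replicate borders
    # Nine shifted views of the image, one per window position.
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0)

# Flat field corrupted by a few salt-and-pepper "direct hits".
img = np.full((16, 16), 100.0)
img[3, 4] = 255.0    # salt
img[10, 9] = 0.0     # pepper
filtered = median3x3(img)
```

Because the median of nine samples discards a single extreme value entirely, both hits are replaced by the background level, whereas a linear mean filter would only smear them.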

  10. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  11. The infrared video image pseudocolor processing system

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Zhang, JiangLing

    2003-11-01

    The infrared video image pseudo-color processing system, emphasizing the algorithm and its implementation for displaying a measured object's 2D temperature distribution using pseudo-color technology, is introduced in this paper. The data of a measured object's thermal image is an objective presentation of its surface temperature distribution, but color has a close relationship with people's subjective cognition. The so-called pseudo-color technology bridges the gap between subjectivity and objectivity, representing the measured object's temperature distribution intuitively and directly. The pseudo-color algorithm is based on distance in IHS space, and on this basis the definition of pseudo-color visual resolution is put forward. The software (which realizes the map from the sample data to the color space) and the hardware (which carries out the conversion from the color space to a palette, implemented in HDL) cooperate, so a two-level mapping, a logical map and a physical map respectively, is presented. The system has recently been used in China in failure diagnosis of electric power devices, fire protection and lifesaving, and even SARS detection.
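The abstract does not spell out its IHS-based mapping, but the core pseudo-color idea, a lookup from scalar temperature to a hue sweep, can be sketched in a few lines. The blue-to-red hue range, the use of HSV as a stand-in for IHS, and the sample temperatures are all assumptions:

```python
import colorsys
import numpy as np

def pseudocolor(temps, t_min, t_max):
    """Map scalar temperatures to RGB by sweeping hue from blue (cold)
    to red (hot): the classic pseudo-color lookup idea."""
    t = np.clip((np.asarray(temps, float) - t_min) / (t_max - t_min), 0, 1)
    # Hue runs from 240 deg (blue) down to 0 deg (red), full saturation/value.
    return np.array([colorsys.hsv_to_rgb((1.0 - v) * 240 / 360, 1.0, 1.0)
                     for v in t.ravel()]).reshape(t.shape + (3,))

# Three sample pixels: coldest, mid-range, hottest.
rgb = pseudocolor([20.0, 50.0, 80.0], t_min=20.0, t_max=80.0)
```

In a real system this lookup would be precomputed into a palette (the "physical map" the abstract assigns to the HDL hardware), so the per-pixel work reduces to a table index.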

  12. Aerodynamic and acoustic effects of ventricular gap.

    PubMed

    Alipour, Fariborz; Karnell, Michael

    2014-03-01

    Supraglottic compression is frequently observed in individuals with dysphonia. It is commonly interpreted as an indication of excessive circumlaryngeal muscular tension and ventricular medialization. The purpose of this study was to describe the aerodynamic and acoustic impact of varying ventricular medialization in a canine model. Subglottal air pressure, glottal airflow, electroglottograph, acoustic signals, and high-speed video images were recorded in seven excised canine larynges mounted in vitro for laryngeal vibratory experimentation. The degree of gap between the ventricular folds was adjusted and measured using sutures and weights. Data were recorded during phonation when the ventricular gap was narrow, neutral, and large. Glottal resistance was estimated from measures of subglottal pressure and glottal flow. Glottal resistance increased systematically as the ventricular gap became smaller. Wide ventricular gaps were associated with increases in fundamental frequency and decreases in glottal resistance. Sound pressure level did not appear to be impacted by the adjustments in ventricular gap used in this research. Increases in supraglottic compression and associated reduced ventricular width may be observed in a variety of disorders that affect voice quality. Ventricular compression may interact with true vocal fold posture and vibration, resulting in predictable changes in aerodynamic, physiological, acoustic, and perceptual measures of phonation. The data from this report support the theory that narrow ventricular gaps may be associated with disordered phonation. In vitro and in vivo human data are needed to further test this association. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  13. State of the "art": a taxonomy of artistic stylization techniques for images and video.

    PubMed

    Kyprianidis, Jan Eric; Collomosse, John; Wang, Tinghuai; Isenberg, Tobias

    2013-05-01

    This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.

  14. Video-rate or high-precision: a flexible range imaging camera

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and of video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
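Heterodyne range imagers of this type encode distance in the phase of a low-frequency beat signal at each pixel, which can be recovered from N uniform samples per beat cycle by a single-bin discrete Fourier transform. A minimal sketch of the phase-to-range step, with an assumed modulation frequency and eight samples per beat cycle (all values illustrative, not the authors'):

```python
import numpy as np

C = 3e8          # speed of light, m/s
F_MOD = 40e6     # assumed amplitude-modulation frequency, Hz

def range_from_beat_samples(samples):
    """Estimate range from N uniform samples spanning one beat cycle.

    The beat-signal phase equals the modulation phase shift due to
    round-trip time of flight, so range = c * phase / (4 * pi * f_mod).
    """
    n = len(samples)
    k = np.arange(n)
    # Single-bin DFT at the beat frequency (one cycle per record).
    bin1 = np.sum(samples * np.exp(-2j * np.pi * k / n))
    phase = np.angle(bin1) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

# Simulate one beat cycle for a target at 1.5 m and recover the range.
true_phase = 4 * np.pi * F_MOD * 1.5 / C
t = np.arange(8) / 8
samples = np.cos(2 * np.pi * t + true_phase)
print(round(range_from_beat_samples(samples), 3))  # -> 1.5
```

Using many samples per cycle, as the abstract notes, suppresses harmonic aliasing that degrades the linearity of four-sample homodyne quadrature detection.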

  15. Magneto-thermal-acoustic differential-frequency imaging of magnetic nanoparticle with magnetic spatial localization: a theoretical prediction

    NASA Astrophysics Data System (ADS)

    Piao, Daqing

    2017-02-01

    The magneto-thermo-acoustic effect that we predicted in 2013 refers to the generation of an acoustic-pressure wave from magnetic nanoparticles (MNPs) when thermally mediated under an alternating magnetic field (AMF) in pulsed or frequency-chirped application. Several independent experimental studies have since validated the magneto-thermo-acoustic effect, and a recent report discovered acoustic-wave generation from MNPs at the second-harmonic frequency of a continuously operating AMF. We propose that applying two AMFs with differing frequencies to MNPs will produce acoustic-pressure waves at the sum and difference of the two frequencies, in addition to the two second-harmonic frequencies. Analysis of the specific absorption dynamics of the MNPs when exposed to two AMFs of differing frequencies shows some interesting patterns of acoustic intensity at the multiple frequency components. The ratio of the acoustic intensity at the sum frequency to that at the difference frequency is determined by the frequency ratio of the two AMFs, but remains independent of the AMF strengths. The ratio of the acoustic intensity at the sum or difference frequency to that at each of the two second-harmonic frequencies is determined by both the frequency ratio and the field-strength ratio of the two AMFs. The results indicate a potential strategy for localizing the source of a continuous-wave magneto-thermo-acoustic signal by examining the frequency spectrum of full-field, non-differentiating acoustic detection, with the field-strength ratio changed continuously at a fixed frequency ratio. The practicalities and challenges of this magnetic spatial localization approach for magneto-thermo-acoustic imaging, using a simple envisioned set of two AMFs arranged parallel to each other, are discussed.
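The appearance of sum- and difference-frequency components follows from the quadratic dependence of heating on the total applied field. As a hedged sketch (the symbols below are assumptions for illustration, not the paper's notation), for two AMFs with total field $H(t) = H_1\cos(\omega_1 t) + H_2\cos(\omega_2 t)$, a quadratic absorption term expands as

```latex
H^2(t) = \tfrac{1}{2}\bigl(H_1^2 + H_2^2\bigr)
       + \tfrac{1}{2}H_1^2\cos(2\omega_1 t)
       + \tfrac{1}{2}H_2^2\cos(2\omega_2 t)
       + H_1 H_2\cos\bigl((\omega_1+\omega_2)t\bigr)
       + H_1 H_2\cos\bigl((\omega_1-\omega_2)t\bigr).
```

The second harmonics scale as $H_i^2$ while the sum- and difference-frequency terms share the common factor $H_1 H_2$, consistent with the abstract's statement that the intensity ratio between the sum and difference components is independent of the field strengths, whereas their ratio to the second harmonics depends on the field-strength ratio.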

  16. A synchronized particle image velocimetry and infrared thermography technique applied to an acoustic streaming flow

    PubMed Central

    Sou, In Mei; Layman, Christopher N.; Ray, Chittaranjan

    2013-01-01

    Subsurface coherent structures and surface temperatures are investigated using simultaneous measurements of particle image velocimetry (PIV) and infrared (IR) thermography. Results for coherent structures from acoustic streaming and associated heat transfer in a rectangular tank with an acoustic horn mounted horizontally at the sidewall are presented. An observed vortex pair develops and propagates in the direction along the centerline of the horn. From the PIV velocity field data, distinct kinematic regions are found with the Lagrangian coherent structure (LCS) method. The implications of this analysis with respect to heat transfer and related sonochemical applications are discussed. PMID:24347810

  17. Videos and images from 25 years of teaching compressible flow

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  18. Transmission mode acoustic time-reversal imaging for nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Lehman, Sean K.; Devaney, Anthony J.

    2002-11-01

    In previous ASA meetings and JASA papers, the extended and formalized theory of transmission-mode time reversal, in which the transceivers are noncoincident, was presented. When combined with the subspace concepts of a generalized MUltiple SIgnal Classification (MUSIC) algorithm, this theory is used to form super-resolution images of scatterers buried in a medium. These techniques are now applied to ultrasonic nondestructive evaluation (NDE) of parts and to shallow subsurface seismic imaging. Results are presented of NDE experiments on metal and epoxy blocks using data collected from an adaptive ultrasonic array, that is, a "time-reversal machine," at Lawrence Livermore National Laboratory. Also presented are the results of seismo-acoustic subsurface probing of buried hazardous waste pits at the Idaho National Engineering and Environmental Laboratory. [Work performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.] [Work supported in part by CenSSIS, the Center for Subsurface Sensing and Imaging Systems, under the Engineering Research Centers Program of the NSF (award number EEC-9986821), as well as by Air Force Contracts No. F41624-99-D6002 and No. F49620-99-C0013.]
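The subspace idea behind time-reversal MUSIC can be sketched in a toy numerical example: build the multistatic response matrix for a point scatterer, take its SVD, and evaluate a pseudospectrum that peaks where the steering vector lies in the signal subspace. Everything below (free-space Green's function, array positions, wavelength, scatterer location) is invented for illustration; the actual experiments used ultrasonic arrays on metal and epoxy blocks:

```python
import numpy as np

WAVELENGTH = 0.01                       # assumed 10 mm wavelength
k = 2 * np.pi / WAVELENGTH

# Noncoincident transmit and receive arrays (positions in meters).
tx = np.stack([np.linspace(-0.1, 0.1, 8), np.zeros(8)], axis=1)
rx = np.stack([np.linspace(-0.1, 0.1, 10), np.full(10, 0.02)], axis=1)
scatterer = np.array([0.03, 0.15])      # hypothetical buried point scatterer

def green(points, x):
    """Free-space Green's function from each array element to point x."""
    r = np.linalg.norm(points - x, axis=1)
    return np.exp(1j * k * r) / r

# Multistatic response matrix for one point scatterer: K = g_rx g_tx^T.
K = np.outer(green(rx, scatterer), green(tx, scatterer))

# One scatterer -> rank-1 signal subspace from the SVD of K.
U, s, Vh = np.linalg.svd(K)
Us = U[:, :1]

def pseudospectrum(x):
    """MUSIC pseudospectrum: large where the steering vector has no
    component in the noise subspace, i.e. at scatterer locations."""
    g = green(rx, x)
    g = g / np.linalg.norm(g)
    resid = g - Us @ (Us.conj().T @ g)  # noise-subspace component
    return 1.0 / np.linalg.norm(resid) ** 2

# Scan a line through the medium; the peak marks the scatterer.
grid = [np.array([x, 0.15]) for x in np.linspace(-0.05, 0.1, 31)]
best = grid[int(np.argmax([pseudospectrum(p) for p in grid]))]
print(best)  # peaks near the true location [0.03, 0.15]
```

The super-resolution property comes from the sharpness of the pseudospectrum peak, which is set by subspace orthogonality rather than by the diffraction-limited width of a backpropagated focus.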

  19. Analyzing crime scene videos

    NASA Astrophysics Data System (ADS)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  20. Contrast-enhanced optical coherence microangiography with acoustic-actuated microbubbles

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Hsuan; Zhang, Jia-Wei; Yeh, Chih-Kuang; Wei, Kuo-Chen; Liu, Hao-Li; Tsai, Meng-Tsan

    2017-04-01

    In this study, we propose using gas-filled microbubbles (MBs), simultaneously actuated by an acoustic wave, to enhance the imaging contrast of optical coherence tomography (OCT)-based angiography. In the phantom experiments, MBs produce stronger backscattered intensity, enhancing the contrast of the OCT intensity image. Moreover, simultaneous application of a low-intensity acoustic wave temporally induces local vibration of particles and MBs in the vessels, resulting in a time-variant OCT intensity that can be used to enhance the contrast of OCT intensity-based angiography. Additionally, different acoustic modes and different acoustic powers for actuating the MBs are applied and compared to investigate the feasibility of contrast enhancement. Finally, animal experiments are performed. The findings suggest that acoustically actuated MBs can effectively enhance the imaging contrast of OCT-based angiography, and that the imaging depth of OCT angiography is also extended.

  1. Passive 350 GHz Video Imaging Systems for Security Applications

    NASA Astrophysics Data System (ADS)

    Heinz, E.; May, T.; Born, D.; Zieger, G.; Anders, S.; Zakosarenko, V.; Meyer, H.-G.; Schäffel, C.

    2015-10-01

    Passive submillimeter-wave imaging is a concept that has been a focus of interest as a promising technology for personal security screening for a number of years. In contrast to established portal-based millimeter-wave scanning techniques, it allows for scanning people from a distance in real time, with high throughput and without a distinct inspection procedure. This opens up new possibilities for scanning, which directly address an urgent security need of modern societies: protecting crowds and critical infrastructure from the growing threat of individual terror attacks. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept and a continuous development since 2007. Arrays of superconducting transition-edge sensors (TES), operated at temperatures below 1 K, are used as radiation detectors. By this means, background-limited performance (BLIP) is achieved, providing the maximum possible signal-to-noise ratio. At video rates, this leads to a temperature resolution well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 5-25 m, a field of view up to 2 m in height and a diffraction-limited spatial resolution on the order of 1-2 cm are provided. Opto-mechanical scanning systems are part of the optical setup and are capable of frame rates of up to 25 frames per second.

  2. One-dimensional acoustic standing waves in rectangular channels for flow cytometry.

    PubMed

    Austin Suthanthiraraj, Pearlson P; Piyasena, Menake E; Woods, Travis A; Naivar, Mark A; Lόpez, Gabriel P; Graves, Steven W

    2012-07-01

    Flow cytometry has become a powerful analytical tool for applications ranging from blood diagnostics to high-throughput screening of molecular assemblies on microsphere arrays. However, instrument size, expense, throughput, and consumable use limit its use in resource-poor areas of the world, as a component in environmental monitoring, and for detection of very rare cell populations. For these reasons, new technologies to improve the size and cost-to-performance ratio of flow cytometry are required. One such technology is the use of acoustic standing waves that efficiently concentrate cells and particles to the center of flow channels for analysis. The simplest form of this method uses one-dimensional acoustic standing waves to focus particles in rectangular channels. We have developed one-dimensional acoustic focusing flow channels that can be fabricated in simple capillary devices or easily microfabricated using photolithography and deep reactive ion etching. Image and video analysis demonstrates that these channels precisely focus single flowing streams of particles and cells for traditional flow cytometry analysis. Additionally, use of standing waves with increasing harmonics and in parallel microfabricated channels is shown to effectively create many parallel focused streams. Furthermore, we present the fabrication of an inexpensive optical platform for flow cytometry in rectangular channels and use of the system to provide precise analysis. The simplicity and low cost of the acoustic focusing devices developed here promise to be effective for flow cytometers that have reduced size, cost, and consumable use. Finally, the straightforward path to parallel flow streams using one-dimensional multinode acoustic focusing indicates that simple acoustic focusing in rectangular channels may also have a prominent role in high-throughput flow cytometry. Copyright © 2012 Elsevier Inc. All rights reserved.
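For a one-dimensional standing wave across a rectangular channel, the drive frequency that places n pressure nodes across the channel width follows the standard half-wave resonance relation f_n = n·c/(2w). A quick back-of-the-envelope check, with a water-like sound speed and a channel width chosen purely for illustration (not values from the paper):

```python
def standing_wave_frequency(width_m, harmonic=1, sound_speed=1480.0):
    """Drive frequency (Hz) for an n-node acoustic standing wave
    across a rectangular channel of the given width.

    harmonic=1 gives the half-wave resonance with a single pressure
    node (hence a single focused particle stream) at the channel
    center; higher harmonics create multiple parallel focal streams.
    """
    return harmonic * sound_speed / (2.0 * width_m)

# A 375-micron water-filled channel resonates near 2 MHz.
print(standing_wave_frequency(375e-6) / 1e6)  # -> ~1.97 MHz
```

Doubling the harmonic doubles the frequency and the number of focused streams, which is the multinode parallelization path the abstract describes.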

  3. The future of acoustics distance education at Penn State

    NASA Astrophysics Data System (ADS)

    Brooks, Karen P.; Sparrow, Victor W.; Atchley, Anthony A.

    2005-04-01

    For nearly 20 years, Penn State's Graduate Program in Acoustics has offered a graduate distance education program, established in response to Department of Defense needs. Using satellite technology, courses provided synchronous classes incorporating one-way video and two-way audio. Advancements in technology allowed more sophisticated delivery systems to be considered and courses to be offered to employees of industry. Current technology utilizes real-time video streaming and archived lectures to enable individuals anywhere to access course materials. The evolution of technology, the expansion of the geographic market, and the changing needs of the student, among other issues, require a new paradigm. This paradigm must consider issues such as faculty acceptance and questions facing all institutions with regard to blurring the distinction between residence and distance education. Who will be the students? What will be the purpose of education? Will it be to provide professional and/or research degrees? How will the Acoustics Program ensure it remains attractive to all students while working within the boundaries and constraints of a major research university? This is a look at current practice and issues, with an emphasis on those relevant to constructing the Acoustics Program's distance education strategy for the future.

  4. Acoustic communication at the water's edge: evolutionary insights from a mudskipper.

    PubMed

    Polgar, Gianluca; Malavasi, Stefano; Cipolato, Giacomo; Georgalas, Vyron; Clack, Jennifer A; Torricelli, Patrizia

    2011-01-01

    Coupled behavioural observations and acoustic recordings of aggressive dyadic contests showed that the mudskipper Periophthalmodon septemradiatus communicates acoustically while out of water. An analysis of intraspecific variability showed that specific acoustic components may act as tags for individual recognition, further supporting the sounds' communicative value. A correlative analysis of acoustic properties and slow-motion video-acoustic recordings supported initial hypotheses on the emission mechanism. Acoustic transmission through the wet exposed substrate was also discussed. These observations were used to support an "exaptation hypothesis", i.e. the maintenance of key adaptations during the first stages of water-to-land vertebrate eco-evolutionary transitions (based on eco-evolutionary and palaeontological considerations), through a comparative bioacoustic analysis of aquatic and semiterrestrial gobiid taxa. In fact, a remarkable similarity was found between mudskipper vocalisations and those emitted by gobioids and other soniferous benthonic fishes.

  5. Design and Characterization of an Acoustically and Structurally Matched 3-D-Printed Model for Transcranial Ultrasound Imaging.

    PubMed

    Bai, Chen; Ji, Meiling; Bouakaz, Ayache; Zong, Yujin; Wan, Mingxi

    2018-05-01

    For investigating human transcranial ultrasound imaging (TUI) through the temporal bone, an intact human skull is needed. Since an intact skull is complex and expensive to obtain, experiments must be performed without excision or abrasion of the skull. Besides, to mimic blood circulation for the vessel target, cellulose tubes generally limit the vessel simulation to straight linear features. These issues, which limit experimental studies, can be overcome by designing a 3-D-printed skull model with acoustic and dimensional properties that match a real skull, together with a vessel model containing curves and a bifurcation. First, the optimal printing material, matching a real skull in terms of acoustic attenuation coefficient and sound propagation velocity, was identified at 2-MHz frequency: 7.06 dB/mm and 2168.71 m/s for the skull versus 6.98 dB/mm and 2114.72 m/s for the printed material, respectively. After modeling, the average thickness of the temporal bone in the printed skull was about 1.8 mm, while it was 1.7 mm in the real skull. Then, a vascular phantom was designed with 3-D-printed vessels of low acoustic attenuation (0.6 dB/mm). It was covered with porcine brain tissue contained within a transparent polyacrylamide gel. After characterizing the acoustic consistency, based on the designed skull model and vascular phantom, vessels with inner diameters of 1 and 0.7 mm were distinguished by resolution-enhanced imaging at low frequency. Measurement and imaging results proved that the model and phantom are authentic and viable alternatives, and they will be of interest for TUI, high-intensity focused ultrasound, and other therapy studies.

  6. Analysis of physiological responses associated with emotional changes induced by viewing video images of dental treatments.

    PubMed

    Sekiya, Taki; Miwa, Zenzo; Tsuchihashi, Natsumi; Uehara, Naoko; Sugimoto, Kumiko

    2015-03-30

    Since understanding the emotional changes induced by dental treatments is important for dentists to provide safe and comfortable dental treatment, we analyzed physiological responses during the viewing of video images of dental treatments to search for appropriate objective indices reflecting emotional changes. Fifteen healthy young adult subjects voluntarily participated in the present study. Electrocardiogram (ECG), electroencephalogram (EEG) and corrugator muscle electromyogram (EMG) recordings were made, and changes in them induced by viewing videos of dental treatments were analyzed. The subjective discomfort level was acquired by the Visual Analog Scale method. Analyses of autonomic nervous activities from the ECG and of four emotional factors (anger/stress, joy/satisfaction, sadness/depression and relaxation) from the EEG demonstrated that increases in sympathetic nervous activity, reflecting increased stress, and decreases in relaxation level were induced by the videos of infiltration anesthesia and cavity excavation, but not by that of intraoral examination. The corrugator muscle activity was increased by all three videos regardless of their content. The subjective discomfort while watching infiltration anesthesia and cavity excavation was higher than for intraoral examination, showing that sympathetic activities and the relaxation factor of emotion changed in a manner consistent with subjective emotional changes. These results suggest that measurement of autonomic nervous activities estimated from the ECG and of emotional factors analyzed from the EEG is useful for objective evaluation of subjective emotion.

  7. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy

    NASA Astrophysics Data System (ADS)

    Ford, Tim N.; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.

  8. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy.

    PubMed

    Ford, Tim N; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.

  9. Digital photography for the light microscope: results with a gated, video-rate CCD camera and NIH-image software.

    PubMed

    Shaw, S L; Salmon, E D; Quatrano, R S

    1995-12-01

    In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.

  10. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio

    2015-09-15

    Purpose: With recent advancement in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed better performance than conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifacts. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
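The cost function described can be written schematically as follows (the symbols are assumptions for illustration; the abstract does not give the exact notation):

```latex
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}
  \left\lVert \mathbf{A}\mathbf{x} - \mathbf{y} \right\rVert_2^2
  + \lambda_1 \left\lVert \mathbf{W}\mathbf{x} \right\rVert_1
  + \lambda_2\, \mathrm{TV}(\mathbf{x})
```

where $\mathbf{A}$ is the model-based acoustic forward operator mapping the optoacoustic image $\mathbf{x}$ to the measured projection data $\mathbf{y}$, $\mathbf{W}$ is a sparsifying transform, and the weights $\lambda_1, \lambda_2$ balance the L1 and TV regularization terms; per the abstract, the minimization is carried out by a nonlinear gradient descent algorithm.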

  11. Acoustic emission linear pulse holography

    DOEpatents

    Collins, H. Dale; Busse, Lawrence J.; Lemon, Douglas K.

    1985-01-01

    Defects in a structure are imaged as they propagate, using their emitted acoustic energy as a monitored source. Short bursts of acoustic energy propagate through the structure to a discrete-element receiver array. A reference timing transducer located between the array and the inspection zone initiates a series of time-of-flight measurements. The resulting series of time-of-flight measurements is then treated as aperture data and transferred to a computer for reconstruction of a synthetic linear holographic image. The images can be displayed and stored as a record of defect growth.

  12. A study of the method of the video image presentation for the manipulation of forceps.

    PubMed

    Kono, Soichi; Sekioka, Toshiharu; Matsunaga, Katsuya; Shidoji, Kazunori; Matsuki, Yuji

    2005-01-01

    Recently, surgical operations have sometimes been attempted under laparoscopic video images using teleoperation robots or forceps manipulators. In this paper, therefore, forceps manipulation efficiency was evaluated when the images used for manipulation had some transmission delay (Experiment 1), and when the convergence point of the stereoscopic video cameras was either fixed or variable (Experiment 2). The operators' tasks in these experiments were sewing tasks that simulated telesurgery under 3-dimensional viewing. As a result of Experiment 1, operation at a 200+/-100 ms delay maintained almost the same accuracy as operation without delay. As a result of Experiment 2, work accuracy was improved by using the zooming lens function; however, the working time became longer. These results suggest a trade-off between working time and working accuracy.

  13. Micromachined silicon acoustic delay line with 3D-printed micro linkers and tapered input for improved structural stability and acoustic directivity

    NASA Astrophysics Data System (ADS)

    Cho, Y.; Kumar, A.; Xu, S.; Zou, J.

    2016-10-01

    Recent studies have shown that micromachined silicon acoustic delay lines can provide a promising solution to achieve real-time photoacoustic tomography without the need for complex transducer arrays and data acquisition electronics. To achieve deeper imaging depth and wider field of view, a longer delay time and therefore delay length are required. However, as the length of the delay line increases, it becomes more vulnerable to structural instability due to reduced mechanical stiffness. In this paper, we report the design, fabrication, and testing of a new silicon acoustic delay line enhanced with 3D printed polymer micro linker structures. First, mechanical deformation of the silicon acoustic delay line (with and without linker structures) under gravity was simulated by using finite element method. Second, the acoustic crosstalk and acoustic attenuation caused by the polymer micro linker structures were evaluated with both numerical simulation and ultrasound transmission testing. The result shows that the use of the polymer micro linker structures significantly improves the structural stability of the silicon acoustic delay lines without creating additional acoustic attenuation and crosstalk. In addition, the improvement of the acoustic acceptance angle of the silicon acoustic delay lines was also investigated to better suppress the reception of unwanted ultrasound signals outside of the imaging plane. These two improvements are expected to provide an effective solution to eliminate current limitations on the achievable acoustic delay time and out-of-plane imaging resolution of micromachined silicon acoustic delay line arrays.

  14. High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells

    NASA Astrophysics Data System (ADS)

    Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey

    2018-05-01

    A video capillaroscopy system with a high image recording rate, able to resolve red blood cells moving with velocities of up to 5 mm/s within a capillary, is considered. The proposed procedures for processing the recorded video sequence allow the spatial capillary area, capillary diameter and central line to be evaluated with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift between neighboring images in the blood flow area containing moving red blood cells and to measure the blood flow velocity directly along the capillary central line. The developed method opens new opportunities for biomedical diagnostics, particularly through long-term continuous monitoring of red blood cell velocity within a capillary. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood flow velocity within a single capillary as well as a capillary network are presented and discussed.
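
    The inter-frame shift measurement described above can be sketched with a simple FFT cross-correlation between consecutive frames; multiplying the peak shift by the pixel pitch and frame rate gives a velocity estimate. The pixel pitch and frame rate below are illustrative assumptions, not the system's specifications.

    ```python
    import numpy as np

    def frame_shift(f1, f0):
        """Integer (dy, dx) shift of frame f1 relative to frame f0,
        located via the FFT cross-correlation peak."""
        corr = np.fft.ifft2(np.fft.fft2(f1) * np.conj(np.fft.fft2(f0))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrapped peak indices to signed shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    rng = np.random.default_rng(0)
    f0 = rng.random((64, 64))
    f1 = np.roll(f0, shift=3, axis=1)      # cell pattern moved 3 px along the flow
    dy, dx = frame_shift(f1, f0)

    pixel_um, fps = 1.0, 600.0             # assumed pixel pitch and frame rate
    velocity_mm_s = dx * pixel_um * 1e-3 * fps
    print(f"shift = ({dy}, {dx}) px, velocity = {velocity_mm_s:.1f} mm/s")
    ```

    A real system would refine the peak to subpixel precision, but the pipeline (correlate, locate peak, scale by pitch and rate) is the same.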

  15. Experimental and theoretical studies on the movements of two bubbles in an acoustic standing wave field.

    PubMed

    Jiao, Junjie; He, Yong; Leong, Thomas; Kentish, Sandra E; Ashokkumar, Muthupandian; Manasseh, Richard; Lee, Judy

    2013-10-17

    When subjected to an ultrasonic standing-wave field, cavitation bubbles smaller than the resonance size migrate to the pressure antinodes. As bubbles approach the antinode, they also move toward each other and either form a cluster or coalesce. In this study, the translational trajectory of two bubbles moving toward each other in an ultrasonic standing wave at 22.4 kHz was observed using an imaging system with a high-speed video camera. This allowed the speed of the approaching bubbles to be measured for much closer distances than those reported in the prior literature. The trajectory of two approaching bubbles was modeled using coupled equations of radial and translational motions, showing similar trends with the experimental results. We also indirectly measured the secondary Bjerknes force by monitoring the acceleration when bubbles are close to each other under different acoustic pressure amplitudes. Bubbles begin to accelerate toward each other as the distance between them gets shorter, and this acceleration increases with increasing acoustic pressure. The current study provides experimental data that validates the theory on the movement of bubbles and forces acting between them in an acoustic field that will be useful in understanding bubble coalescence in an acoustic field.
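
    The secondary Bjerknes force discussed above has a textbook linear-oscillation form, F = rho * <dV1/dt * dV2/dt> / (4 pi d^2), attractive when the bubbles pulsate in phase. The sketch below uses prescribed sinusoidal radius oscillations and one common sign convention; it is not the paper's coupled radial/translational model, and all parameter values are assumptions.

    ```python
    import numpy as np

    def secondary_bjerknes(R10, R20, eps1, eps2, phi, f, d,
                           rho=1000.0, n_periods=10):
        """Time-averaged secondary Bjerknes force (N) between two bubbles
        a distance d apart; positive => attraction (assumed convention)."""
        w = 2 * np.pi * f
        t = np.linspace(0, n_periods / f, 20000, endpoint=False)
        V1 = 4 / 3 * np.pi * (R10 * (1 + eps1 * np.sin(w * t))) ** 3
        V2 = 4 / 3 * np.pi * (R20 * (1 + eps2 * np.sin(w * t + phi))) ** 3
        dV1, dV2 = np.gradient(V1, t), np.gradient(V2, t)
        return rho * np.mean(dV1 * dV2) / (4 * np.pi * d ** 2)

    # In-phase pulsation -> attraction; antiphase -> repulsion
    F_in = secondary_bjerknes(100e-6, 80e-6, 0.05, 0.05, 0.0, 22.4e3, 1e-3)
    F_anti = secondary_bjerknes(100e-6, 80e-6, 0.05, 0.05, np.pi, 22.4e3, 1e-3)
    print(F_in > 0, F_anti < 0)
    ```

    The 1/d^2 dependence is what produces the observed acceleration as the bubbles approach each other.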

  16. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have improved considerably, most applications use them only to capture high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information, such as the temperature of combustion flames, thermal plasmas and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels, with 256-level (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was reduced to at most 0.2 pixels by this method.

  17. Ultrasonic superlensing jets and acoustic-fork sheets

    NASA Astrophysics Data System (ADS)

    Mitri, F. G.

    2017-05-01

    Focusing acoustical (and optical) beams beyond the diffraction limit has remained a major challenge in imaging instruments and systems, until recent advances in 'hyper' or 'super' lensing and higher-resolution imaging techniques have shown the counterintuitive violation of this rule under certain circumstances. Nonetheless, the proposed technologies of super-resolution acoustical focusing beyond the diffraction barrier require complex tools such as artificially engineered metamaterials, and other hardware equipment that may not be easily synthesized or manufactured. The present contribution therefore suggests a simple and reliable method of using a sound-penetrable circular cylinder lens illuminated by a nonparaxial Gaussian acoustical sheet (i.e. a finite beam in 2D) to produce non-evanescent ultrasonic superlensing jets (or bullets) and acoustical 'snail-fork' shaped wavefronts with limited diffraction. The generalized (near-field) scattering theory for acoustical sheets of arbitrary wavefronts and incidence is utilized to synthesize the incident beam based upon the angular spectrum decomposition method and the multipole expansion method in cylindrical wave functions to compute the scattered pressure around the cylinder with particular emphasis on its physical properties. The results show that depending on the beam and lens parameters, a tight focusing (with dimensions much smaller than the beam waist) can be achieved. Subwavelength resolution can also be achieved by selecting a lens material with a speed of sound exceeding that of the host fluid medium. The ultrasonic superlensing jets provide the impetus to develop improved subwavelength microscopy and acoustical image-slicing systems, cell lysis and surgery, and photoacoustic imaging to name a few examples. Moreover, acoustical fork-sheet generation may open innovative avenues in reconfigurable on-chip micro/nanoparticle tweezers and surface acoustic wave devices.

  18. Broadband acoustic focusing by Airy-like beams based on acoustic metasurfaces

    NASA Astrophysics Data System (ADS)

    Chen, Di-Chao; Zhu, Xing-Feng; Wei, Qi; Wu, Da-Jian; Liu, Xiao-Jun

    2018-01-01

    An acoustic metasurface (AM) composed of space-coiling subunits is proposed to generate acoustic Airy-like beams (ALBs) by manipulating the transmitted acoustic phase. The self-accelerating, self-healing, and non-diffracting features of ALBs are demonstrated using finite element simulations. We further employ two symmetrical AMs to realize two symmetrical ALBs, resulting in highly efficient acoustic focusing. At the working frequency, the focal intensity can reach roughly 20 times that of the incident wave. It is found that the highly efficient acoustic focusing can circumvent obstacles in the propagating path and can be maintained in a broad frequency bandwidth. In addition, simply changing the separation between the two AMs can modulate the focal length of the proposed AM lens. ALBs generated by AMs and the corresponding AM lens may benefit applications in medical ultrasound imaging, biomedical therapy, and particle trapping and manipulation.
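
    One simple way to picture the metasurface's job is the binary phase profile of a target Airy field: the transmitted phase is 0 where Ai(x/x0) is positive and pi where it is negative. The transverse scale x0 and aperture below are arbitrary assumptions, and this sketch ignores the amplitude envelope that the paper's space-coiling subunits would also have to approximate.

    ```python
    import numpy as np
    from scipy.special import airy

    x0 = 1e-3                              # transverse scale (m), assumed
    x = np.linspace(-20e-3, 2e-3, 512)     # aperture coordinate (m), assumed
    Ai = airy(x / x0)[0]                   # airy() returns (Ai, Ai', Bi, Bi')
    phase = np.where(Ai >= 0, 0.0, np.pi)  # binary 0/pi phase profile

    # Each subunit would be chosen to impart the local phase value;
    # here we just count the 0 <-> pi reversals across the aperture.
    n_flips = int(np.count_nonzero(np.diff(phase)))
    print(f"{n_flips} phase reversals across the aperture")
    ```

    The oscillatory tail of Ai on the negative side is what produces the self-accelerating lobes; truncating the aperture is what makes the beam "Airy-like" rather than ideal.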

  19. Ultrasound-modulated optical tomography with intense acoustic bursts.

    PubMed

    Zemp, Roger J; Kim, Chulhong; Wang, Lihong V

    2007-04-01

    Ultrasound-modulated optical tomography (UOT) detects ultrasonically modulated light to spatially localize multiply scattered photons in turbid media with the ultimate goal of imaging the optical properties in living subjects. A principal challenge of the technique is weak modulated signal strength. We discuss ways to push the limits of signal enhancement with intense acoustic bursts while conforming to optical and ultrasonic safety standards. A CCD-based speckle-contrast detection scheme is used to detect acoustically modulated light by measuring changes in speckle statistics between ultrasound-on and ultrasound-off states. The CCD image capture is synchronized with the ultrasound burst pulse sequence. Transient acoustic radiation force, a consequence of bursts, is seen to produce slight signal enhancement over pure ultrasonic-modulation mechanisms for bursts and CCD exposure times of the order of milliseconds. However, acoustic radiation-force-induced shear waves are launched away from the acoustic sample volume, which degrade UOT spatial resolution. By time gating the CCD camera to capture modulated light before radiation force has an opportunity to accumulate significant tissue displacement, we reduce the effects of shear-wave image degradation, while enabling very high signal-to-noise ratios. Additionally, we maintain high-resolution images representative of optical and not mechanical contrast. Signal-to-noise levels are sufficiently high so as to enable acquisition of 2D images of phantoms with one acoustic burst per pixel.
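
    The speckle-contrast detection scheme compares the contrast C = sigma/mu between ultrasound-off and ultrasound-on CCD frames; the drop in contrast is the modulated-light signal. The frames below are synthetic stand-ins for real CCD data, using a crude decorrelation model, so only the qualitative behaviour carries over.

    ```python
    import numpy as np

    def speckle_contrast(img):
        """Speckle contrast C = std/mean of the intensity image."""
        return img.std() / img.mean()

    rng = np.random.default_rng(1)
    # Fully developed speckle intensity is exponentially distributed (C ~ 1);
    # ultrasound modulation blurs the pattern within the exposure, lowering C.
    off = rng.exponential(1.0, (256, 256))
    on = off + 0.5 * rng.exponential(1.0, (256, 256))  # assumed blur model
    signal = speckle_contrast(off) - speckle_contrast(on)
    print(f"C_off={speckle_contrast(off):.3f}, C_on={speckle_contrast(on):.3f}, "
          f"signal={signal:.3f}")
    ```

    Time-gating the exposure, as the paper describes, keeps this contrast difference dominated by ultrasonic modulation rather than radiation-force-induced shear motion.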

  20. Evaluation of privacy in high dynamic range video sequences

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Yuan, Lin; Krasula, Lukáš; Korshunov, Pavel; Fliegel, Karel; Ebrahimi, Touradj

    2014-09-01

    The ability of high dynamic range (HDR) imaging to capture details in environments with high contrast has a significant impact on privacy in video surveillance. However, the extent to which HDR imaging affects privacy, compared to typical low dynamic range (LDR) imaging, is neither well studied nor well understood. To achieve such an objective, a suitable dataset of images and video sequences is needed. Therefore, we have created a publicly available dataset of HDR video for privacy evaluation, PEViD-HDR, which is an HDR extension of the existing Privacy Evaluation Video Dataset (PEViD). The PEViD-HDR video dataset can help in the evaluation of privacy protection tools, as well as in showing the importance of HDR imaging in video surveillance applications and its influence on the privacy-intelligibility trade-off. We conducted a preliminary subjective experiment demonstrating the usability of the created dataset for evaluation of privacy issues in video. The results confirm that a tone-mapped HDR video contains more privacy-sensitive information and details compared to a typical LDR video.

  1. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol.

    PubMed

    Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter

    2017-10-25

    For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.

  2. A micro-Doppler sonar for acoustic surveillance in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaonian

    Wireless sensor networks have been employed in a wide variety of applications, despite the limited energy and communication resources at each sensor node. Low power custom VLSI chips implementing passive acoustic sensing algorithms have been successfully integrated into an acoustic surveillance unit and demonstrated for detection and location of sound sources. In this dissertation, I explore active and passive acoustic sensing techniques, signal processing and classification algorithms for detection and classification in a multinodal sensor network environment. I will present the design and characterization of a continuous-wave micro-Doppler sonar to image objects with articulated moving components. As an example application for this system, we use it to image gaits of humans and four-legged animals. I will present the micro-Doppler gait signatures of a walking person, a dog and a horse. I will discuss the resolution and range of this micro-Doppler sonar and use experimental results to support the theoretical analyses. In order to reduce the data rate and make the system amenable to wireless sensor networks, I will present a second micro-Doppler sonar that uses bandpass sampling for data acquisition. Speech recognition algorithms are explored for biometric identifications from one's gait, and I will present and compare the classification performance of the two systems. The acoustic micro-Doppler sonar design and biometric identification results are the first in the field as the previous work used either video camera or microwave technology. I will also review bearing estimation algorithms and present results of applying these algorithms for bearing estimation and tracking of moving vehicles. Another major source of the power consumption at each sensor node is the wireless interface. To address the need of low power communications in a wireless sensor network, I will also discuss the design and implementation of ultra wideband transmitters in a three dimensional
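
    The gait signatures described above rest on the continuous-wave sonar Doppler relation f_d = 2*v*f0/c: each moving body part maps its radial velocity to a distinct sideband around the carrier. The 40 kHz carrier and the velocities below are assumed illustrative values, not the dissertation's system parameters.

    ```python
    # Sketch of the micro-Doppler relation behind the gait signatures.
    C_AIR = 343.0  # speed of sound in air (m/s)

    def doppler_shift(v_m_s, f0_hz, c=C_AIR):
        """Two-way Doppler shift (Hz) of a reflector at radial velocity v."""
        return 2.0 * v_m_s * f0_hz / c

    # A torso at ~1 m/s and a swinging limb at ~3 m/s, 40 kHz carrier assumed:
    for v in (1.0, 3.0):
        print(f"v={v} m/s -> {doppler_shift(v, 40e3):.0f} Hz shift")
    ```

    A spectrogram of the returned signal then spreads these velocity components over time, which is exactly the micro-Doppler gait signature.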

  3. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and figure-ground separation, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  4. Non-invasive vascular radial/circumferential strain imaging and wall shear rate estimation using video images of diagnostic ultrasound.

    PubMed

    Wan, Jinjin; He, Fangli; Zhao, Yongfeng; Zhang, Hongmei; Zhou, Xiaodong; Wan, Mingxi

    2014-03-01

    The aim of this work was to develop a convenient method for radial/circumferential strain imaging and shear rate estimation that could be used as a supplement to the current routine screening for carotid atherosclerosis using video images of diagnostic ultrasound. A reflection model-based correction for gray-scale non-uniform distribution was applied to B-mode video images before strain estimation to improve the accuracy of radial/circumferential strain imaging when applied to vessel transverse cross sections. The incremental and cumulative radial/circumferential strain images can then be calculated based on the displacement field between consecutive B-mode images. Finally, the transverse Doppler spectra acquired at different depths along the vessel diameter were used to construct the spatially matched instantaneous wall shear values in a cardiac cycle. Vessel phantom simulation results revealed that the signal-to-noise ratio and contrast-to-noise ratio of the radial and circumferential strain images were increased by 2.8 and 5.9 dB and by 2.3 and 4.4 dB, respectively, after non-uniform correction. Preliminary results for 17 patients indicated that the accuracy of radial/circumferential strain images was improved in the lateral direction after non-uniform correction. The peak-to-peak value of incremental strain and the maximum cumulative strain for calcified plaques are evidently lower than those for other plaque types, and the echolucent plaques had higher values, on average, than the mixed plaques. Moreover, low oscillating wall shear rate values, found near the plaque and stenosis regions, are closely related to plaque formation. In conclusion, the method described can provide additional valuable results as a supplement to the current routine ultrasound examination for carotid atherosclerosis and, therefore, has significant potential as a feasible screening method for atherosclerosis diagnosis in the future. Copyright © 2014 World Federation for Ultrasound in
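
    The wall-shear-rate construction described above can be sketched as a velocity profile sampled at several depths across the vessel, with the shear rate taken as the radial gradient of that profile evaluated at the wall. The parabolic (Poiseuille) profile, peak velocity and vessel radius below are illustrative assumptions, not patient data.

    ```python
    import numpy as np

    R = 3e-3                               # vessel radius (m), assumed
    r = np.linspace(0.0, R, 16)            # Doppler sample depths from centerline
    v = 0.3 * (1.0 - (r / R) ** 2)         # assumed parabolic profile (m/s)
    shear = np.gradient(v, r)              # dv/dr at each depth (1/s)
    print(f"wall shear rate ~ {abs(shear[-1]):.0f} 1/s")
    ```

    For a parabolic profile the analytic wall shear rate is 2*v_peak/R (200 1/s here); the finite-difference estimate lands close to it, and the same gradient computed from real transverse Doppler spectra gives the instantaneous wall shear values the paper constructs.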

  5. Overview of image processing tools to extract physical information from JET videos

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
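
    The optical-flow methodology mentioned above builds on the brightness-constancy constraint Ix*u + Iy*v + It = 0; a least-squares fit over a region yields its motion (u, v). The single-window, global-translation estimate below is a deliberately minimal sketch of that idea, not the JET toolchain.

    ```python
    import numpy as np

    def global_flow(f0, f1):
        """Least-squares (u, v) translation (px/frame) between two frames,
        from the brightness-constancy constraint."""
        Ix = (np.roll(f0, -1, 1) - np.roll(f0, 1, 1)) / 2.0   # d/dx
        Iy = (np.roll(f0, -1, 0) - np.roll(f0, 1, 0)) / 2.0   # d/dy
        It = f1 - f0
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
        return uv

    # Smooth periodic test frame translated by 0.5 px along x
    x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    f0 = np.sin(x)[None, :] * np.cos(x)[:, None]
    f1 = np.sin(x - 0.5 * (x[1] - x[0]))[None, :] * np.cos(x)[:, None]
    u, v = global_flow(f0, f1)
    print(f"estimated flow: u={u:.3f}, v={v:.3f} px/frame")
    ```

    Dense per-pixel variants of this fit, computed in windows (or approximated from MPEG motion vectors, as the paper notes), give the object dynamics used for instability, pellet and filament studies.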

  6. Synchronized imaging and acoustic analysis of the upper airway in patients with sleep-disordered breathing.

    PubMed

    Chang, Yi-Chung; Huon, Leh-Kiong; Pham, Van-Truong; Chen, Yunn-Jy; Jiang, Sun-Fen; Shih, Tiffany Ting-Fang; Tran, Thi-Thao; Wang, Yung-Hung; Lin, Chen; Tsao, Jenho; Lo, Men-Tzung; Wang, Pa-Chun

    2014-12-01

    Progressive narrowing of the upper airway increases airflow resistance and can produce snoring sounds and apnea/hypopnea events associated with sleep-disordered breathing due to airway collapse. Recent studies have shown that acoustic properties during snoring can be altered with anatomic changes at the site of obstruction. To evaluate the instantaneous association between acoustic features of snoring and the anatomic sites of obstruction, a novel method was developed and applied in nine patients to extract the snoring sounds during sleep while performing dynamic magnetic resonance imaging (MRI). The degree of airway narrowing during the snoring events was then quantified by the collapse index (ratio of airway diameter preceding and during the events) and correlated with the synchronized acoustic features. A total of 201 snoring events (102 pure retropalatal and 99 combined retropalatal and retroglossal events) were recorded, and the collapse index as well as the soft tissue vibration time were significantly different between pure retropalatal (collapse index, 2 ± 11%; vibration time, 0.2 ± 0.3 s) and combined (retropalatal and retroglossal) snores (collapse index, 13 ± 7% [P ≤ 0.0001]; vibration time, 1.2 ± 0.7 s [P ≤ 0.0001]). The synchronized dynamic MRI and acoustic recordings successfully characterized the sites of obstruction and established the dynamic relationship between the anatomic site of obstruction and snoring acoustics.

  7. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
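
    Among the listed DSP stages, white balance has a classic low-complexity form, the gray-world algorithm, which scales each colour channel so that the channel means agree. The abstract does not specify the paper's exact algorithm; this is a generic sketch with made-up image data.

    ```python
    import numpy as np

    def gray_world(rgb):
        """Gray-world white balance for an HxWx3 float image in [0, 1]."""
        means = rgb.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / means        # per-channel gains
        return np.clip(rgb * gains, 0.0, 1.0)

    rng = np.random.default_rng(3)
    # Synthetic image with a colour cast (green and blue attenuated)
    img = rng.random((8, 8, 3)) * np.array([1.0, 0.8, 0.6])
    balanced = gray_world(img)
    print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now ~equal
    ```

    Gray-world needs only per-channel accumulators and three multipliers per pixel, which is why it suits the low-cost hardware core described.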

  8. High Resolution Ultrasound Superharmonic Perfusion Imaging: In Vivo Feasibility and Quantification of Dynamic Contrast-Enhanced Acoustic Angiography.

    PubMed

    Lindsey, Brooks D; Shelton, Sarah E; Martin, K Heath; Ozgun, Kathryn A; Rojas, Juan D; Foster, F Stuart; Dayton, Paul A

    2017-04-01

    Mapping blood perfusion quantitatively allows localization of abnormal physiology and can improve understanding of disease progression. Dynamic contrast-enhanced ultrasound is a low-cost, real-time technique for imaging perfusion dynamics with microbubble contrast agents. Previously, we have demonstrated another contrast agent-specific ultrasound imaging technique, acoustic angiography, which forms static anatomical images of the superharmonic signal produced by microbubbles. In this work, we seek to determine whether acoustic angiography can be utilized for high resolution perfusion imaging in vivo by examining the effect of acquisition rate on superharmonic imaging at low flow rates and demonstrating the feasibility of dynamic contrast-enhanced superharmonic perfusion imaging for the first time. Results in the chorioallantoic membrane model indicate that frame rate and frame averaging do not affect the measured diameter of individual vessels observed, but that frame rate does influence the detection of vessels near and below the resolution limit. The highest number of resolvable vessels was observed at an intermediate frame rate of 3 Hz using a mechanically-steered prototype transducer. We also demonstrate the feasibility of quantitatively mapping perfusion rate in 2D in a mouse model with spatial resolution of ~100 μm. This type of imaging could provide non-invasive, high resolution quantification of microvascular function at penetration depths of several centimeters.

  9. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
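
    The reported comparison ultimately reduces to correlating objective metric scores with the subjective DMOS values. The score arrays below are made-up illustrations of that computation, not the study's data.

    ```python
    import numpy as np

    def pearson(objective, dmos):
        """Pearson linear correlation between metric scores and DMOS."""
        return float(np.corrcoef(objective, dmos)[0, 1])

    dmos = np.array([10.0, 25.0, 40.0, 55.0, 70.0])   # subjective impairment
    metric = np.array([0.9, 0.75, 0.55, 0.4, 0.2])    # objective quality index
    print(f"Pearson correlation: {pearson(metric, dmos):.3f}")
    ```

    A strongly negative correlation here simply reflects that higher objective quality pairs with lower impairment scores; in practice a nonlinear regression step is often applied before reporting the correlation.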

  10. Acoustic Communication at the Water's Edge: Evolutionary Insights from a Mudskipper

    PubMed Central

    Polgar, Gianluca; Malavasi, Stefano; Cipolato, Giacomo; Georgalas, Vyron; Clack, Jennifer A.; Torricelli, Patrizia

    2011-01-01

    Coupled behavioural observations and acoustic recordings of aggressive dyadic contests showed that the mudskipper Periophthalmodon septemradiatus communicates acoustically while out of water. An analysis of intraspecific variability showed that specific acoustic components may act as tags for individual recognition, further supporting the sounds' communicative value. A correlative analysis of acoustic properties together with slow-motion video-acoustic recordings supported initial hypotheses on the emission mechanism. Acoustic transmission through the wet exposed substrate was also discussed. These observations were used to support an “exaptation hypothesis”, i.e. the maintenance of key adaptations during the first stages of water-to-land vertebrate eco-evolutionary transitions (based on eco-evolutionary and palaeontological considerations), through a comparative bioacoustic analysis of aquatic and semiterrestrial gobiid taxa. In fact, a remarkable similarity was found between mudskipper vocalisations and those emitted by gobioids and other soniferous benthonic fishes. PMID:21738663

  11. Frame Rate Considerations for Real-Time Abdominal Acoustic Radiation Force Impulse Imaging

    PubMed Central

    Fahey, Brian J.; Palmeri, Mark L.; Trahey, Gregg E.

    2008-01-01

    With the advent of real-time Acoustic Radiation Force Impulse (ARFI) imaging, elevated frame rates are both desirable and relevant from a clinical perspective. However, fundamental limitations on frame rates are imposed by thermal safety concerns related to incident radiation force pulses. Abdominal ARFI imaging utilizes a curvilinear scanning geometry that results in markedly different tissue heating patterns than those previously studied for linear arrays or mechanically-translated concave transducers. Finite Element Method (FEM) models were used to simulate these tissue heating patterns and to analyze the impact of tissue heating on frame rates available for abdominal ARFI imaging. A perfusion model was implemented to account for cooling effects due to blood flow and frame rate limitations were evaluated in the presence of normal, reduced and negligible tissue perfusions. Conventional ARFI acquisition techniques were also compared to ARFI imaging with parallel receive tracking in terms of thermal efficiency. Additionally, thermocouple measurements of transducer face temperature increases were acquired to assess the frame rate limitations imposed by cumulative heating of the imaging array. Frame rates sufficient for many abdominal imaging applications were found to be safely achievable utilizing available ARFI imaging techniques. PMID:17521042
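
    The perfusion-dependent frame-rate limits follow from the cooling term of the Pennes bioheat equation, under which a tissue temperature rise decays with time constant tau = rho*c / (w_b*c_b). The tissue constants and perfusion values below are generic soft-tissue assumptions, not the paper's FEM model parameters.

    ```python
    def cooling_time_constant(w_b, c_b=3600.0, rho=1000.0, c=3600.0):
        """Exponential cooling time constant (s) for blood perfusion rate
        w_b (kg/m^3/s), blood specific heat c_b, tissue density rho and
        tissue specific heat c (all assumed values)."""
        return rho * c / (w_b * c_b)

    for w_b, label in ((0.5, "normal"), (0.1, "reduced")):
        print(f"{label} perfusion: tau = {cooling_time_constant(w_b):.0f} s")
    ```

    Lower perfusion lengthens the cooling time constant, so heat from successive ARFI push pulses accumulates for longer and the thermally safe frame rate drops, which is the trend the FEM study quantifies.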

  12. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    PubMed

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  13. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter requires thorough planning and proficiency. However, three main problems arise when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient for decreasing the processing time and creating a reliable textured 3D model, compared with models produced by still imaging. Two experiments, modelling a building and a monument, are conducted using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to determine the final predicted accuracy and the model's level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy of 1-5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
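
    A common way to implement the blur screening described above is a variance-of-Laplacian sharpness score: shaken (blurred) frames respond weakly to a Laplacian high-pass filter and score low. The paper's actual selection criterion may differ; this is a generic sketch on synthetic frames.

    ```python
    import numpy as np

    def sharpness(frame):
        """Variance of the 5-point Laplacian response (higher = sharper)."""
        f = frame.astype(float)
        resp = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
        return float(resp.var())

    rng = np.random.default_rng(2)
    sharp = rng.random((128, 128))
    # Crude camera-shake blur: average each pixel with its 4 neighbours
    blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
               + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0
    print(sharpness(sharp) > sharpness(blurred))  # blurred frame scores lower
    ```

    Ranking frames by such a score, then keeping the sharpest frame per baseline interval, is one way to realize the "minimal significant number of video images" selection in terms of both coverage and blur.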

  14. Methods And Systems For Using Reference Images In Acoustic Image Processing

    DOEpatents

    Moore, Thomas L.; Barter, Robert Henry

    2005-01-04

    A method and system of examining tissue are provided in which a field, including at least a portion of the tissue and one or more registration fiducials, is insonified. Scattered acoustic information, including both transmitted and reflected waves, is received from the field. A representation of the field, including both the tissue and the registration fiducials, is then derived from the received acoustic radiation.

  15. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, Kevin F.

    1994-01-01

    The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels per frame. This imager offers an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their integration. The imager chip can be divided into three distinct components: (1) image capture via an array of avalanche photodiodes (APD's), (2) charge collection, storage and overflow control via a charge transfer transistor device (CTD), and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front end gain at low noise and low operating voltages while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently work is progressing towards the development of manufacturable designs for each of these component devices. In addition to the development of each of the three distinct components, work towards their integration is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrains the design as well. The progress made during this period is described in detail in Sections 2-4.

  16. The Use of Acoustic Radiation Force Decorrelation-Weighted Pulse Inversion for Enhanced Ultrasound Contrast Imaging.

    PubMed

    Herbst, Elizabeth B; Unnikrishnan, Sunil; Wang, Shiying; Klibanov, Alexander L; Hossack, John A; Mauldin, Frank William

    2017-02-01

    The use of ultrasound imaging for cancer diagnosis and screening can be enhanced with the use of molecularly targeted microbubbles. Nonlinear imaging strategies such as pulse inversion (PI) and "contrast pulse sequences" (CPS) can be used to differentiate microbubble signal, but often fail to suppress highly echogenic tissue interfaces. This failure results in false-positive detection and potential misdiagnosis. In this study, a novel acoustic radiation force (ARF)-based approach was developed for superior microbubble signal detection. The feasibility of this technique, termed ARF decorrelation-weighted PI (ADW-PI), was demonstrated in vivo using a subcutaneous mouse tumor model. Tumors were implanted in the hindlimb of C57BL/6 mice by subcutaneous injection of MC38 cells. Lipid-shelled microbubbles were conjugated to anti-VEGFR2 antibody and administered via bolus injection. An image sequence using ARF pulses to generate microbubble motion was combined with PI imaging on a Verasonics Vantage programmable scanner. ADW-PI images were generated by combining PI images with interframe signal decorrelation data. For comparison, CPS images of the same mouse tumor were acquired using a Siemens Sequoia clinical scanner. Microbubble-bound regions in the tumor interior exhibited significantly higher signal decorrelation than static tissue (n = 9, P < 0.001). The application of ARF significantly increased microbubble signal decorrelation (n = 9, P < 0.01). Using these decorrelation measurements, ADW-PI imaging demonstrated significantly improved microbubble contrast-to-tissue ratio when compared with corresponding CPS or PI images (n = 9, P < 0.001). Contrast-to-tissue ratio improved with ADW-PI by approximately 3 dB compared with PI images and 2 dB compared with CPS images. Acoustic radiation force can be used to generate adherent microbubble signal decorrelation without microbubble bursting. When combined with PI, measurements of the resulting microbubble signal

  17. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. The proposed COMPETE method demonstrated lower time complexity while yielding better reconstructed video quality. Simulations show that COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while improving the peak signal-to-noise ratio (PSNR) of decoded video by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.

  18. Video-rate terahertz electric-field vector imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takai, Mayuko; Takeda, Masatoshi; Sasaki, Manabu

    We present an experimental setup that dramatically reduces the measurement time for obtaining spatial distributions of terahertz electric-field (E-field) vectors. The method utilizes electro-optic sampling: a charge-coupled device detects the spatial distribution of the probe beam polarization rotation caused by the E-field-induced Pockels effect in a 〈110〉-oriented ZnTe crystal. A quick rotation of the ZnTe crystal allows the terahertz E-field direction to be analyzed at each image position, and terahertz E-field vector mapping at a fixed position of an optical delay line is achieved within 21 ms. Video-rate mapping of terahertz E-field vectors is likely to be useful for real-time sensing of terahertz vector beams, vector vortices, and surface topography. The method is also useful for fast polarization analysis of terahertz beams.

  19. MO-A-BRD-06: In Vivo Cherenkov Video Imaging to Verify Whole Breast Irradiation Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R; Glaser, A; Jarvis, L

    Purpose: To show that in vivo video imaging of Cherenkov emission (Cherenkoscopy) can be acquired in the clinical treatment room without affecting the normal process of external beam radiation therapy (EBRT). Applications of Cherenkoscopy, such as patient positioning, movement tracking, treatment monitoring and superficial dose estimation, were examined. Methods: In a phase 1 clinical trial including 12 patients undergoing post-lumpectomy whole breast irradiation, Cherenkov emission was imaged with a time-gated ICCD camera synchronized to the radiation pulses during 10 fractions of the treatment. Images from different treatment days were compared by calculating their 2-D correlations with the averaged image. An edge detection algorithm was utilized to highlight biological features, such as blood vessels. Superficial doses deposited at the sampling depth were derived from the Eclipse treatment planning system (TPS) and compared with the Cherenkov images. Skin reactions were graded weekly according to the Common Toxicity Criteria, and digital photographs were obtained for comparison. Results: Real-time (fps = 4.8) imaging of Cherenkov emission was feasible, and tests indicated that it could be improved to video rate (fps = 30) with system improvements. Dynamic field changes due to fast MLC motion were imaged in real time. The average 2-D correlation was about 0.99, indicating that this imaging technique is stable and that the repeatability of patient positioning was outstanding. Edge-enhanced images of blood vessels were observed and could serve as unique biological markers for patient positioning and movement tracking (breathing). Small discrepancies exist between the Cherenkov images and the superficial dose predicted by the TPS, but the former agreed better with actual skin reactions than did the latter. Conclusion: Real time Cherenkoscopy imaging during EBRT is a novel imaging tool that could be utilized for patient positioning, movement
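
    The day-to-day repeatability metric above (a 2-D correlation between each day's image and the averaged image) can be computed as a Pearson coefficient over pixels. A minimal sketch, assuming the trial's exact processing chain beyond the correlation itself; the helper name `image_correlation` is hypothetical.

```python
import numpy as np

def image_correlation(img, ref):
    """Pearson correlation between two equally sized 2-D images.

    Values near 1.0 indicate highly repeatable positioning/field shape."""
    a = img.ravel().astype(float)
    b = ref.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```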

  20. Hologlyphics: volumetric image synthesis performance system

    NASA Astrophysics Data System (ADS)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images, for example voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and various content has been developed and shown to live audiences by a live performer. Real world applications will be explored, with feedback on the human factors.

  1. Computer-Aided Evaluation of Blood Vessel Geometry From Acoustic Images.

    PubMed

    Lindström, Stefan B; Uhlin, Fredrik; Bjarnegård, Niclas; Gylling, Micael; Nilsson, Kamilla; Svensson, Christina; Yngman-Uhlin, Pia; Länne, Toste

    2018-04-01

    A method for computer-aided assessment of blood vessel geometries based on shape-fitting algorithms from metric vision was evaluated. Acoustic images of cross sections of the radial artery and cephalic vein were acquired, and medical practitioners used a computer application to measure the wall thickness and nominal diameter of these blood vessels with a caliper method and the shape-fitting method. The methods performed equally well for wall thickness measurements. The shape-fitting method was preferable for measuring the diameter, since it reduced systematic errors by up to 63% in the case of the cephalic vein because of its eccentricity. © 2017 by the American Institute of Ultrasound in Medicine.
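
    The shape-fitting idea can be illustrated with an algebraic least-squares circle fit (the Kåsa method); this is an assumption for illustration, since the paper's metric-vision algorithms (which must also handle eccentric, ellipse-like vein cross sections) are not detailed here. The helper name `fit_circle` is hypothetical.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to edge points (x, y).

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c, giving centre (a, b) and
    radius sqrt(c + a^2 + b^2); robust to partial arcs, unlike a caliper.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, float(np.sqrt(c + a**2 + b**2))
```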

  2. Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina

    2011-10-01

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.

  3. Acoustic Remote Sensing

    NASA Astrophysics Data System (ADS)

    Dowling, David R.; Sabra, Karim G.

    2015-01-01

    Acoustic waves carry information about their source and collect information about their environment as they propagate. This article reviews how these information-carrying and -collecting features of acoustic waves that travel through fluids can be exploited for remote sensing. In nearly all cases, modern acoustic remote sensing involves array-recorded sounds and array signal processing to recover multidimensional results. The application realm for acoustic remote sensing spans an impressive range of signal frequencies (10^-2 to 10^7 Hz) and distances (10^-2 to 10^7 m) and involves biomedical ultrasound imaging, nondestructive evaluation, oil and gas exploration, military systems, and Nuclear Test Ban Treaty monitoring. In the past two decades, approaches have been developed to robustly localize remote sources; remove noise and multipath distortion from recorded signals; and determine the acoustic characteristics of the environment through which the sound waves have traveled, even when the recorded sounds originate from uncooperative sources or are merely ambient noise.
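
    A basic building block of the array-based source localization mentioned above is estimating the time difference of arrival between two sensors from the peak of their cross-correlation. A minimal sketch under that assumption (real localization chains add windowing, whitening, and multi-sensor geometry); the helper name `estimate_delay` is hypothetical.

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate the delay (in samples) of signal y relative to signal x
    from the peak of their full cross-correlation."""
    corr = np.correlate(y, x, mode="full")
    # Lag 0 sits at index len(x) - 1 of the full-mode output.
    return int(np.argmax(corr)) - (len(x) - 1)
```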

  4. High-speed varifocal imaging with a tunable acoustic gradient index of refraction lens.

    PubMed

    Mermillod-Blondin, Alexandre; McLeod, Euan; Arnold, Craig B

    2008-09-15

    Fluidic lenses allow for varifocal optical elements, but current approaches are limited by the speed at which focal length can be changed. Here we demonstrate the use of a tunable acoustic gradient (TAG) index of refraction lens as a fast varifocal element. The optical power of the TAG lens varies continuously, allowing for rapid selection and modification of the effective focal length at time scales of 1 μs and shorter. The wavefront curvature applied to the incident light is experimentally quantified as a function of time, and single-frame imaging is demonstrated. Results indicate that the TAG lens can successfully be employed to perform high-rate imaging at multiple locations.

  5. A novel imaging technique based on the spatial coherence of backscattered waves: demonstration in the presence of acoustical clutter

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Pinton, Gianmarco F.; Lediju, Muyinatu; Trahey, Gregg E.

    2011-03-01

    In the last 20 years, the number of suboptimal and inadequate ultrasound exams has increased. This trend has been linked to the increasing population of overweight and obese individuals. The primary causes of image degradation in these individuals are often attributed to phase aberration and clutter. Phase aberration degrades image quality by distorting the transmitted and received pressure waves, while clutter degrades image quality by introducing incoherent acoustical interference into the received pressure wavefront. Although significant research efforts have pursued the correction of image degradation due to phase aberration, few efforts have characterized or corrected image degradation due to clutter. We have developed a novel imaging technique that is capable of differentiating ultrasonic signals corrupted by acoustical interference. The technique, named short-lag spatial coherence (SLSC) imaging, is based on the spatial coherence of the received ultrasonic wavefront at small spatial distances across the transducer aperture. We demonstrate comparative B-mode and SLSC images using full-wave simulations that include the effects of clutter and show that SLSC imaging generates contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) that are significantly better than B-mode imaging under noise-free conditions. In the presence of noise, SLSC imaging significantly outperforms conventional B-mode imaging in all image quality metrics. We demonstrate the use of SLSC imaging in vivo and compare B-mode and SLSC images of human thyroid and liver.
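
    The SLSC computation described above can be sketched as follows: normalized correlations between element pairs of the delayed channel data, averaged per lag and summed over the short lags. This is a simplified single-pixel sketch assuming the published SLSC definition; axial-kernel handling and scan conversion are omitted, and `slsc_value` is a hypothetical helper name.

```python
import numpy as np

def slsc_value(channels, max_lag):
    """Short-lag spatial coherence for one image point.

    `channels` is (n_elements, n_samples): delayed RF data in the axial
    kernel around the point. Returns normalized pairwise correlation,
    averaged over element pairs at each lag and summed over lags
    1..max_lag (the "short lag" region of the coherence curve).
    """
    n = channels.shape[0]
    total = 0.0
    for m in range(1, max_lag + 1):
        lag_sum = 0.0
        for i in range(n - m):
            a, b = channels[i], channels[i + m]
            denom = np.sqrt((a @ a) * (b @ b))
            if denom > 0:
                lag_sum += (a @ b) / denom
        total += lag_sum / (n - m)
    return total
```

    For a perfectly coherent wavefront the per-lag coherence is 1, so the value approaches `max_lag`; incoherent clutter pushes it toward 0, which is the contrast mechanism the abstract exploits.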

  6. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the data volume. The resulting storage and transmission problems cannot be solved merely by expanding hard-disk capacity and upgrading transmission devices. Making full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and interprediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and frame I. Then, using this idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, with super-resolution reconstruction technology, the reconstructed video quality is further improved. Experiments show that the proposed compression method for a single image (frame I) and for video sequences outperforms HEVC in a low bit rate environment.

  7. Transthoracic Cardiac Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Bradway, David Pierson

    This dissertation investigates the feasibility of a real-time transthoracic Acoustic Radiation Force Impulse (ARFI) imaging system to measure myocardial function non-invasively in a clinical setting. Heart failure is a major cardiovascular disease and a leading cause of death in developed countries. Patients exhibiting heart failure with a low left ventricular ejection fraction (LVEF) can often be identified by clinicians, but patients with preserved LVEF may go undetected if they do not exhibit other signs and symptoms of heart failure. These cases motivate the development of transthoracic ARFI imaging to aid the early diagnosis of the structural and functional heart abnormalities leading to heart failure. M-mode ARFI imaging utilizes ultrasonic radiation force to displace tissue several micrometers in the direction of wave propagation. Conventional ultrasound tracks the response of the tissue to the force. This measurement is repeated rapidly at a location through the cardiac cycle, measuring timing and relative changes in myocardial stiffness. ARFI imaging was previously shown capable of measuring myocardial properties and function via invasive open-chest and intracardiac approaches. The prototype imaging system described in this dissertation is capable of rapid acquisition, processing, and display of ARFI images and shear wave elasticity imaging (SWEI) movies. Also presented is a rigorous safety analysis, including finite element method (FEM) simulations of tissue heating, hydrophone intensity and mechanical index (MI) measurements, and thermocouple measurements of transducer face heating. For the pulse sequences used in later animal and clinical studies, results from the safety analysis indicate that transthoracic ARFI imaging can be safely applied at rates and levels realizable on the prototype ARFI imaging system. 
Preliminary data are presented from in vivo trials studying changes in myocardial stiffness occurring under normal and abnormal

  8. A new look at deep-sea video

    USGS Publications Warehouse

    Chezar, H.; Lee, J.

    1985-01-01

    A deep-towed photographic system with completely self-contained recording instrumentation and power can obtain color-video and still-photographic transects along rough terrane without need for a long electrically conducting cable. Both the video- and still-camera systems utilize relatively inexpensive and proven off-the-shelf hardware adapted for deep-water environments. The small instrument frame makes the towed sled an ideal photographic tool for use on ship or small-boat operations. The system includes a temperature probe and altimeter that relay data acoustically from the sled to the surface ship. This relay enables the operator to monitor simultaneously water temperature and the precise height off the bottom. ?? 1985.

  9. Contrast-enhanced magneto-photo-acoustic imaging in vivo using dual-contrast nanoparticles☆

    PubMed Central

    Qu, Min; Mehrmohammadi, Mohammad; Truby, Ryan; Graf, Iulia; Homan, Kimberly; Emelianov, Stanislav

    2014-01-01

    By mapping the distribution of targeted plasmonic nanoparticles (NPs), photoacoustic (PA) imaging offers the potential to detect pathologies at an early stage. However, optical absorption by endogenous chromophores in the background tissue significantly reduces the contrast resolution of photoacoustic imaging. Previously, we introduced magneto-photo-acoustic (MPA) imaging, a synergistic combination of magneto-motive ultrasound (MMUS) and PA imaging, and demonstrated MPA contrast enhancement in cell culture studies. In the current study, contrast enhancement was investigated in vivo using MPA imaging augmented with dual-contrast nanoparticles. Liposomal nanoparticles (LNPs) possessing both optical absorption and magnetic properties were injected into a murine tumor model. First, photoacoustic signals were generated from both the endogenous absorbers in the tissue and the liposomal nanoparticles in the tumor. Then, given the significant differences in magnetic properties between tissue and LNPs, the magnetic response of the LNPs (i.e., the MMUS signal) was utilized to suppress the unwanted PA signals from the background tissue, thus improving the PA imaging contrast. In this study, we demonstrated 3D MPA imaging of an LNP-labeled xenografted tumor in a live animal. Compared to conventional PA imaging, MPA imaging showed significantly enhanced contrast between the nanoparticle-labeled tumor and the background tissue. Our results suggest the feasibility of MPA imaging for high-contrast in vivo mapping of dual-contrast nanoparticles. PMID:24653976
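
    One simple way to realize the suppression idea above, weighting each PA pixel by its normalized magneto-motive displacement so that non-magnetic background tissue is attenuated, can be sketched as follows. This is an assumed, illustrative weighting, not the study's actual MPA processing chain; `mpa_weight` is a hypothetical helper.

```python
import numpy as np

def mpa_weight(pa_image, mmus_disp):
    """Weight each PA pixel by its normalized magneto-motive displacement.

    Pixels that do not move with the applied magnetic field (background
    tissue) receive weight near 0; LNP-labeled regions are preserved.
    """
    w = np.clip(mmus_disp, 0.0, None)
    m = w.max()
    return pa_image * (w / m if m > 0 else w)
```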

  10. Stress-Induced Fracturing of Reservoir Rocks: Acoustic Monitoring and μCT Image Analysis

    NASA Astrophysics Data System (ADS)

    Pradhan, Srutarshi; Stroisz, Anna M.; Fjær, Erling; Stenebråten, Jørn F.; Lund, Hans K.; Sønstebø, Eyvind F.

    2015-11-01

    Stress-induced fracturing in reservoir rocks is an important issue for the petroleum industry. While productivity can be enhanced by a controlled fracturing operation, it can trigger borehole instability problems by reactivating existing fractures/faults in a reservoir. However, safe fracturing can improve the quality of operations during CO2 storage, geothermal installation and gas production at and from the reservoir rocks. Therefore, understanding the fracturing behavior of different types of reservoir rocks is a basic need for planning field operations toward these activities. In our study, stress-induced fracturing of rock samples has been monitored by acoustic emission (AE) and post-experiment computer tomography (CT) scans. We have used hollow cylinder cores of sandstones and chalks, which are representatives of reservoir rocks. The fracture-triggering stress has been measured for different rocks and compared with theoretical estimates. The population of AE events shows the location of main fracture arms which is in a good agreement with post-test CT image analysis, and the fracture patterns inside the samples are visualized through 3D image reconstructions. The amplitudes and energies of acoustic events clearly indicate initiation and propagation of the main fractures. Time evolution of the radial strain measured in the fracturing tests will later be compared to model predictions of fracture size.

  11. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms.

    PubMed

    Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling

    2015-09-01

    A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and furthermore shows a more realistic spatial variation than energy-based methods due to the fact that interference is considered.

  12. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.

  13. Enhance Video Film using Retnix method

    NASA Astrophysics Data System (ADS)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation are used as criteria in this paper, applied to each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 Lux); these different environments approximate the conditions of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip to obtain the enhanced film; second, to every individual image to obtain enhanced images, which are then compiled into the enhanced film. This paper shows that the enhancement technique yields a good-quality video film based on a statistical method, and it is recommended for use in different applications.
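
    One way to apply per-frame mean/standard-deviation criteria, so that frames shot under different light intensities are evened out, is a linear remapping to target statistics. This is a hypothetical sketch, since the paper's exact algorithm is not given here; `match_statistics` is an assumed helper name.

```python
import numpy as np

def match_statistics(frame, target_mean, target_std):
    """Linearly rescale a frame so its mean and standard deviation
    match the given targets (a simple statistical normalization)."""
    mu, sigma = frame.mean(), frame.std()
    if sigma == 0:
        return np.full(frame.shape, target_mean, dtype=float)
    return (frame - mu) * (target_std / sigma) + target_mean
```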

  14. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations involving image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy-balancer model that uses energy-based image features, i.e., texture, velocity, and edges. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme, for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
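
    For context on the advection step: the paper discretizes the advection equation with the high-order CIP scheme; the simpler first-order upwind baseline that CIP improves on looks like this in 1-D (illustrative only, not the paper's implementation).

```python
import numpy as np

def advect_upwind(u, c, dt, dx, steps):
    """First-order upwind update of the 1-D advection equation
    u_t + c * u_x = 0 (c > 0), with periodic boundaries.

    CIP-type schemes add interpolated gradient information to reduce the
    numerical diffusion this low-order scheme exhibits.
    """
    u = u.astype(float).copy()
    r = c * dt / dx  # CFL number; must satisfy r <= 1 for stability
    for _ in range(steps):
        u = u - r * (u - np.roll(u, 1))
    return u
```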

  15. High Resolution X-ray-Induced Acoustic Tomography

    PubMed Central

    Xiang, Liangzhong; Tang, Shanshan; Ahmad, Moiz; Xing, Lei

    2016-01-01

    Absorption based CT imaging has been an invaluable tool in medical diagnosis, biology, and materials science. However, CT requires a large set of projection data and high radiation dose to achieve superior image quality. In this letter, we report a new imaging modality, X-ray Induced Acoustic Tomography (XACT), which takes advantages of high sensitivity to X-ray absorption and high ultrasonic resolution in a single modality. A single projection X-ray exposure is sufficient to generate acoustic signals in 3D space because the X-ray generated acoustic waves are of a spherical nature and propagate in all directions from their point of generation. We demonstrate the successful reconstruction of gold fiducial markers with a spatial resolution of about 350 μm. XACT reveals a new imaging mechanism and provides uncharted opportunities for structural determination with X-ray. PMID:27189746

  16. Annotation of UAV surveillance video

    NASA Astrophysics Data System (ADS)

    Howlett, Todd; Robertson, Mark A.; Manthey, Dan; Krol, John

    2004-08-01

    Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable for UAV video is described. Object tracking is necessary for attaching the annotations to the objects of interest. A methodology and format is defined for encoding video annotations using the SMPTE Key-Length-Value encoding standard. This provides the following benefits: a non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation enabled, and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense's (DoD's) need for a video annotation capability. Current practices for creating annotated products are to capture a still image frame, annotate it using an Electric Light Table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing. It is too slow and there is a severe loss of information. This paper describes a capability for annotating directly on the video.

  17. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned air vehicles offers higher resolution, easy acquisition, and real-time access, and in recent years it has been widely used in mapping, target identification, and other fields. However, the video images are difficult to process: owing to platform limitations they are unstable, the targets move fast, and the shooting background is complex. In other fields, especially computer vision, research on video imagery is far more extensive and is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, including research purposes, data sources, and the pros and cons of each technique. It then explores the methods best suited to low-altitude remote sensing video image processing.

  18. State of the art in video system performance

    NASA Technical Reports Server (NTRS)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle consists of cameras, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid-state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.

  19. Acoustic Scattering Classification of Zooplankton and Microstructure

    DTIC Science & Technology

    2002-09-30

    the scattering in different areas. In some cases, siphonophores dominated the scattering; in other cases, euphausiids were the dominant scatterers...juvenile form of siphonophores) through the use of BIOMAPER-II acoustics and video systems. Because of their fragility, these organisms are...scattering strength, total biomass, siphonophore abundance, and water temperature, throughout the water column in a one-hour section of a transect

  20. Individual differences in the processing of smoking-cessation video messages: An imaging genetics study.

    PubMed

    Shi, Zhenhao; Wang, An-Li; Aronowitz, Catherine A; Romer, Daniel; Langleben, Daniel D

    2017-09-01

    Studies testing the benefits of enriching smoking-cessation video ads with attention-grabbing sensory features have yielded variable results. The dopamine transporter gene (DAT1) has been implicated in attention deficits. We hypothesized that DAT1 polymorphism is partially responsible for this variability. Using functional magnetic resonance imaging, we examined brain responses to videos high or low in attention-grabbing features, indexed by "message sensation value" (MSV), in 53 smokers genotyped for DAT1. Compared to other smokers, 10/10 homozygotes showed greater neural response to High- vs. Low-MSV smoking-cessation videos in two a priori regions of interest: the right temporoparietal junction and the right ventrolateral prefrontal cortex. These regions are known to underlie stimulus-driven attentional processing. Exploratory analysis showed that the right temporoparietal response positively predicted follow-up smoking behavior indexed by urine cotinine. Our findings suggest that responses to attention-grabbing features in smoking-cessation messages are affected by the DAT1 genotype. Copyright © 2017. Published by Elsevier B.V.

  1. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color-space conversion, which allow efficient detection of a single color against a complex background and lighting, as well as detection of objects on a homogeneous background. It presents the results of an analysis of segmentation algorithms of this type and discusses the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it allows the analysis of objects in an image when no image dictionary or knowledge base is available, and it also addresses the problem of choosing the optimal frame-quantization parameters for video analysis.
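The color-space-conversion idea can be sketched as follows: convert RGB to HSV so that chromaticity is decoupled from brightness, then threshold on hue. The function name, tolerances, and pixel format below are illustrative assumptions, not the authors' algorithm.

```python
import colorsys

def mask_single_color(pixels, hue_center, hue_tol=0.05, s_min=0.4, v_min=0.2):
    """Boolean mask selecting pixels whose hue lies near a target hue.

    Working in HSV makes single-color detection far more robust to complex
    lighting than thresholding raw RGB, since brightness changes mostly
    affect V while the hue H stays put.
    pixels: iterable of (r, g, b) tuples with components in [0, 1].
    """
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        dh = min(abs(h - hue_center), 1.0 - abs(h - hue_center))  # hue is circular
        mask.append(dh <= hue_tol and s >= s_min and v >= v_min)
    return mask
```

The saturation and value floors reject gray and dark pixels whose hue is numerically defined but perceptually meaningless.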

  2. Acoustic field modulation in regenerators

    NASA Astrophysics Data System (ADS)

    Hu, J. Y.; Wang, W.; Luo, E. C.; Chen, Y. Y.

    2016-12-01

    The regenerator is a key component that transfers energy between heat and work, and the conversion efficiency is significantly influenced by the acoustic field in the regenerator. Much effort has been spent to quantitatively determine this influence, but few comprehensive experimental verifications have been performed because of the difficulty of modulating and measuring the acoustic field. In this paper, a method requiring two compressors is introduced and theoretically investigated for achieving acoustic field modulation in the regenerator. One compressor outputs the acoustic power for the regenerator; the other acts as a phase shifter. An RC load dissipates the acoustic power leaving both the regenerator and the latter compressor. The acoustic field can be modulated by adjusting the currents in the two compressors and the opening of the RC load. The acoustic field is measured with pressure sensors instead of flow-field imaging equipment, thereby greatly simplifying the experiment.

  3. Contribution of the supraglottic larynx to the vocal product: imaging and acoustic analysis

    NASA Astrophysics Data System (ADS)

    Gracco, L. Carol

    1996-04-01

    Horizontal supraglottic laryngectomy is a surgical procedure to remove a mass lesion located in the region of the pharynx superior to the true vocal folds. In contrast to full or partial laryngectomy, patients who undergo horizontal supraglottic laryngectomy often present with little or no involvement of the true vocal folds. This population provides an opportunity to examine the acoustic consequences of altering the pharynx while sparing the laryngeal sound source. Acoustic and magnetic resonance imaging (MRI) data were acquired in a group of four patients before and after supraglottic laryngectomy. Acoustic measures included the identification of vocal tract resonances and the fundamental frequency of vocal fold vibration. 3D reconstructions of the pharyngeal portion of each subject's vocal tract were made from MRIs taken during phonation, and volume measures were obtained. These measures reveal a variable, but often dramatic, difference in the surgically altered area of the pharynx and changes in the formant frequencies of the vowel /i/ post-surgically. In some cases the presence of the tumor created a deviation from the expected formant values pre-operatively, with post-operative values approaching normal. Patients who also underwent radiation treatment after surgery tended to have greater constriction in the pharyngeal area of the vocal tract.

  4. Contrast-enhanced magneto-photo-acoustic imaging in vivo using dual-contrast nanoparticles.

    PubMed

    Qu, Min; Mehrmohammadi, Mohammad; Truby, Ryan; Graf, Iulia; Homan, Kimberly; Emelianov, Stanislav

    2014-06-01

    By mapping the distribution of targeted plasmonic nanoparticles (NPs), photoacoustic (PA) imaging offers the potential to detect pathologies in their early stages. However, optical absorption by endogenous chromophores in the background tissue significantly reduces the contrast resolution of PA imaging. Previously, we introduced magneto-photo-acoustic (MPA) imaging - a synergistic combination of magneto-motive ultrasound (MMUS) and PA imaging - and demonstrated MPA contrast enhancement in cell culture studies. In the current study, contrast enhancement was investigated in vivo using MPA imaging augmented with dual-contrast nanoparticles. Liposomal nanoparticles (LNPs) possessing both optical absorption and magnetic properties were injected into a murine tumor model. First, photoacoustic signals were generated from both the endogenous absorbers in the tissue and the liposomal nanoparticles in the tumor. Then, given the significant differences in magnetic properties between tissue and LNPs, the magnetic response of the LNPs (i.e., the MMUS signal) was utilized to suppress the unwanted PA signals from the background tissue, thus improving the PA imaging contrast. In this study, we demonstrated 3D MPA imaging of an LNP-labeled xenografted tumor in a live animal. Compared to conventional PA imaging, the MPA images show significantly enhanced contrast between the nanoparticle-labeled tumor and the background tissue. Our results suggest the feasibility of MPA for high-contrast in vivo mapping of dual-contrast nanoparticles.

  5. Two sided residual refocusing for acoustic lens based photoacoustic imaging system.

    PubMed

    Kalloor Joseph, Francis; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2018-05-30

    In photoacoustic (PA) imaging, an acoustic lens-based system can form a focused image of an object plane, and a real-time C-scan PA image can be formed by simply time-gating the transducer response. While most of the focusing is done by the lens, residual refocusing is needed to image multiple depths with high resolution simultaneously. However, a refocusing algorithm for the PA camera has not previously been studied in the literature. In this work, we reformulate this residual refocusing problem for a PA camera as two-sided wave propagation from a planar sensor array: one side of the problem deals with forward wave propagation while the other deals with time reversal. We chose a Fast Fourier Transform (FFT) based wave propagation model for the refocusing to maintain the real-time nature of the system. We conducted Point Spread Function (PSF) measurement experiments at multiple depths and refocused the signals using the proposed method. The Full Width at Half Maximum (FWHM), peak value, and Signal to Noise Ratio (SNR) of the refocused PSF are analyzed to quantify the effect of refocusing. We believe that a two-dimensional transducer array combined with the proposed refocusing can lead to real-time volumetric imaging with a lens-based PA imaging system. © 2018 Institute of Physics and Engineering in Medicine.
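An FFT-based propagation model of the kind described is typically the angular-spectrum method: transform the recorded field to spatial frequencies, multiply by a plane-wave phase factor for the propagation distance, and transform back; negating the distance gives the time-reversal step. The 1-D sketch below is an illustrative assumption (the names, sampling, and the choice to simply discard evanescent components are not from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, dz):
    """Propagate a sampled 1-D monochromatic field by a distance dz.

    FFT to the spatial-frequency domain, apply the plane-wave phase
    exp(i*kz*dz), and inverse-FFT back; dz < 0 performs time reversal.
    Evanescent components (|kx| > k) are discarded for stability.
    """
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)
    propagating = kx**2 < k**2
    kz = np.sqrt(np.where(propagating, k**2 - kx**2, 0.0))
    spectrum = np.fft.fft(field) * propagating
    return np.fft.ifft(spectrum * np.exp(1j * kz * dz))
```

Propagating forward by dz and then backward by -dz recovers the original field (up to the discarded evanescent energy), which is exactly the two-sided forward/time-reversal pairing the abstract describes.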

  6. More About The Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1996-01-01

    Report presents additional information about the system described in "Video Event Trigger" (LEW-15076). This digital electronic system processes video-image data to generate a trigger signal when the image shows a significant change, such as motion; the appearance or disappearance of an object; or a change in color, brightness, or dilation of an object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when they are supposed to be unoccupied, looking for fires, tracking airplanes or other moving objects, identifying missing or defective parts on production lines, and video recording of automobile crash tests.

  7. Acoustic radiation force impulse (ARFI) imaging: Characterizing the mechanical properties of tissues using their transient response to localized force

    NASA Astrophysics Data System (ADS)

    Nightingale, Kathryn R.; Palmeri, Mark L.; Congdon, Amy N.; Frinkely, Kristin D.; Trahey, Gregg E.

    2004-05-01

    Acoustic radiation force impulse (ARFI) imaging utilizes brief, high energy, focused acoustic pulses to generate radiation force in tissue, and conventional diagnostic ultrasound methods to detect the resulting tissue displacements in order to image the relative mechanical properties of tissue. The magnitude and spatial extent of the applied force is dependent upon the transmit beam parameters and the tissue attenuation. Forcing volumes are on the order of 5 mm³, pulse durations are less than 1 ms, and tissue displacements are typically several microns. Images of tissue displacement reflect local tissue stiffness, with softer tissues (e.g., fat) displacing farther than stiffer tissues (e.g., muscle). Parametric images of maximum displacement, time to peak displacement, and recovery time provide information about tissue material properties and structure. In both in vivo and ex vivo data, structures shown in matched B-mode images are in good agreement with those shown in ARFI images, with comparable resolution. Potential clinical applications under investigation include soft tissue lesion characterization, assessment of focal atherosclerosis, and imaging of thermal lesion formation during tissue ablation procedures. Results from ongoing studies will be presented. [Work supported by NIH Grant R01 EB002132-03, and the Whitaker Foundation. System support from Siemens Medical Solutions USA, Inc.]
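The parametric images mentioned (maximum displacement, time to peak, recovery time) each reduce a pixel's displacement-versus-time curve to a single scalar. A toy per-pixel extraction, with an arbitrary 1/e recovery criterion assumed purely for illustration, might look like:

```python
import math

def arfi_parameters(times, disp):
    """Reduce one pixel's displacement-vs-time curve to ARFI parametric
    values: maximum displacement, time to peak, and a recovery time (here,
    time after the peak for displacement to fall below 1/e of the maximum).
    """
    peak = max(disp)
    i_peak = disp.index(peak)
    t_peak = times[i_peak]
    t_rec = next((t - t_peak
                  for t, d in zip(times[i_peak:], disp[i_peak:])
                  if d <= peak / math.e), None)
    return peak, t_peak, t_rec
```

Mapping each scalar over the image grid yields the three parametric images described in the abstract.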

  8. Determining anisotropic conductivity using diffusion tensor imaging data in magneto-acoustic tomography with magnetic induction

    NASA Astrophysics Data System (ADS)

    Ammari, Habib; Qiu, Lingyun; Santosa, Fadil; Zhang, Wenlong

    2017-12-01

    In this paper we present a mathematical and numerical framework for a procedure of imaging the anisotropic electrical conductivity tensor by integrating magneto-acoustic tomography with data acquired from diffusion tensor imaging. Magneto-acoustic tomography with magnetic induction (MAT-MI) is a hybrid, non-invasive medical imaging technique that produces conductivity images with improved spatial resolution and accuracy. Diffusion tensor imaging (DTI) is also a non-invasive technique, used for characterizing the diffusion properties of water molecules in tissues. We propose a model for anisotropic conductivity in which the conductivity is proportional to the diffusion tensor. Under this assumption, we propose an optimal control approach for reconstructing the anisotropic electrical conductivity tensor. We prove convergence and Lipschitz-type stability of the algorithm and present numerical examples to illustrate its accuracy and feasibility.

  9. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
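The standard DCT coding approach mentioned above rests on the type-II DCT; a deliberately naive O(N²) 1-D version (production coders use fast, typically 8×8 2-D, implementations) can be written as:

```python
import math

def dct2(block):
    """Orthonormal type-II DCT of a 1-D block (naive O(N^2) form)."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n, x in enumerate(block))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out
```

A constant block compresses to a single DC coefficient, which is why the DCT concentrates energy so effectively on smooth image regions and why simple variations on it can still give high-quality reconstructions.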

  10. Model simulations of line-of-sight effects in airglow imaging of acoustic and fast gravity waves from ground and space

    NASA Astrophysics Data System (ADS)

    Aguilar Guerrero, J.; Snively, J. B.

    2017-12-01

    Acoustic waves (AWs) have been predicted to be detectable by imaging systems for the OH airglow layer [Snively, GRL, 40, 2013], and have been identified in spectrometer data [Pilger et al., JASP, 104, 2013]. AWs are weak in the mesopause region, but can attain large amplitudes in the F region [Garcia et al., GRL, 40, 2013] and have local impacts on the thermosphere and ionosphere. Similarly, fast GWs, with phase speeds over 100 m/s, may propagate to the thermosphere and impart significant local body forcing [Vadas and Fritts, JASTP, 66, 2004]. Both have been clearly identified in ionospheric total electron content (TEC), such as following the 2013 Moore, OK, EF5 tornado [Nishioka et al., GRL, 40, 2013] and following the 2011 Tohoku-Oki tsunami [e.g., Galvan et al., RS, 47, 2012, and references therein], but AWs have yet to be unambiguously imaged in MLT data and fast GWs have low amplitudes near the threshold of detection; nevertheless, recent imaging systems have sufficient spatial and temporal resolution and sensitivity to detect both AWs and fast GWs with short periods [e.g., Pautet et al., AO, 53, 2014]. The associated detectability challenges are related to the transient nature of their signatures and to systematic challenges due to line-of-sight (LOS) effects such as enhancements and cancelations due to integration along aligned or oblique wavefronts and geometric intensity enhancements. We employ a simulated airglow imager framework that incorporates 2D and 3D emission rate data and performs the necessary LOS integrations for synthetic imaging from ground- and space-based platforms to assess relative intensity and temperature perturbations. We simulate acoustic and fast gravity wave perturbations to the hydroxyl layer from a nonlinear, compressible model [e.g., Snively, 2013] for different idealized and realistic test cases. 
The results show clear signal enhancements when acoustic waves are imaged off-zenith or off-nadir and the temporal evolution of these

  11. Contrast Enhanced Superharmonic Imaging for Acoustic Angiography Using Reduced Form-factor Lateral Mode Transmitters for Intravascular and Intracavity Applications

    PubMed Central

    Wang, Zhuochen; Martin, K. Heath; Huang, Wenbin; Dayton, Paul A.; Jiang, Xiaoning

    2016-01-01

    Techniques to image the microvasculature may play an important role in imaging tumor-related angiogenesis and the vasa vasorum associated with vulnerable atherosclerotic plaques. However, the microvasculature associated with these pathologies is difficult to detect using traditional B-mode ultrasound or even harmonic imaging, due to small vessel size and poor differentiation from surrounding tissue. Acoustic angiography, a microvascular imaging technique which utilizes superharmonic imaging (detection of higher-order harmonics of the microbubble response), can yield a much higher contrast-to-tissue ratio (CTR) than second harmonic imaging methods. In this work, two dual-frequency transducers using lateral mode transmitters were developed for superharmonic detection and acoustic angiography imaging in intracavity applications. A single-element dual-frequency IVUS transducer was developed for concept validation, which achieved larger signal amplitude, better contrast-to-noise ratio (CNR), and shorter pulse length compared with previous work. A dual-frequency PMN-PT array transducer was then developed for superharmonic imaging with dynamic focusing. The axial and lateral sizes of the microbubbles in a 200 μm tube were measured to be 269 μm and 200 μm, respectively. The maximum CNR was calculated to be 22 dB. These results show that superharmonic imaging with a low-frequency lateral mode transmitter is a feasible alternative to thickness mode transmitters when final transducer size requirements dictate design choices. PMID:27775903

  12. Using video playbacks to study visual communication in a marine fish, Salaria pavo.

    PubMed

    Gonçalves; Oliveira; Körner; Poschadel; Schlupp

    2000-09-01

    Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial, as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live and a video image of a P. mexicana male, suggesting a response to video images as strong as to live animals. We discuss differences between the species that may explain their opposite reactions to video images. Copyright 2000 The Association for the Study of Animal Behaviour.

  13. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.

    1994-01-01

    The primary goal of this research is to develop a solid-state high-definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels/frame. This imager will offer an order-of-magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their subsequent integration. The camera chip comprises three distinct functions: (1) image capture via an array of avalanche photodiodes (APDs); (2) charge collection, storage, and overflow control via a charge transfer transistor device (CTD); and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APDs allows for front-end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently, work is progressing toward the optimization of each of these component devices. In addition to the development of the three distinct components, work toward their integration and manufacturability is also progressing. The component designs are chosen not only to meet individual specifications but to provide overall system-level performance suitable for HDTV operation upon integration; the ultimate manufacturability and reliability of the chip constrain the design as well. The progress made during this period is described in detail.

  14. Integration of prior knowledge into dense image matching for video surveillance

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.

    2014-08-01

    Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups they do not easily generalize to more challenging camera configurations. In the context of video surveillance the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.

  15. Video-rate volumetric functional imaging of the brain at synaptic resolution.

    PubMed

    Lu, Rongwen; Sun, Wenzhi; Liang, Yajie; Kerlin, Aaron; Bierfeld, Jens; Seelig, Johannes D; Wilson, Daniel E; Scholl, Benjamin; Mohar, Boaz; Tanimoto, Masashi; Koyama, Minoru; Fitzpatrick, David; Orger, Michael B; Ji, Na

    2017-04-01

    Neurons and neural networks often extend hundreds of micrometers in three dimensions. Capturing the calcium transients associated with their activity requires volume imaging methods with subsecond temporal resolution. Such speed is a challenge for conventional two-photon laser-scanning microscopy, because it depends on serial focal scanning in 3D and indicators with limited brightness. Here we present an optical module that is easily integrated into standard two-photon laser-scanning microscopes to generate an axially elongated Bessel focus, which when scanned in 2D turns frame rate into volume rate. We demonstrated the power of this approach in enabling discoveries for neurobiology by imaging the calcium dynamics of volumes of neurons and synapses in fruit flies, zebrafish larvae, mice and ferrets in vivo. Calcium signals in objects as small as dendritic spines could be resolved at video rates, provided that the samples were sparsely labeled to limit overlap in their axially projected images.

  16. Elasticity imaging of speckle-free tissue regions with moving acoustic radiation force and phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hsieh, Bao-Yu; Song, Shaozhen; Nguyen, Thu-Mai; Yoon, Soon Joon; Shen, Tueng; Wang, Ruikang; O'Donnell, Matthew

    2016-03-01

    Phase-sensitive optical coherence tomography (PhS-OCT) can be utilized for quantitative shear-wave elastography using speckle tracking. However, current approaches cannot directly reconstruct elastic properties in speckle-less or speckle-free regions, for example within the crystalline lens in ophthalmology. Investigating the elasticity of the crystalline lens could improve understanding and help manage presbyopia-related pathologies that change biomechanical properties. We propose to reconstruct the elastic properties in speckle-less regions by sequentially launching shear waves with moving acoustic radiation force (mARF), and then detecting the displacement at a specific speckle-generating position, or limited set of positions, with PhS-OCT. A linear ultrasound array (with a center frequency of 5 MHz) interfaced with a programmable imaging system was designed to launch shear waves by mARF. Acoustic sources were electronically translated to launch shear waves at laterally shifted positions, where displacements were detected by speckle tracking images produced by PhS-OCT operating in M-B mode with a 125-kHz A-line rate. Local displacements were calculated and stitched together sequentially based on the distance between the acoustic source and the detection beam. Shear wave speed, and the associated elasticity map, were then reconstructed based on a time-of-flight algorithm. In this study, moving-source shear wave elasticity imaging (SWEI) can highlight a stiff inclusion within an otherwise homogeneous phantom but with a CNR increased by 3.15 dB compared to a similar image reconstructed with moving-detector SWEI. Partial speckle-free phantoms were also investigated to demonstrate that the moving-source sequence could reconstruct the elastic properties of speckle-free regions. Results show that harder inclusions within the speckle-free region can be detected, suggesting that this imaging method may be able to detect the elastic properties of the crystalline lens.
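The time-of-flight reconstruction described above amounts to fitting lateral source-detector distance against shear-wave arrival time: the slope is the shear-wave speed c, and for soft, nearly incompressible tissue the Young's modulus is commonly estimated as E ≈ 3ρc². The helper below is an illustrative sketch under those standard assumptions, not the authors' code:

```python
def shear_modulus_from_tof(positions_m, arrival_s, density=1000.0):
    """Least-squares slope of lateral position vs. arrival time gives the
    shear-wave speed c (m/s); Young's modulus then follows as E = 3*rho*c**2
    under the usual soft-tissue incompressibility assumption."""
    n = len(positions_m)
    t_mean = sum(arrival_s) / n
    x_mean = sum(positions_m) / n
    num = sum((t - t_mean) * (x - x_mean)
              for t, x in zip(arrival_s, positions_m))
    den = sum((t - t_mean) ** 2 for t in arrival_s)
    c = num / den                      # shear-wave speed, m/s
    return c, 3.0 * density * c * c    # (speed, Young's modulus in Pa)
```

Fitting over several laterally shifted source positions, as in the moving-source sequence, averages out noise in any single arrival-time estimate.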

  17. A new acoustic lens material for large area detectors in photoacoustic breast tomography

    PubMed Central

    Xia, Wenfeng; Piras, Daniele; van Hespen, Johan C.G.; Steenbergen, Wiendelt; Manohar, Srirang

    2013-01-01

    Objectives: We introduce a new acoustic lens material for photoacoustic tomography (PAT) that improves lateral resolution while possessing excellent acoustic impedance matching with tissue, minimizing lens-induced image artifacts. Background: A large-surface-area detector is preferable for detecting the weak signals in photoacoustic mammography because of its high sensitivity; the lateral resolution is then limited by the narrow acceptance angle of such detectors. Acoustic lenses made of acrylic plastic (PMMA) have been used to enlarge the acceptance angle of such detectors and improve lateral resolution. However, such PMMA lenses introduce image artifacts due to internal reflections of ultrasound within the lenses, the result of acoustic impedance mismatch with the coupling medium or tissue. Methods: A new lens is proposed based on the two-component resin Stycast 1090SI. We characterized the acoustic properties of the proposed lens material in comparison with the commonly used PMMA, inspecting the speed of sound, acoustic attenuation, and density. We fabricated acoustic lenses from the new material and from PMMA, and studied their effect on detector performance by comparing finite element (FEM) simulations and measurements of directional sensitivity, pulse-echo response, and frequency response. We further investigated the effect of the acoustic lenses on the image quality of a photoacoustic breast tomography system using k-Wave simulations and experiments. Results: Our acoustic characterization shows that Stycast 1090SI has tissue-like acoustic impedance, high speed of sound, and low acoustic attenuation. These acoustic properties make it an excellent acoustic lens material for minimizing acoustic insertion loss. Both acoustic lenses show significant enlargement of the detector acceptance angle and lateral resolution improvement in modeling and experiments. However, the image artifacts induced by the presence of an acoustic lens are reduced using the proposed

  18. Video bioinformatics analysis of human embryonic stem cell colony growth.

    PubMed

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-05-20

    Because video data are complex and comprise many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images, using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created: the first segmented the image into colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the growth rate of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. The data obtained using the CL-Quant recipes and Photoshop were virtually identical, indicating that the CL-Quant recipes were accurate. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes, such as migration, apoptosis, and cell adhesion.
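The three-step pipeline above (segment, refine, count pixels over time) can be caricatured in a few lines of Python; the simple intensity threshold and the linear growth-rate definition are illustrative stand-ins for the CL-Quant recipes, not their actual implementation:

```python
def colony_areas(frames, threshold):
    """Segment each frame by intensity thresholding and count the
    foreground (colony) pixels - a toy version of recipes 1 and 3.
    frames: list of frames, each a list of rows of pixel intensities."""
    return [sum(1 for row in frame for px in row if px >= threshold)
            for frame in frames]

def growth_rate(areas, hours_per_frame):
    """Average fractional area increase per hour over the time-lapse."""
    total_increase = areas[-1] / areas[0] - 1.0
    return total_increase / ((len(areas) - 1) * hours_per_frame)
```

Plotting the per-frame pixel counts against time gives the colony growth curve that the manual Photoshop analysis was used to verify.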

  19. Oil/water nano-emulsion loaded with cobalt ferrite oxide nanocubes for photo-acoustic and magnetic resonance dual imaging in cancer: in vitro and preclinical studies.

    PubMed

    Vecchione, Raffaele; Quagliariello, Vincenzo; Giustetto, Pierangela; Calabria, Dominic; Sathya, Ayyappan; Marotta, Roberto; Profeta, Martina; Nitti, Simone; Silvestri, Niccolò; Pellegrino, Teresa; Iaffaioli, Rosario V; Netti, Paolo Antonio

    2017-01-01

Dual imaging dramatically improves detection and early diagnosis of cancer. In this work we present an oil-in-water (O/W) nano-emulsion stabilized with lecithin and loaded with cobalt ferrite oxide (Co0.5Fe2.5O4) nanocubes for photo-acoustic and magnetic resonance dual imaging. The nanocarrier is responsive in in vitro photo-acoustic and magnetic resonance imaging (MRI) tests. A clear and significant time-dependent accumulation in tumor tissue is shown in in vivo photo-acoustic studies on a murine melanoma xenograft model. The proposed O/W nano-emulsion also exhibits high r2/r1 values (ranging from 45 to 85, depending on the magnetic field), suggesting possible use as a T2-weighted image contrast agent. In addition, viability and cellular uptake studies show no significant cytotoxicity on the fibroblast cell line. We also tested the O/W nano-emulsion loaded with curcumin against melanoma cancer cells, demonstrating significant cytotoxicity and thus possible therapeutic effects in addition to the in vivo imaging. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Examining the Impact of Video Modeling Techniques on the Efficacy of Clinical Voice Assessment.

    PubMed

    Werner, Cara; Bowyer, Samantha; Weinrich, Barbara; Gottliebson, Renee; Brehm, Susan Baker

    2017-01-01

The purpose of the current study was to determine whether presenting patients with a video model improves the efficacy of the assessment, as defined by efficiency and decreased variability in trials, during the acoustic component of voice evaluations. Twenty pediatric participants with a mean age of 7.6 years (SD = 1.50; range = 6-11 years), 32 college-age participants with a mean age of 21.32 years (SD = 1.61; range = 18-30 years), and 17 adult participants with a mean age of 54.29 years (SD = 2.78; range = 50-70 years) were included in the study and divided into experimental and control groups. The experimental group viewed a training video prior to receiving verbal instructions and performing acoustic assessment tasks, whereas the control group received verbal instruction only prior to completing the acoustic assessment. Primary measures included the number of clinician cues required and instructional time. Standard deviations of acoustic measurements (eg, minimum and maximum frequency) were also examined to determine effects on stability. Individuals in the experimental group required significantly fewer cues (P = 0.012) than the control group. Although some trends were observed in instructional time and stability of measurements, no significant differences were found. The findings of this study may be useful to speech-language pathologists for improving the assessment of patients' voice disorders with the use of video modeling. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  1. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or store) is decomposed into several independent areas that are assigned according to a specific scheme. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise into the reconstruction of the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. First, we add a specific encryption level related to the different areas of the spectral plane; then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  2. Acoustic Characterization of Soil

    DTIC Science & Technology

    1996-03-28

modified SAR imaging algorithm. In the acoustic subsurface imaging scenario, the "object" to be imaged (i.e., cultural artifacts... subsurface imaging scenario. To combat this potential difficulty we can utilize a new SAR imaging algorithm (Lee et al., 1996) derived from a geophysics...essentially a transmit plane wave. This is a cost-effective means to evaluate the feasibility of subsurface imaging. A more complete (and costly

  3. Acoustic property reconstruction of a neonate Yangtze finless porpoise's (Neophocaena asiaeorientalis) head based on CT imaging.

    PubMed

    Wei, Chong; Wang, Zhitao; Song, Zhongchang; Wang, Kexiong; Wang, Ding; Au, Whitlow W L; Zhang, Yu

    2015-01-01

The reconstruction of the acoustic properties of a neonate finless porpoise's head was performed using X-ray computed tomography (CT). The head of the deceased neonate porpoise was also segmented across the body axis and cut into slices. The averaged sound velocity and density were measured, and the Hounsfield units (HU) of the corresponding slices were obtained from computed tomography scanning. A regression analysis was employed to show the linear relationships between the Hounsfield unit and both the sound velocity and density of samples. Furthermore, the CT imaging data were used to compare the HU value, sound velocity, density, and acoustic characteristic impedance of the main tissues in the porpoise's head. The results showed that the linear relationships between HU and both sound velocity and density were qualitatively consistent with previous studies on Indo-Pacific humpback dolphins and Cuvier's beaked whales. However, there was no significant increase in sound velocity or acoustic impedance from the inner core to the outer layer in this neonate finless porpoise's melon.

  4. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, i.e., features learned from RGB videos are used to recognize actions in depth videos. More specifically, we take three steps to solve this problem. First, unlike a still image, a video is more complex because it carries both spatial and temporal information; to encode this information compactly, the dynamic image method is used to represent each RGB or depth video as a single image, after which most image feature-extraction methods can be applied to video. Second, since each video is represented as an image, a standard CNN model can be used for training and testing; the CNN also serves as a feature extractor owing to its powerful representational ability. Third, because RGB and depth videos belong to two different domains, domain adaptation is used to make the two feature domains more similar, after which the features learned from the RGB model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and it achieves more than 2% accuracy improvement using domain adaptation from RGB to depth action recognition.
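The dynamic image construction mentioned above can be sketched with the commonly cited linear approximation to rank pooling, in which frame t (of T) receives weight 2t − T − 1; this is a generic sketch of the technique, not the authors' exact implementation.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video (T, H, W[, C]) to one image by approximate rank pooling.

    Weights alpha_t = 2t - T - 1 (t = 1..T) emphasize late frames positively
    and early frames negatively, encoding temporal evolution in one image.
    """
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    alpha = 2.0 * np.arange(1, T + 1) - T - 1  # e.g. T=4 -> [-3, -1, 1, 3]
    return np.tensordot(alpha, frames, axes=(0, 0))

video = np.random.rand(16, 64, 64, 3)  # toy 16-frame RGB clip
di = dynamic_image(video)
print(di.shape)  # (64, 64, 3)
```

Note that the weights sum to zero, so a completely static video maps to a (near-)zero dynamic image: only temporal change survives the pooling, which is what makes the representation useful for action recognition.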

  5. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    DOEpatents

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
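The intensity-ratio processing can be illustrated with generic two-color (ratio) pyrometry under the Wien approximation. This is a textbook sketch, not the patented algorithm: the channel wavelengths and the graybody assumption are ours.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant [m*K]

def wien_intensity(lam, T):
    """Graybody spectral intensity under the Wien approximation (arb. units)."""
    return lam**-5 * np.exp(-C2 / (lam * T))

def ratio_temperature(I1, I2, lam1, lam2):
    """Two-color pyrometry: invert the intensity ratio at two wavelengths for T."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / np.log((lam2 / lam1)**5 * I2 / I1)

# Round trip: synthesize "green" and "red" channel intensities at 1800 K,
# then recover the temperature from their ratio.
T_true = 1800.0
I_g = wien_intensity(550e-9, T_true)
I_r = wien_intensity(650e-9, T_true)
print(round(ratio_temperature(I_g, I_r, 550e-9, 650e-9)))  # 1800
```

Because the emissivity cancels in the ratio (for a graybody), the recovered temperature is independent of absolute signal level, which is the appeal of ratio-based thermal imaging.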

  6. Sensing the delivery and endocytosis of nanoparticles using magneto-photo-acoustic imaging

    PubMed Central

    Qu, M.; Mehrmohammadi, M.; Emelianov, S.Y.

    2015-01-01

Many biomedical applications necessitate targeted intracellular delivery of nanomaterials to specific cells. Therefore, a non-invasive and reliable imaging tool is required to detect both the delivery and the cellular endocytosis of the nanoparticles. Herein, we demonstrate that magneto-photo-acoustic (MPA) imaging can be used to monitor the delivery and to identify endocytosis of magnetic and optically absorbing nanoparticles. The relationship between photoacoustic (PA) and magneto-motive ultrasound (MMUS) signals from the in vitro samples was analyzed to identify the delivery and endocytosis of nanoparticles. The results indicated that during the delivery of nanoparticles to the vicinity of the cells, the PA and MMUS signals are almost linearly proportional. However, accumulation of nanoparticles within the cells leads to a nonlinear MMUS-PA relationship, due to non-linear MMUS signal amplification. Therefore, through longitudinal MPA imaging, it is possible to monitor the delivery of nanoparticles and identify their endocytosis by living cells. PMID:26640773

  7. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one of whom had designed and developed TV systems for the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. The camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  8. Venus in motion: An animated video catalog of Pioneer Venus Orbiter Cloud Photopolarimeter images

    NASA Technical Reports Server (NTRS)

    Limaye, Sanjay S.

    1992-01-01

Images of Venus acquired by the Pioneer Venus Orbiter Cloud Photopolarimeter (OCPP) during the 1982 opportunity have been utilized to create a short video summary of the data. The raw roll-by-roll images were first navigated using the spacecraft attitude and orbit information along with the CPP instrument pointing information. The limb darkening introduced by the variation of solar illumination geometry and the viewing angle was then modelled and removed. The images were then projected to simulate a view from a fixed perspective, with the observer 10 Venus radii away, above a Venus latitude of 30 degrees south and a longitude of 60 degrees west. A total of 156 images from the 1982 opportunity have been animated at different dwell rates.

  9. Training with video imaging improves the initial intubation success rates of paramedic trainees in an operating room setting.

    PubMed

    Levitan, R M; Goldman, T S; Bryan, D A; Shofer, F; Herlich, A

    2001-01-01

Video imaging of intubation as seen by the laryngoscopist has not been a part of traditional instruction methods, and its potential impact on novice intubation success rates has not been evaluated. We prospectively tracked the success rates of novice intubators in paramedic classes who were required to watch a 26-minute instructional videotape made with a direct laryngoscopy imaging system (video group). We compared the prospectively obtained intubation success rate of the video group against retrospectively collected data from prior classes of paramedic students (traditional group) in the same training program. All classes received the same didactic airway instruction, the same mannequin practice time, and the same paramedic textbook, and were trained in the same operating room with the same teaching staff. The traditional group (n=113, total attempts 783) had a mean individual intubation success rate of 46.7% (95% confidence interval 42.2% to 51.3%). The video group (n=36, total attempts 102) had a mean individual intubation success rate of 88.1% (95% confidence interval 79.6% to 96.5%). The difference in mean intubation success rates between the 2 groups was 41.4% (95% confidence interval 31.1% to 50.7%, P <.0001). The 2 groups did not differ with respect to age, male sex, or level of education. An instructional videotape made with the direct laryngoscopy video system significantly improved the initial success rates of novice intubators in an operating room setting.
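The difference-of-means interval can be approximately reconstructed from the two per-group confidence intervals with a normal-approximation sketch. The authors' exact method is not stated, so the recomputed interval differs slightly from the published 31.1% to 50.7%.

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Back out a standard error from a reported 95% CI half-width."""
    return (hi - lo) / (2 * z)

se_trad = se_from_ci(42.2, 51.3)    # traditional group, mean 46.7%
se_video = se_from_ci(79.6, 96.5)   # video group, mean 88.1%

diff = 88.1 - 46.7                  # 41.4 percentage points
se_diff = math.sqrt(se_trad**2 + se_video**2)
lo, hi = diff - 1.96 * se_diff, diff + 1.96 * se_diff
print(round(diff, 1), round(lo, 1), round(hi, 1))  # 41.4 31.8 51.0
```

The normal-approximation interval (31.8 to 51.0) is close to the reported one; the small discrepancy is consistent with the authors using a t-based or exact method on the raw per-student rates.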

  10. Laser-induced dental caries and plaque diagnosis on patients by sensitive autofluorescence spectroscopy and time-gated video imaging: preliminary studies

    NASA Astrophysics Data System (ADS)

    Koenig, Karsten; Schneckenburger, Herbert

    1994-09-01

The laser-induced in vivo autofluorescence of human teeth was investigated by means of time-resolved/time-gated fluorescence techniques. The aim of these studies was non-contact caries and plaque detection. Carious lesions and dental plaque fluoresce in the red spectral region. This autofluorescence seems to be based on porphyrin-producing bacteria. We report on preliminary studies on patients using a novel method of autofluorescence imaging. A special device was constructed for time-gated video imaging. Nanosecond laser pulses for fluorescence excitation were provided by a frequency-doubled, Q-switched Nd:YAG laser. Autofluorescence was detected in an appropriate nanosecond time window using a video camera with a time-gated image intensifier (minimal time gate: 5 ns). Laser-induced autofluorescence based on porphyrin-producing bacteria seems to be an appropriate tool for detecting dental lesions and for creating 'caries' images and 'dental plaque' images.

  11. Simple video format for mobile applications

    NASA Astrophysics Data System (ADS)

    Smith, John R.; Miao, Zhourong; Li, Chung-Sheng

    2000-04-01

With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers, are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance, and video mail using pervasive computing devices.

  12. High-bandwidth acoustic detection system (HBADS) for stripmap synthetic aperture acoustic imaging of canonical ground targets using airborne sound and a 16 element receiving array

    NASA Astrophysics Data System (ADS)

    Bishop, Steven S.; Moore, Timothy R.; Gugino, Peter; Smith, Brett; Kirkwood, Kathryn P.; Korman, Murray S.

    2018-04-01

High Bandwidth Acoustic Detection System (HBADS) is an emerging active acoustic sensor technology under study by the US Army's Night Vision and Electronic Sensors Directorate. Mounted on a commercial all-terrain vehicle, it uses a single-source pulse chirp while moving and a new array (two rows, each containing eight microphones) mounted horizontally and oriented in a side-scan mode. Experiments are performed with this synthetic aperture air acoustic (SAA) array to image canonical ground targets in clutter or foliage. A commercial audio speaker transmits a linear FM chirp with an effective frequency range of 2 kHz to 15 kHz. The system includes an inertial navigation system using two differential GPS antennas, an inertial measurement unit, and a wheel encoder. A web camera is mounted midway between the two horizontal microphone arrays, and a meteorological unit acquires ambient temperature, pressure, and humidity information. A data acquisition system is central to the system's operation, which is controlled by a laptop computer. Recent experiments include imaging canonical targets located on the ground in a grassy field and similar targets camouflaged by natural vegetation along the side of a road. A recent modification involves implementing SAA stripmap-mode interferometry for computing the reflectance of targets placed along the ground. Typical stripmap SAA parameters are: chirp pulse = 10 or 40 ms, slant range resolution c/(2*BW) = 0.013 m, microphone diameter D = 0.022 m, azimuthal resolution D/2 = 0.011 m, air sound speed c ≈ 340 m/s, and maximum vehicle speed ≈ 2 m/s.
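The quoted resolution figures follow directly from the listed parameters; a quick check, taking the bandwidth as the full 2-15 kHz chirp span:

```python
# Recomputing the strip-map SAA resolution figures quoted in the abstract.
c = 340.0            # sound speed in air [m/s]
BW = 15e3 - 2e3      # chirp bandwidth: 2-15 kHz -> 13 kHz
D = 0.022            # microphone (aperture) diameter [m]

range_res = c / (2 * BW)   # slant-range resolution of a pulse-compressed chirp
azimuth_res = D / 2        # classic strip-map SAR/SAA azimuth resolution limit
print(round(range_res, 3), round(azimuth_res, 3))  # 0.013 0.011
```

Both values match the abstract: range resolution is set by bandwidth alone (hence the value of a wide 13 kHz chirp), while azimuth resolution in strip-map mode is set by the physical aperture, independent of range.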

  13. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
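The frame-differencing idea can be sketched as follows: this is a generic illustration of why it speeds up software-only playback (only blocks that changed are touched), not any specific codec from the paper. Block size and threshold are assumptions.

```python
import numpy as np

def update_changed_blocks(prev, curr, block=8, threshold=10.0):
    """Copy into the output only those blocks that changed beyond a threshold.

    Unchanged blocks keep the previous frame's pixels, so a decoder (or
    blitter) can skip them entirely -- the core frame-differencing saving.
    """
    out = prev.copy()
    changed = 0
    H, W = curr.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            a = prev[y:y+block, x:x+block].astype(np.int32)
            b = curr[y:y+block, x:x+block].astype(np.int32)
            if np.abs(a - b).mean() > threshold:
                out[y:y+block, x:x+block] = curr[y:y+block, x:x+block]
                changed += 1
    return out, changed

prev = np.zeros((240, 320), dtype=np.uint8)   # the 320x240 frame size cited
curr = prev.copy()
curr[100:120, 150:170] = 255                  # one small moving object
out, changed = update_changed_blocks(prev, curr)
print(changed, (240 // 8) * (320 // 8))       # 12 blocks updated out of 1200
```

With a mostly static scene, only a handful of the 1200 blocks need decoding and transfer to video RAM per frame, which is where the playback-rate gain comes from.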

  14. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution, and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance, and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are employed to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for the gallery. We adopt a learning algorithm combining both an affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.

  15. Field-based high-speed imaging of explosive eruptions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Scarlato, P.; Freda, C.; Moroni, M.

    2012-12-01

High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian explosion, with ejection velocities twice as high as previously recorded. Video-derived information on ejection velocity and ejecta mass can be combined with analytical and experimental models to constrain the physical parameters of the gas driving individual pulses. 2) Jet development. The ejection trajectory of pyroclasts can also be used to outline the spatial and temporal development of the eruptive jet and the dynamics of gas-pyroclast coupling within the jet, while high-speed thermal images add information on the temperature evolution in the jet itself as a function of pyroclast size and content. 3) Pyroclast settling. High-speed videos can be used to investigate the aerodynamic settling behavior of pyroclasts from bomb to ash in size, including ash aggregates, providing key parameters such as drag coefficient as a function of Re, and particle density. 4) The generation and propagation of acoustic and shock waves. Phase condensation in volcanic and atmospheric aerosol is triggered by the transit of pressure waves and can be recorded in high-speed videos, allowing the speed and wavelength of the waves to be measured and compared with the corresponding infrasonic signals and theoretical predictions.

  16. Video-imaging assessment of nasal morphology in individuals with complete unilateral cleft lip and palate.

    PubMed

    Russell, K A; Waldman, S D; Lee, J M

    2000-11-01

    The purpose of this study was to develop a video-imaging mathematical method to assess nostril morphology. This retrospective study involved two age-matched groups: 28 subjects with complete unilateral cleft lip and palate (CUCLP) and 19 noncleft controls. Nose casts were reproducibly oriented in a jig such that the casts could be rotated about the coronal axis. Video images of the nostrils were captured and then analyzed for area, perimeter, centroid, principal axis, moments about the major and minor axes (I11, I22), anisometry, bulkiness, lateral offset, internostril angle, and rotational angle. All parameters identified nostril asymmetry in both groups. The results of the analyses using anisometry, I11, and I22 showed that, in both groups, one nostril was rounder and one was more elliptical. This asymmetry, however, differed between the two groups, and the difference was primarily based on the degree of ellipticity of the nostrils. Maximum dimension, perimeter, lateral offset, I11, and I22 were more asymmetric in the cleft group. In the control group, the right nostril was more elliptical and had a greater perimeter, and the left-side nostril had a greater bulkiness (enfolding). The method developed was validated for assessment of nasal morphology in cleft and noncleft samples. Nostril morphology was asymmetric in both groups but more asymmetric in the cleft group than the control group. The dominant influence of the cleft resulted in more elliptical noncleft nostrils and greater nostril shape asymmetry in the cleft group. The validated video-imaging method can now be used to assess the efficacy of treatment on nasal morphology.
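The geometric descriptors listed in the abstract (area, centroid, moments about the principal axes, anisometry) can be sketched from a binary nostril mask. This is an illustrative reconstruction: the paper's exact definitions may differ, and here anisometry is taken as the ratio of the principal second moments.

```python
import numpy as np

def shape_descriptors(mask):
    """Area, centroid, principal second moments (I11 >= I22), and anisometry
    of a binary shape mask, via the central covariance of pixel coordinates."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()
    dx, dy = xs - cx, ys - cy
    cov = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                    [np.mean(dx * dy), np.mean(dy * dy)]])
    # Eigenvalues of the covariance = moments about the principal axes.
    I22, I11 = np.sort(np.linalg.eigvalsh(cov))
    return area, (cx, cy), I11, I22, I11 / I22

# An elongated ellipse (semi-axes 40 and 15 px) should show high anisometry.
yy, xx = np.mgrid[:100, :100]
ellipse = ((xx - 50)**2 / 40**2 + (yy - 50)**2 / 15**2) <= 1.0
area, centroid, I11, I22, aniso = shape_descriptors(ellipse)
print(area, aniso > 4.0)
```

For an ellipse the principal-moment ratio equals (a/b)^2, so a rounder nostril gives a ratio near 1 and a more elliptical one a large ratio, which is exactly the asymmetry contrast the study measures between cleft and noncleft sides.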

  17. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    PubMed

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal that represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG using a microphone array, by which the heart's internal sound sources can also be localized. We propose a modality by which the locations of the active sources in the heart can be investigated during a cardiac cycle. A microphone array with six microphones is employed as the recording setup, placed on the human chest. The Group Delay MUSIC algorithm, a sub-space-based localization method, is then used to estimate the locations of the heart sources in different phases of the PCG. We achieved a 0.14 cm mean error for the sources of a first heart sound (S1) simulator and a 0.21 cm mean error for the sources of a second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustic maps created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the estimated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart's acoustic map presents a new way to characterize the acoustic properties of the cardiovascular system and disorders of its valves and thereby, in the future, could be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
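The sub-space localization idea can be sketched with standard narrowband MUSIC (not the Group Delay variant the paper uses) on a simulated six-microphone uniform linear array; all geometry and signal parameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

M, d, c, f = 6, 0.05, 343.0, 1000.0   # mics, spacing [m], sound speed, tone [Hz]
theta_true = 25.0                     # true source bearing [deg]

def steering(theta_deg):
    """Far-field steering vector of the uniform linear array."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * f * d * m * np.sin(np.radians(theta_deg)) / c)

# Simulate N snapshots: one narrowband source plus sensor noise.
N = 500
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(theta_true), s) + noise

# Sample covariance -> eigendecomposition -> noise subspace (1 source assumed).
R = X @ X.conj().T / N
eigvals, eigvecs = np.linalg.eigh(R)   # ascending eigenvalues
En = eigvecs[:, :M - 1]                # columns spanning the noise subspace

# MUSIC pseudospectrum peaks where the steering vector is orthogonal to En.
thetas = np.arange(-90.0, 90.5, 0.5)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t))**2 for t in thetas])
print(thetas[np.argmax(P)])  # ≈ 25
```

The paper's Group Delay MUSIC replaces the magnitude pseudospectrum with a phase (group-delay) based one to sharpen closely spaced peaks, but the covariance/eigen-subspace machinery is the same.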

  18. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  19. Investigation into the Effect of Acoustic Radiation Force and Acoustic Streaming on Particle Patterning in Acoustic Standing Wave Fields

    PubMed Central

    Yang, Yanye; Ni, Zhengyang; Guo, Xiasheng; Luo, Linjiao; Tu, Juan; Zhang, Dong

    2017-01-01

Acoustic standing waves have been widely used in trapping, patterning, and manipulating particles, but one barrier remains: a lack of understanding of the force conditions on particles, which mainly comprise the acoustic radiation force (ARF) and acoustic streaming (AS). In this paper, the force conditions on micrometer-size polystyrene microspheres in acoustic standing wave fields were investigated. The COMSOL® Multiphysics particle tracing module was used to numerically simulate the force conditions on various particles as a function of time. The velocity of particle movement was experimentally measured using particle imaging velocimetry (PIV). Through experiments and numerical simulation, the roles of ARF and AS in trapping and patterning were analyzed. It is shown that ARF is dominant in trapping and patterning large particles, while the impact of AS increases rapidly with decreasing particle size. For medium-size particles, combining ARF and AS can produce patterns different from those obtained with ARF alone. Findings of the present study will aid the design of acoustic-driven microfluidic devices to increase the diversity of particle patterning. PMID:28753955
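The size dependence reported above has a simple scaling explanation, which can be sketched numerically. This is an order-of-magnitude illustration, not the paper's COMSOL model: ARF on a small particle scales with its volume (~r³), while the Stokes drag exerted by the streaming flow scales linearly with radius (~r), so their ratio grows as r². All prefactors below are assumed values.

```python
import numpy as np

mu = 1.0e-3          # water viscosity [Pa*s]
u_stream = 1.0e-4    # assumed streaming velocity near the particle [m/s]
k_arf = 1.0e7        # assumed lumped ARF prefactor [N/m^3]

def force_ratio(r):
    """Ratio of acoustic radiation force to streaming (Stokes) drag; ~ r^2."""
    f_arf = k_arf * r**3                      # volume scaling (Gor'kov-type)
    f_drag = 6.0 * np.pi * mu * r * u_stream  # Stokes drag in streaming flow
    return f_arf / f_drag

radii = np.array([1e-6, 5e-6, 20e-6])   # 1, 5, 20 um particles
ratios = force_ratio(radii)
print(np.all(np.diff(ratios) > 0))  # True: ARF gains dominance with size
```

Whatever the assumed prefactors, the r² growth of the ratio reproduces the paper's qualitative finding: ARF dominates patterning for large particles, streaming for small ones, with a crossover at intermediate sizes.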

  20. An automated form of video image analysis applied to classification of movement disorders.

    PubMed

    Chang, R; Guan, L; Burne, J A

Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images, without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles, and to derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance, we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, yielding 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrates the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis.
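The joint-angle parameters derived from the fitted skeleton reduce to elementary geometry; a minimal sketch (the landmark names are hypothetical, not the system's actual output):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by landmarks a-b-c,
    e.g. hip-knee-ankle for a knee flexion angle."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Right angle at the "knee" for perpendicular segments:
print(joint_angle((0, 10), (0, 0), (10, 0)))  # 90.0
```

A vector of such angles (plus swing distances) per video frame is the kind of low-dimensional posture descriptor the abstract feeds to its neural network classifier.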

  1. Orbital motions of bubbles in an acoustic field

    NASA Astrophysics Data System (ADS)

    Shirota, Minori; Yamashita, Ko; Inamura, Takao

    2012-09-01

This experimental study aims to clarify the mechanism of the orbital motion of two oscillating bubbles in an acoustic field. The trajectory of the orbital motion on the wall of a spherical levitator was observed using a high-speed video camera. Because of the good repeatability of the bubbles' volume oscillation, we were also able to observe the radial motion, driven at 24 kHz, using a stroboscopic-like imaging technique. The orbital motions of bubbles ranging from 0.13 to 0.18 mm were examined at different forcing amplitudes and in different viscous oils. As a result, we found that pairs of bubbles revolve along an elliptic orbit around their common center of mass. We also found that the two bubbles perform anti-phase radial oscillation. Although this radial oscillation should result in a repulsive secondary Bjerknes force, the bubbles kept a constant separation distance of about 1 mm, which indicates the existence of a centripetal primary Bjerknes force.

  2. Optical Verification of Microbubble Response to Acoustic Radiation Force in Large Vessels With In Vivo Results.

    PubMed

    Wang, Shiying; Wang, Claudia Y; Unnikrishnan, Sunil; Klibanov, Alexander L; Hossack, John A; Mauldin, F William

    2015-11-01

    The objective of this study was to optically verify the dynamic behaviors of adherent microbubbles in large blood vessel environments in response to a new ultrasound technique using modulated acoustic radiation force. Polydimethylsiloxane (PDMS) flow channels coated with streptavidin were used in targeted groups to mimic large blood vessels. The custom-modulated acoustic radiation force beam sequence was programmed on a Verasonics research scanner. In vitro experiments were performed by injecting a biotinylated lipid-perfluorobutane microbubble dispersion through flow channels. The dynamic response of adherent microbubbles was detected acoustically and simultaneously visualized using a video camera connected to a microscope. In vivo verification was performed in a large abdominal blood vessel of a murine model for inflammation with injection of biotinylated microbubbles conjugated with P-selectin antibody. Aggregates of adherent microbubbles were observed optically under the influence of acoustic radiation force. Large microbubble aggregates were observed solely in control groups without targeted adhesion. Additionally, the dispersion of microbubble aggregates was demonstrated to lead to a transient acoustic signal enhancement in control groups (a new phenomenon we refer to as "control peak"). In agreement with in vitro results, the control peak phenomenon was observed in vivo in a murine model. This study provides the first optical observation of microbubble-binding dynamics in large blood vessel environments with application of a modulated acoustic radiation force beam sequence. With targeted adhesion, secondary radiation forces were unable to produce large aggregates of adherent microbubbles. Additionally, the new phenomenon called control peak was observed both in vitro and in vivo in a murine model for the first time.
The findings in this study provide us with a better understanding of microbubble behaviors in large blood vessel environments with application

  3. Optical Verification of Microbubble Response to Acoustic Radiation Force in Large Vessels with In Vivo Results

    PubMed Central

    Wang, Shiying; Wang, Claudia Y.; Unnikrishnan, Sunil; Klibanov, Alexander L.; Hossack, John A.; Mauldin, F. William

    2015-01-01

    Objectives To optically verify the dynamic behaviors of adherent microbubbles in large blood vessel environments in response to a new ultrasound technique using modulated acoustic radiation force. Materials and Methods Polydimethylsiloxane (PDMS) flow channels coated with streptavidin were used in targeted groups to mimic large blood vessels. The custom modulated acoustic radiation force beam sequence was programmed on a Verasonics research scanner. In vitro experiments were performed by injecting a biotinylated lipid-perfluorobutane microbubble dispersion through flow channels. The dynamic response of adherent microbubbles was detected acoustically and simultaneously visualized using a video camera connected to a microscope. In vivo verification was performed in a large abdominal blood vessel of a murine model for inflammation with injection of biotinylated microbubbles conjugated with P-selectin antibody. Results Aggregates of adherent microbubbles were observed optically under the influence of acoustic radiation force. Large microbubble aggregates were observed solely in control groups without targeted adhesion. Additionally, the dispersion of microbubble aggregates was demonstrated to lead to a transient acoustic signal enhancement in control groups (a new phenomenon we refer to as “control peak”). In agreement with in vitro results, the “control peak” phenomenon was observed in vivo in a murine model. Conclusions This study provides the first optical observation of microbubble binding dynamics in large blood vessel environments with application of a modulated acoustic radiation force beam sequence. With targeted adhesion, secondary radiation forces were unable to produce large aggregates of adherent microbubbles. Additionally, the new phenomenon called “control peak” was observed both in vitro and in vivo in a murine model for the first time. The findings in this study provide us with a better understanding of microbubble behaviors in large blood

  4. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras and DSLRs, as well as smartphones and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos in printed products, by generating QR codes and representative pictures out of the video stream in software, was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from a video in order to represent it, the positions in the book, and the different design strategies compared to regular books.

  5. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same as the one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is ongoing and, hopefully, it will be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  6. Aerospace video imaging systems for rangeland management

    NASA Technical Reports Server (NTRS)

    Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.

    1990-01-01

    This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.

  7. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of stitching video automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm blends the images. Simulation results demonstrate the efficiency of our method.
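
    The pipeline above (SURF matches between views, then a homography over the overlap) can be sketched with the direct linear transform (DLT), one standard way to estimate a homography from matched keypoints; it is not necessarily the authors' exact implementation, and the correspondences in any example are synthetic stand-ins for SURF output.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT algorithm.

    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space direction of A: the right singular
    # vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # remove the scale ambiguity

def warp_points(H, pts):
    """Apply homography H to (N, 2) points (homogeneous divide included)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

    In practice a robust loop such as RANSAC would wrap this estimator to reject SURF mismatches before the overlap is blended.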

  8. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of stitching video automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm blends the images. Simulation results demonstrate the efficiency of our method.

  9. Electronic magnification and perceived contrast of video

    PubMed Central

    Haun, Andrew; Woods, Russell L; Peli, Eli

    2012-01-01

    Electronic magnification of an image results in a decrease in its perceived contrast. The decrease in perceived contrast could be due to a perceived blur or to limited sampling of the range of contrasts in the original image. We measured the effect on perceived contrast of magnification in two contexts: either a small video was enlarged to fill a larger area, or a portion of a larger video was enlarged to fill the same area as the original. Subjects attenuated the source video contrast to match the perceived contrast of the magnified videos, with the effect increasing with magnification and decreasing with viewing distance. These effects are consistent with expectations based on both the contrast statistics of natural images and the contrast sensitivity of the human visual system. We demonstrate that local regions within videos usually have lower physical contrast than the whole, and that this difference accounts for a minor part of the perceived differences. Instead, visibility of ‘missing content’ (blur) in a video is misinterpreted as a decrease in contrast. We detail how the effects of magnification on perceived contrast can be measured while avoiding confounding factors. PMID:23483111

  10. Contrast Enhanced Superharmonic Imaging for Acoustic Angiography Using Reduced Form-Factor Lateral Mode Transmitters for Intravascular and Intracavity Applications.

    PubMed

    Wang, Zhuochen; Heath Martin, K; Huang, Wenbin; Dayton, Paul A; Jiang, Xiaoning

    2017-02-01

    Techniques to image the microvasculature may play an important role in imaging tumor-related angiogenesis and the vasa vasorum associated with vulnerable atherosclerotic plaques. However, the microvasculature associated with these pathologies is difficult to detect using traditional B-mode ultrasound or even harmonic imaging, due to small vessel size and poor differentiation from surrounding tissue. Acoustic angiography, a microvascular imaging technique that utilizes superharmonic imaging (detection of higher-order harmonics of the microbubble response), can yield a much higher contrast-to-tissue ratio than second harmonic imaging methods. In this paper, two dual-frequency transducers using lateral mode transmitters were developed for superharmonic detection and acoustic angiography imaging in intracavity applications. A single-element dual-frequency intravascular ultrasound transducer was developed for concept validation, which achieved larger signal amplitude and better contrast-to-noise ratio (CNR) and pulse length compared to the previous work. A dual-frequency [Pb(Mg1/3Nb2/3)O3]-x[PbTiO3] array transducer was then developed for superharmonic imaging with dynamic focusing. The axial and lateral sizes of the microbubbles in a 200-[Formula: see text] tube were measured to be 269 and [Formula: see text], respectively. The maximum CNR was calculated to be 22 dB. These results show that superharmonic imaging with a low-frequency lateral mode transmitter is a feasible alternative to thickness mode transmitters when the final transducer size requirements dictate design choices.

  11. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are significantly long and the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thus maximize the encoder performance. Experiments are performed on both simulated and real-world video sequences.

  12. Acoustic property reconstruction of a pygmy sperm whale (Kogia breviceps) forehead based on computed tomography imaging.

    PubMed

    Song, Zhongchang; Xu, Xiao; Dong, Jianchen; Xing, Luru; Zhang, Meng; Liu, Xuecheng; Zhang, Yu; Li, Songhai; Berggren, Per

    2015-11-01

    Computed tomography (CT) imaging and experimental sound measurements were used to reconstruct the acoustic properties (density, velocity, and impedance) of the forehead tissues of a deceased pygmy sperm whale (Kogia breviceps). The forehead was segmented along the body axis and sectioned into cross-section slices, which were further cut into sample pieces for measurements. Hounsfield units (HUs) of the corresponding measured pieces were obtained from CT scans, and regression analyses were conducted to investigate the linear relationships between the tissues' HUs and velocity, and between HUs and density. The distributions of the acoustic properties of the head at axial, coronal, and sagittal cross sections were reconstructed, revealing that the nasal passage system was asymmetric and that the cornucopia-shaped spermaceti organ was in the right nasal passage, surrounded by tissues and air sacs. A distinct dense theca was discovered in the posterior-dorsal area of the melon, which was characterized by low velocity in the inner core and high velocity in the outer region. Statistical analyses revealed significant differences in density, velocity, and acoustic impedance between all four structures: melon, spermaceti organ, muscle, and connective tissue (p < 0.001). The obtained acoustic properties of the forehead tissues provide important information for understanding the species' bioacoustic characteristics.
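
    The HU-to-velocity and HU-to-density relationships above are ordinary least-squares linear fits. A minimal sketch (any coefficients appearing in example data are illustrative, not the paper's measured values):

```python
import numpy as np

def fit_hu_to_property(hu, prop):
    """Least-squares linear fit prop = a * HU + b (e.g. sound velocity or
    density against Hounsfield units), returning (a, b, r_squared)."""
    hu = np.asarray(hu, dtype=float)
    prop = np.asarray(prop, dtype=float)
    a, b = np.polyfit(hu, prop, 1)       # slope and intercept
    pred = a * hu + b
    ss_res = np.sum((prop - pred) ** 2)  # residual sum of squares
    ss_tot = np.sum((prop - np.mean(prop)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot   # coefficient of determination
```

    Given measured (HU, velocity) pairs per tissue piece, the returned slope and intercept let velocity and density maps be painted voxel-by-voxel onto the CT volume, which is how the cross-sectional property distributions are reconstructed.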

  13. Video-rate hyperspectral two-photon fluorescence microscopy for in vivo imaging

    NASA Astrophysics Data System (ADS)

    Deng, Fengyuan; Ding, Changqin; Martin, Jerald C.; Scarborough, Nicole M.; Song, Zhengtian; Eakins, Gregory S.; Simpson, Garth J.

    2018-02-01

    Fluorescence hyperspectral imaging is a powerful tool for in vivo biological studies. The ability to recover the full spectra of the fluorophores allows accurate classification of different structures and study of dynamic behaviors during various biological processes. However, most existing methods require significant instrument modifications and/or suffer from image acquisition rates too low for compatibility with in vivo imaging. In the present work, a fast (up to 18 frames per second) hyperspectral two-photon fluorescence microscopy approach was demonstrated. Utilizing the beam-scanning hardware inherent in conventional multi-photon microscopy, the angle dependence of the generated fluorescence signal as a function of the beam's position allowed the system to probe a different portion of the spectrum at every single scanning line. An iterative algorithm for classifying the fluorophores recovered spectra with up to 2,400 channels using a custom high-speed 16-channel photomultiplier tube array. Several dynamic samples, including live fluorescently labeled C. elegans, were imaged at video rate. Fluorescence spectra recovered using no a priori spectral information agreed well with those obtained by fluorimetry. This system requires minimal changes to most existing beam-scanning multi-photon fluorescence microscopes, already accessible in many research facilities.

  14. Airborne Acoustic Perception by a Jumping Spider.

    PubMed

    Shamble, Paul S; Menda, Gil; Golden, James R; Nitzany, Eyal I; Walden, Katherine; Beatus, Tsevi; Elias, Damian O; Cohen, Itai; Miles, Ronald N; Hoy, Ronald R

    2016-11-07

    Jumping spiders (Salticidae) are famous for their visually driven behaviors [1]. Here, however, we present behavioral and neurophysiological evidence that these animals also perceive and respond to airborne acoustic stimuli, even when the distance between the animal and the sound source is relatively large (∼3 m) and with stimulus amplitudes at the position of the spider of ∼65 dB sound pressure level (SPL). Behavioral experiments with the jumping spider Phidippus audax reveal that these animals respond to low-frequency sounds (80 Hz; 65 dB SPL) by freezing, a common anti-predatory behavior characteristic of an acoustic startle response. Neurophysiological recordings from auditory-sensitive neural units in the brains of these jumping spiders showed responses to low-frequency tones (80 Hz at ∼65 dB SPL); these recordings also represent the first record of acoustically responsive neural units in the jumping spider brain. Responses persisted even when the distances between spider and stimulus source exceeded 3 m and under anechoic conditions. Thus, these spiders appear able to detect airborne sound at distances in the acoustic far-field region, beyond the near-field range often thought to bound acoustic perception in arthropods that lack tympanic ears (e.g., spiders) [2]. Furthermore, direct mechanical stimulation of hairs on the patella of the foreleg was sufficient to generate responses in neural units that also responded to airborne acoustic stimuli, evidence that these hairs likely play a role in the detection of acoustic cues. We suggest that these auditory responses enable the detection of predators and facilitate an acoustic startle response. VIDEO ABSTRACT. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation, in real time, of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
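
    A digital analogue of the optical correlator above: the offset of the cross-correlation peak between corresponding rows of the stereo pair gives the disparity, which is a function of target range. This 1-D numpy sketch is a simplified stand-in for the patent's two-dimensional optical cross-correlation, not the described hardware.

```python
import numpy as np

def disparity_by_correlation(left_row, right_row):
    """Estimate horizontal disparity between corresponding rows of a stereo
    pair by locating the peak of their 1-D cross-correlation."""
    left = left_row - np.mean(left_row)    # remove DC so the peak is sharp
    right = right_row - np.mean(right_row)
    corr = np.correlate(left, right, mode="full")
    # Offset of the peak from the zero-lag position is the disparity:
    # left[n] best matches right[n - disparity].
    return int(np.argmax(corr)) - (len(right) - 1)
```

    With a calibrated baseline and focal length, the recovered disparity converts to range by triangulation, mirroring how the optical correlator's peak position encodes target range.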

  16. Staff acceptance of video monitoring for coordination: a video system to support perioperative situation awareness.

    PubMed

    Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard

    2009-08-01

    To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video in an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% rating no concern and 42% little concern. The top two reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted in overall acceptance of the monitoring video, with residual levels of concern. OR nurses may express staff privacy concerns in the form of concerns over patient privacy. This study provides suggestions for technological and implementation strategies for video monitoring for coordination use in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a sense of loss of autonomy.

  17. Intraoperative video-rate hemodynamic response assessment in human cortex using snapshot hyperspectral optical imaging

    PubMed Central

    Pichette, Julien; Laurence, Audrey; Angulo, Leticia; Lesage, Frederic; Bouthillier, Alain; Nguyen, Dang Khoa; Leblond, Frederic

    2016-01-01

    Using light, we are able to visualize the hemodynamic behavior of the brain to better understand neurovascular coupling and cerebral metabolism. In vivo optical imaging of tissue using endogenous chromophores necessitates spectroscopic detection to ensure molecular specificity, as well as sufficiently high imaging speed and signal-to-noise ratio, to allow dynamic physiological changes to be captured, isolated, and used as surrogates of pathophysiological processes. An optical imaging system is introduced using a 16-band on-chip hyperspectral camera. Using this system, we show that up to three dyes can be imaged and quantified in a tissue phantom at video rate through the optics of a surgical microscope. In vivo human patient data are presented demonstrating that brain hemodynamic response can be measured intraoperatively with molecular specificity at high speed. PMID:27752519

  18. Video Feedforward for Reading

    ERIC Educational Resources Information Center

    Dowrick, Peter W.; Kim-Rupnow, Weol Soon; Power, Thomas J.

    2006-01-01

    Video feedforward can create images of positive futures, as has been shown by researchers using self-modeling methods to teach new skills with carefully planned and edited videos that show the future capability of the individual. As a supplement to tutoring provided by community members, we extended these practices to young children struggling to…

  19. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    PubMed

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
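
    As a baseline for the multidimensional extension described above, the classical 1-D Viterbi decoding that the paper generalizes can be sketched as follows (a minimal log-domain implementation; the model parameters in any example are illustrative, not from the paper):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Classical 1-D Viterbi decoding: most likely hidden state path for an
    observation sequence `obs`, given initial probabilities pi (S,),
    transition matrix A (S, S), and emission probabilities B (S, O)."""
    S, T = len(pi), len(obs)
    delta = np.zeros((T, S))            # best log-prob of a path ending in each state
    psi = np.zeros((T, S), dtype=int)   # backpointers to the best predecessor
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # (from-state, to-state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + np.log(B[:, obs[t]])
    # Backtrack from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

    The paper's contribution is extending this (together with EM and general forward-backward) to multidimensional causal HMMs whose state transitions depend on all causal neighbors, which this 1-D sketch does not attempt.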

  20. Video change detection for fixed wing UAVs

    NASA Astrophysics Data System (ADS)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for image-based change detection designed for videos acquired by fixed-wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed-wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed-wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system, comprising a differential GPS and an autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to the real video data acquired by the advanced COTS fixed-wing UAV and to synthetic data.
For the
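
    The core of the change detection step described above, once "before" and "after" frames are registered, is pixelwise differencing with a threshold. This sketch shows only that core; the full process chain additionally handles registration, the video database, and visualization, and the threshold value here is an assumed illustrative choice:

```python
import numpy as np

def change_mask(before, after, threshold=30):
    """Pixelwise change detection between registered 'before' and 'after'
    grayscale frames: absolute difference followed by a threshold.
    Returns a boolean mask marking changed pixels."""
    # Widen to a signed type so the uint8 subtraction cannot wrap around.
    diff = np.abs(before.astype(np.int32) - after.astype(np.int32))
    return diff > threshold
```

    The small perspective change the authors emphasize matters precisely because this differencing step only works when "before" and "after" pixels correspond to the same ground location.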

  1. NFL Films audio, video, and film production facilities

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

    The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound critical technical space is comprised of an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multi channel surround sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound critical environment will be discussed.

  2. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  3. What Does Impulse Acoustic Microscopy See inside Nanocomposites?

    NASA Astrophysics Data System (ADS)

    Levin, V. M.; Petronyuk, Y. S.; Morokov, E. S.; Celzard, A.; Bellucci, S.; Kuzhir, P. P.

    The paper presents results of studying bulk microstructure in carbon nanocomposites by the impulse acoustic microscopy technique. Nanocomposite materials are in the focus of interest because of their outstanding properties at minimal nanofiller content. Large surface area and high superficial activity cause strong interaction between nanoparticles, which can result in the formation of fractal conglomerates. This paper presents results of the first direct observation of nanoparticle conglomerates inside the bulk of epoxy-carbon nanocomposites. Diverse types of carbon nanofiller have been under investigation. The impulse acoustic microscope SIAM-1 (Acoustic Microscopy Lab, IBCP RAS) has been employed for 3D imaging of the bulk microstructure and measuring elastic properties of the nanocomposite specimens. The 50-200 MHz range allows observing microstructure inside the entire specimen bulk. Acoustic images are obtained in the ultramicroscopic regime; they are formed by Rayleigh-type scattered radiation. It has been found that high-resolution acoustic vision (impulse acoustic microscopy) is an efficient technique to observe mesostructure formed by fractal clusters inside nanocomposites. The clusterization takes its utmost form in nanocomposites with graphite nanoplatelets as nanofiller: the nanoparticles agglomerate into micron-sized conglomerates distributed randomly over the material. Mesostructure in nanocomposites filled with carbon nanotubes is an alternation of regions with diverse density of nanotube packing. Regions with alternating density of CNT packing are clearly seen in acoustic images as neighboring pixels of various brightness.

  4. Comparison of ultrasound B-mode, strain imaging, acoustic radiation force impulse displacement and shear wave velocity imaging using real time clinical breast images

    NASA Astrophysics Data System (ADS)

    Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam

    2016-04-01

    It has been observed that many pathological processes increase the elastic modulus of soft tissue compared to normal tissue. In order to image tissue stiffness using ultrasound, a mechanical compression is applied to the tissues of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression (strain) imaging, which is based on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which is based on the force generated by focused ultrasound. When ultrasound is focused on tissue, a shear wave is generated in the lateral direction, and the shear wave velocity is proportional to the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging in clinical cancer diagnostics using real-time patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging and strain imaging, the observed image contrast and Contrast to Noise Ratio were calculated for benign and malignant cancers. Observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, Student's unpaired t-test was conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to strain imaging and B-mode for representing benign lesions.
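
The contrast and contrast-to-noise computations mentioned above can be illustrated with standard textbook definitions. The abstract does not give the paper's exact formulas, so the dB contrast and CNR forms below, and the synthetic ROI statistics, are assumptions for illustration:

```python
import numpy as np

def contrast_db(roi_lesion, roi_background):
    """Image contrast in dB between two regions of interest
    (a common definition; the paper's exact form may differ)."""
    return 20.0 * np.log10(np.mean(roi_lesion) / np.mean(roi_background))

def cnr(roi_lesion, roi_background):
    """Contrast-to-noise ratio between two ROIs."""
    mu_l, mu_b = np.mean(roi_lesion), np.mean(roi_background)
    return abs(mu_l - mu_b) / np.sqrt(np.var(roi_lesion) + np.var(roi_background))

# Synthetic ROIs: low strain inside a stiff lesion vs. softer background.
rng = np.random.default_rng(1)
lesion = rng.normal(0.2, 0.02, 1000)
background = rng.normal(1.0, 0.10, 1000)

c = contrast_db(lesion, background)   # negative: lesion darker than background
r = cnr(lesion, background)
```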

  5. Magnetoactive Acoustic Metamaterials.

    PubMed

    Yu, Kunhao; Fang, Nicholas X; Huang, Guoliang; Wang, Qiming

    2018-04-11

    Acoustic metamaterials with negative constitutive parameters (modulus and/or mass density) have shown great potential in diverse applications ranging from sonic cloaking, abnormal refraction and superlensing, to noise canceling. In conventional acoustic metamaterials, the negative constitutive parameters are engineered via tailored structures with fixed geometries; therefore, the relationships between constitutive parameters and acoustic frequencies are typically fixed to form a 2D phase space once the structures are fabricated. Here, by means of a model system of magnetoactive lattice structures, stimuli-responsive acoustic metamaterials are demonstrated to be able to extend the 2D phase space to 3D through rapidly and repeatedly switching signs of constitutive parameters with remote magnetic fields. It is shown for the first time that effective modulus can be reversibly switched between positive and negative within controlled frequency regimes through lattice buckling modulated by theoretically predicted magnetic fields. The magnetically triggered negative-modulus and cavity-induced negative density are integrated to achieve flexible switching between single-negative and double-negative. This strategy opens promising avenues for remote, rapid, and reversible modulation of acoustic transportation, refraction, imaging, and focusing in subwavelength regimes. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey.

    PubMed

    Dennis, T; Start, R D; Cross, S S

    2005-03-01

    To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings.

  7. The use of Acoustic Radiation Force decorrelation-weighted pulse inversion (ADW-PI) for enhanced ultrasound contrast imaging

    PubMed Central

    Herbst, Elizabeth; Unnikrishnan, Sunil; Wang, Shiying; Klibanov, Alexander L.; Hossack, John A.; Mauldin, F. William

    2016-01-01

    Objectives The use of ultrasound imaging for cancer diagnosis and screening can be enhanced with the use of molecularly targeted microbubbles. Nonlinear imaging strategies such as pulse inversion (PI) and “contrast pulse sequences” (CPS) can be used to differentiate microbubble signal, but often fail to suppress highly echogenic tissue interfaces. This failure results in false positive detection and potential misdiagnosis. In this study, a novel Acoustic Radiation Force (ARF) based approach was developed for superior microbubble signal detection. The feasibility of this technique, termed ARF-decorrelation-weighted PI (ADW-PI), was demonstrated in vivo using a subcutaneous mouse tumor model. Materials and Methods Tumors were implanted in the hindlimb of C57BL/6 mice by subcutaneous injection of MC38 cells. Lipid-shelled microbubbles were conjugated to anti-VEGFR2 antibody and administered via bolus injection. An image sequence using ARF pulses to generate microbubble motion was combined with PI imaging on a Verasonics Vantage programmable scanner. ADW-PI images were generated by combining PI images with inter-frame signal decorrelation data. For comparison, CPS images of the same mouse tumor were acquired using a Siemens Sequoia clinical scanner. Results Microbubble-bound regions in the tumor interior exhibited significantly higher signal decorrelation than static tissue (n = 9, p < 0.001). The application of ARF significantly increased microbubble signal decorrelation (n = 9, p < 0.01). Using these decorrelation measurements, ADW-PI imaging demonstrated significantly improved microbubble contrast-to-tissue ratio (CTR) when compared to corresponding CPS or PI images (n = 9, p < 0.001). CTR improved with ADW-PI by approximately 3 dB compared to PI images and 2 dB compared to CPS images. Conclusions Acoustic radiation force can be used to generate adherent microbubble signal decorrelation without microbubble bursting. When combined with pulse inversion
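
A minimal sketch of the decorrelation-weighting idea is given below, assuming a simple blockwise (1 − normalized correlation) measure between consecutive frames and synthetic data; the actual ADW-PI processing chain on the Verasonics scanner is considerably more involved:

```python
import numpy as np

def block_decorrelation(f0, f1, b=8):
    """Per-block (1 - normalized correlation) between consecutive frames.
    High values flag ARF-displaced microbubbles; static tissue scores ~0."""
    H, W = f0.shape
    w = np.zeros((H // b, W // b))
    for i in range(H // b):
        for j in range(W // b):
            a = f0[i*b:(i+1)*b, j*b:(j+1)*b].ravel()
            c = f1[i*b:(i+1)*b, j*b:(j+1)*b].ravel()
            a = a - a.mean()
            c = c - c.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(c)
            rho = (a @ c) / denom if denom > 0 else 1.0
            w[i, j] = 1.0 - rho
    return w

# Synthetic pair: static speckle everywhere, a decorrelated "bubble"
# region in one corner standing in for ARF-induced microbubble motion.
rng = np.random.default_rng(2)
f0 = rng.random((64, 64))
f1 = f0.copy()
f1[:16, :16] = rng.random((16, 16))

w = block_decorrelation(f0, f1)
pi_image = np.ones((8, 8))        # stand-in blockwise pulse-inversion image
adw_pi = pi_image * w             # decorrelation-weighted PI
```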

  8. Experimental investigation of conical bubble structure and acoustic flow structure in ultrasonic field.

    PubMed

    Ma, Xiaojian; Huang, Biao; Wang, Guoyu; Zhang, Mindi

    2017-01-01

    The objective of this paper is to investigate the transient conical bubble structure (CBS) and acoustic flow structure in an ultrasonic field. In the experiment, high-speed video and particle image velocimetry (PIV) techniques are used to measure the acoustic cavitation patterns, as well as the flow velocity and vorticity fields. Results are presented for high-power ultrasound with a frequency of 18 kHz and input powers ranging from 50 W to 250 W. The results show that the input power significantly affects the structure of the CBS: with increasing input power, the cavity region of the CBS and the velocity of the bubbles increase evidently. Two types of transient bubble motion on the radiating surface could be classified, namely the formation, aggregation and coalescence of cavitation bubbles, and the aggregation, shrinkage, expansion and collapse of bubble clusters. Furthermore, the turbulent boundary layer near the sonotrode region is found to be much thicker, and the turbulent intensities much higher, at relatively higher input power. The vorticity distribution is prominently affected by the spatial position and input power. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Performance characterization of image and video analysis systems at Siemens Corporate Research

    NASA Astrophysics Data System (ADS)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using image analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation, and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms into faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis, and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  10. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  11. Intense acoustic bursts as a signal-enhancement mechanism in ultrasound-modulated optical tomography.

    PubMed

    Kim, Chulhong; Zemp, Roger J; Wang, Lihong V

    2006-08-15

    Biophotonic imaging with ultrasound-modulated optical tomography (UOT) promises ultrasonically resolved imaging in biological tissues. A key challenge in this imaging technique is a low signal-to-noise ratio (SNR). We show significant UOT signal enhancement by using intense time-gated acoustic bursts. A CCD camera captured the speckle pattern from a laser-illuminated tissue phantom. Differences in speckle contrast were observed when ultrasonic bursts were applied, compared with when no ultrasound was applied. When CCD triggering was synchronized with burst initiation, acoustic-radiation-force-induced displacements were detected. To avoid mechanical contrast in UOT images, the CCD camera acquisition was delayed several milliseconds until transient effects of acoustic radiation force attenuated to a satisfactory level. The SNR of our system was sufficiently high to provide an image pixel per acoustic burst without signal averaging. Because of the substantially improved SNR, the use of intense acoustic bursts is a promising signal enhancement strategy for UOT.
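
The speckle-contrast readout described above can be sketched with the common definition C = σ/μ. The toy model below, in which ultrasound modulation simply averages two independent speckle patterns within the CCD exposure, is an assumption for illustration, not the physical mechanism:

```python
import numpy as np

def speckle_contrast(img):
    """Global speckle contrast C = sigma / mean of a CCD speckle image."""
    return img.std() / img.mean()

# Fully developed speckle has exponential intensity statistics (C ~ 1).
rng = np.random.default_rng(3)
speckle_off = rng.exponential(1.0, (256, 256))     # no ultrasound

# Toy model: modulation blurs the speckle within the exposure,
# here approximated by averaging two independent patterns.
speckle_on = 0.5 * (speckle_off + rng.exponential(1.0, (256, 256)))

c_off = speckle_contrast(speckle_off)
c_on = speckle_contrast(speckle_on)    # lower: this drop is the UOT signal
```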

  12. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
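
A toy version of such a DCT-domain metric is sketched below. The 8x8 orthonormal DCT is standard, but the frequency weights and Minkowski exponent are illustrative placeholders, not the calibrated visual sensitivities of the actual metric:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct_quality_distance(ref, test, beta=4.0):
    """Toy DCT-domain distance between two frames: blockwise DCT of the
    error, frequency-weighted coefficients, Minkowski pooling."""
    C = dct_matrix(8)
    k = np.arange(8)
    # crude low-pass sensitivity weight (placeholder, not a real CSF)
    w = 1.0 / (1.0 + k[:, None] + k[None, :])
    errs = []
    for i in range(0, ref.shape[0], 8):
        for j in range(0, ref.shape[1], 8):
            d = ref[i:i+8, j:j+8] - test[i:i+8, j:j+8]
            D = C @ d @ C.T                    # 2-D DCT of the error block
            errs.append(np.abs(w * D) ** beta)
    return np.sum(errs) ** (1.0 / beta)        # Minkowski pooling

rng = np.random.default_rng(4)
frame = rng.random((64, 64))
noisy = frame + 0.05 * rng.standard_normal((64, 64))
```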

  13. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    NASA Astrophysics Data System (ADS)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.

  14. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done using the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a Cyclone V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of Cyclone V 5CEFA7 FPGA resources on average.

  15. Experimental study on acoustic subwavelength imaging of holey-structured metamaterials by resonant tunneling.

    PubMed

    Su, Haijing; Zhou, Xiaoming; Xu, Xianchen; Hu, Gengkai

    2014-04-01

    A holey-structured metamaterial is proposed for near-field acoustic imaging beyond the diffraction limit. The structured lens consists of a rigid slab perforated with an array of cylindrical holes with periodically modulated diameters. Based on the effective medium approach, the structured lens is characterized as multilayered metamaterials with anisotropic dynamic mass, and an analytic model is proposed to evaluate the transmission properties of incident evanescent waves. The condition is derived for resonant tunneling, by which evanescent waves can completely transmit through the structured lens without decaying. As an advantage of the proposed lens, the imaging frequency can be modified by the diameter modulation of the internal holes without changing the lens thickness, in contrast to lenses based on the Fabry-Pérot resonance mechanism. In the experiment, the lens is assembled from aluminum plates drilled with cylindrical holes. The imaging experiment demonstrates that the designed lens can clearly distinguish two sources separated by a distance below the diffraction limit at the tunneling frequency.

  16. Video transmission on ATM networks. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
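
A dual leaky bucket policer can be sketched as two cascaded leaky buckets: a shallow bucket enforcing a peak rate and a deeper bucket enforcing a sustainable rate, with a cell conforming only if it conforms to both. The rates, depths, and arrival pattern below are illustrative assumptions, not the thesis's parameters:

```python
class LeakyBucket:
    """Continuous-state leaky bucket: the level drains at a fixed rate and
    each arriving cell adds one unit; an arrival that would overflow the
    bucket depth is flagged as non-conforming (to be tagged or dropped)."""
    def __init__(self, rate, depth):
        self.rate = rate        # drain rate (cells per second)
        self.depth = depth      # burst tolerance (cells)
        self.level = 0.0
        self.last_t = 0.0

    def arrive(self, t):
        self.level = max(0.0, self.level - self.rate * (t - self.last_t))
        self.last_t = t
        if self.level + 1.0 > self.depth:
            return False        # violating cell
        self.level += 1.0
        return True

# Peak-rate bucket is fast-draining but shallow; sustainable-rate bucket
# is slow-draining but deep. A cell rejected by the peak bucket is not
# offered to the sustainable bucket (short-circuit below).
peak = LeakyBucket(rate=10.0, depth=1.5)
sustain = LeakyBucket(rate=2.0, depth=10.0)

arrivals = [i * 0.05 for i in range(40)]      # a 20 cells/s burst
conforming = [peak.arrive(t) and sustain.arrive(t) for t in arrivals]
```

Since the burst rate exceeds both policed rates, some but not all cells conform.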

  17. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    PubMed

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  18. Generation of acoustic self-bending and bottle beams by phase engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Li, Tongcang; Zhu, Jie; Zhu, Xuefeng; Yang, Sui; Wang, Yuan; Yin, Xiaobo; Zhang, Xiang

    2014-07-01

    Directing acoustic waves along curved paths is critical for applications such as ultrasound imaging, surgery and acoustic cloaking. Metamaterials can direct waves by spatially varying the material properties through which the wave propagates. However, this approach is not always feasible, particularly for acoustic applications. Here we demonstrate the generation of acoustic bottle beams in homogeneous space without using metamaterials. Instead, the sound energy flows through a three-dimensional curved shell in air leaving a close-to-zero pressure region in the middle, exhibiting the capability of circumventing obstacles. By designing the initial phase, we develop a general recipe for creating self-bending wave packets, which can set acoustic beams propagating along arbitrary prescribed convex trajectories. The measured acoustic pulling force experienced by a rigid ball placed inside such a beam confirms the pressure field of the bottle. The demonstrated acoustic bottle and self-bending beams have potential applications in medical ultrasound imaging, therapeutic ultrasound, as well as acoustic levitations and isolations.

  19. Ham Video Commissioning in Columbus

    NASA Image and Video Library

    2014-04-13

    Documentation of the Ham Video unit installed in the Columbus European Laboratory. Part number (P/N) is HAM-11000-0F, serial number (S/N) is 01, barcode is HAMV0001E. Image was taken during Expedition 39 Ham Video commissioning activities and released by astronaut on Twitter.

  20. Generation and control of sound bullets with a nonlinear acoustic lens.

    PubMed

    Spadoni, Alessandro; Daraio, Chiara

    2010-04-20

    Acoustic lenses are employed in a variety of applications, from biomedical imaging and surgery to defense systems and damage detection in materials. Focused acoustic signals, for example, enable ultrasonic transducers to image the interior of the human body. Currently, however, the performance of acoustic devices is limited by their linear operational envelope, which implies relatively inaccurate focusing and low focal power. Here we show a dramatic focusing effect and the generation of compact acoustic pulses (sound bullets) in solid and fluid media, with energies orders of magnitude greater than previously achievable. This focusing is made possible by a tunable, nonlinear acoustic lens, which consists of ordered arrays of granular chains. The amplitude, size, and location of the sound bullets can be controlled by varying the static precompression of the chains. Theory and numerical simulations demonstrate the focusing effect, and photoelasticity experiments corroborate it. Our nonlinear lens permits a qualitatively new way of generating high-energy acoustic pulses, which may improve imaging capabilities through increased accuracy and signal-to-noise ratios and may lead to more effective nonintrusive scalpels, for example, for cancer treatment.

  1. Frequency Representation: Visualization and Clustering of Acoustic Data Using Self-Organizing Maps.

    PubMed

    Guo, Xinhua; Sun, Song; Yu, Xiantao; Wang, Pan; Nakamura, Kentaro

    2017-11-01

    Extraction and display of frequency information in three-dimensional (3D) acoustic data are important steps to analyze object characteristics, because the characteristics, such as profiles, sizes, surface structures, and material properties, may show frequency dependence. In this study, frequency representation (FR) based on phase information in multispectral acoustic imaging (MSAI) is proposed to overcome the limit of intensity or amplitude information in image display. Experiments are performed on 3D acoustic data collected from a rigid surface engraved with five different letters. The results show that the proposed FR technique can not only identify the depth of the five letters by the colors representing frequency characteristics but also demonstrate the 3D image of the five letters, providing more detailed characteristics that are unavailable by conventional acoustic imaging.
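
A minimal self-organizing map of the kind used for clustering such frequency features can be sketched as follows. The grid size, decay schedules, and the toy 3-D feature clusters are all assumptions for illustration:

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: nodes on a 2-D grid compete for each
    sample; the winner and its grid neighbours move toward the sample."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    w = rng.random((gy * gx, data.shape[1]))
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                       # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5           # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

# Two well-separated clusters of toy 3-D "frequency feature" vectors.
rng = np.random.default_rng(7)
feats = np.vstack([rng.normal(0.0, 0.05, (100, 3)),
                   rng.normal(1.0, 0.05, (100, 3))])
w = train_som(feats)

# Quantization error: mean distance from each sample to its nearest node;
# low values mean the map has organized itself around the clusters.
qe = np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in feats])
```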

  2. Short-term change detection for UAV video

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

    In recent years, there has been an increased use of unmanned aerial vehicles (UAVs) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken several days, weeks, or even years apart. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a prerequisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed lengths of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms were applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and multivariate alteration detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer
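
The local neighborhood search that improves plain image differencing can be sketched as follows: for each pixel, keep the minimum absolute difference over all shifts of the second image within a small radius, which suppresses residual misregistration. The radius and the synthetic scene below are assumptions:

```python
import numpy as np

def neighborhood_diff(img_a, img_b, radius=2):
    """Change map via image differencing with a local neighborhood search:
    per pixel, the minimum absolute difference over all shifts of img_b
    within +/- radius (wrap-around edges for simplicity)."""
    best = np.full(img_a.shape, np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img_b, dy, axis=0), dx, axis=1)
            best = np.minimum(best, np.abs(img_a - shifted))
    return best

rng = np.random.default_rng(5)
scene = rng.random((32, 32))
# second observation: same scene with one pixel of misregistration ...
later = np.roll(scene, 1, axis=1)
later[8:12, 8:12] = 1.0          # ... plus a newly parked "vehicle"

plain = np.abs(scene - later)                       # misalignment dominates
robust = neighborhood_diff(scene, later, radius=2)  # mostly the real change
```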

  3. Track and track-side video survey technology development.

    DOT National Transportation Integrated Search

    2015-05-01

    Researchers at HiDef/Createc have completed prototype development and testing of a novel track video surveying technology called Track and Track-Side Video Survey (TTVS). TTVS is designed to capture clear video images of the track and track side ...

  4. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns

    PubMed Central

    Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang

    2014-01-01

    Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician’s time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
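
A reduced version of the texture features is sketched below: basic 8-neighbour LBP codes and their normalized histogram, without the LM filter bank or texton-clustering stages of the full method. The synthetic "smooth" and "rough" textures are assumptions:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary patterns: each interior pixel gets an
    8-bit code from thresholding its neighbours against it, and the 256-bin
    histogram of codes describes the texture."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(6)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))   # smooth gradient "tissue"
rough = rng.random((32, 32))                        # noisy "abnormal" texture

h_smooth = lbp_histogram(smooth)
h_rough = lbp_histogram(rough)
# a chi-square-style distance separates the two texture histograms
dist = 0.5 * np.sum((h_smooth - h_rough) ** 2 / (h_smooth + h_rough + 1e-12))
```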

  5. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
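
A minimal stand-in for the optical-flow stage is sketched below: a single-patch Lucas-Kanade solve for pure translation between two frames, whereas the full method tracks dense per-pixel motion on the perspective-corrected images. The Gaussian-blob frames and shift are assumptions:

```python
import numpy as np

def lk_translation(f0, f1):
    """Single-patch Lucas-Kanade translation estimate: least-squares solve
    of the brightness-constancy system built from image gradients."""
    Iy, Ix = np.gradient(f0)          # gradients along rows (y) and cols (x)
    It = f1 - f0                      # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)      # (dx, dy) in pixels

# Synthetic frames: a smooth Gaussian blob shifted by a subpixel amount.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
f0 = blob(32.0, 32.0)
f1 = blob(32.4, 32.0)                 # true motion: 0.4 px in +x

dx, dy = lk_translation(f0, f1)
```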

  6. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.

  7. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960s. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided roughly into two parts: a suite of applications programs and an executive that serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image-related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  8. Shear wave elasticity imaging based on acoustic radiation force and optical detection.

    PubMed

    Cheng, Yi; Li, Rui; Li, Sinan; Dunsby, Christopher; Eckersley, Robert J; Elson, Daniel S; Tang, Meng-Xing

    2012-09-01

    Tissue elasticity is closely related to the velocity of shear waves within biologic tissue. Shear waves can be generated by an acoustic radiation force and tracked by, e.g., ultrasound or magnetic resonance imaging (MRI) measurements. This has been shown to be able to noninvasively map tissue elasticity in depth and has great potential in a wide range of clinical applications including cancer and cardiovascular diseases. In this study, a highly sensitive optical measurement technique is proposed as an alternative way to track shear waves generated by the acoustic radiation force. A charge coupled device (CCD) camera was used to capture diffuse photons from tissue mimicking phantoms illuminated by a laser source at 532 nm. CCD images were recorded at different delays after the transmission of an ultrasound burst and were processed to obtain the time of flight for the shear wave. A differential measurement scheme involving generation of shear waves at two different positions was used to improve the accuracy and spatial resolution of the system. The results from measurements on both homogeneous and heterogeneous phantoms were compared with measurements from other instruments and demonstrate the feasibility and accuracy of the technique for imaging and quantifying elasticity. The relative error in estimation of shear wave velocity can be as low as 3.3% with a spatial resolution of 2 mm, and increases to 8.8% with a spatial resolution of 1 mm for the medium stiffness phantom. The system is shown to be highly sensitive and is able to track shear waves propagating over several centimetres given the ultrasound excitation amplitude and the phantom material used in this study. It was also found that the reflection of shear waves from boundaries between regions with different elastic properties can cause significant bias in the estimation of elasticity, which also applies to other shear wave tracking techniques. This bias can be reduced at the expense of reduced spatial
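
    The differential scheme reduces to simple arithmetic: with two push locations a known distance apart, the difference in shear-wave arrival times at a common tracking point gives the shear speed, and stiffness follows from ρc². A minimal sketch (the function name and the default density of 1000 kg/m³ for soft-tissue phantoms are illustrative assumptions):

```python
import numpy as np

def shear_modulus_from_tof(push_separation_m, dt_s, density_kg_m3=1000.0):
    """Differential time-of-flight estimate: two radiation-force pushes a
    known distance apart; the difference in shear-wave arrival times at a
    common tracking location yields the shear speed and hence stiffness."""
    c_s = push_separation_m / dt_s      # shear wave speed (m/s)
    mu = density_kg_m3 * c_s ** 2       # shear modulus (Pa)
    young_e = 3.0 * mu                  # Young's modulus, incompressible tissue
    return c_s, mu, young_e
```

    For example, a 2 mm push separation and a 1 ms arrival-time difference imply a 2 m/s shear speed. Differencing two pushes cancels the common (unknown) trigger-to-arrival offsets, which is what improves accuracy and resolution.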

  9. A theoretical study of inertial cavitation from acoustic radiation force impulse (ARFI) imaging and implications for the mechanical index

    PubMed Central

    Church, Charles C.; Labuda, Cecille; Nightingale, Kathryn

    2014-01-01

    The mechanical index (MI) attempts to quantify the likelihood that exposure to diagnostic ultrasound will produce an adverse biological effect by a nonthermal mechanism. The current formulation of the MI implicitly assumes that the acoustic field is generated using the short pulse durations appropriate to B-mode imaging. However, acoustic radiation force impulse (ARFI) imaging employs high-intensity pulses up to several hundred acoustic periods long. The effect of increased pulse durations on the thresholds for inertial cavitation was studied computationally in water, urine, blood, cardiac and skeletal muscle, brain, kidney, liver and skin. The results show that while the effect of pulse duration on cavitation thresholds in the three liquids can be considerable, reducing them by, e.g., 6%-24% at 1 MHz, the effect in tissue is minor. More importantly, the frequency dependence of the MI appears to be unnecessarily conservative, i.e., the magnitude of the exponent on frequency could be increased to 0.75. Comparison of these theoretical results with experimental measurements suggests that some tissues do not contain the pre-existing, optimally sized bubbles assumed for the MI. This means that in these tissues the MI is not necessarily a strong predictor of the probability of an adverse biological effect. PMID:25592457
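
    For reference, the standard regulatory definition of the MI, and the steeper frequency dependence that the abstract's results support, can be written as follows (the starred form with exponent 0.75 is the paper's suggested revision, not a current standard):

```latex
% Current mechanical index: derated peak rarefactional pressure
% p_{r,0.3} (MPa) over the square root of centre frequency f_c (MHz).
\mathrm{MI} = \frac{p_{r,0.3}}{\sqrt{f_c}}
% Revision supported by the computed cavitation thresholds:
% a stronger frequency dependence, exponent 0.75 instead of 0.5.
\mathrm{MI}^{*} = \frac{p_{r,0.3}}{f_c^{\,0.75}}
```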

  10. Quantitative flaw characterization with scanning laser acoustic microscopy

    NASA Technical Reports Server (NTRS)

    Generazio, E. R.; Roth, D. J.

    1986-01-01

    Surface roughness and diffraction are two factors that have been observed to affect the accuracy of flaw characterization with scanning laser acoustic microscopy. Inaccuracies can arise when the surface of the test sample is acoustically rough. It is shown that, in this case, Snell's law is no longer valid for determining the direction of sound propagation within the sample. The relationship between the direction of sound propagation within the sample, the apparent flaw depth, and the sample's surface roughness is investigated. Diffraction effects can mask the acoustic images of minute flaws and make it difficult to establish their size, depth, and other characteristics. It is shown that for Fraunhofer diffraction conditions the acoustic image of a subsurface defect corresponds to a two-dimensional Fourier transform. Transforms based on simulated flaws are used to infer the size and shape of the actual flaw.
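
    The Fraunhofer relationship can be demonstrated numerically: in the far field, the acoustic image of a flaw is, up to scaling, the 2-D Fourier transform of the flaw's aperture function, so a wider flaw produces a narrower central lobe and vice versa. A minimal NumPy illustration (this is the general diffraction relation, not the authors' inversion procedure):

```python
import numpy as np

def far_field_image(aperture):
    """Under Fraunhofer conditions, the far-field diffraction pattern of a
    subsurface flaw is the 2-D Fourier transform of its aperture function;
    flaw size can be inferred from the width of the transform's main lobe."""
    field = np.fft.fftshift(np.fft.fft2(aperture))  # centre the DC term
    return np.abs(field)                            # pattern magnitude
```

    Comparing the simulated pattern of a candidate flaw shape against the measured acoustic image is the basis for inferring the actual flaw's size and shape.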

  11. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as a recording medium in surveillance systems, and increasingly by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are in focus in a research project at Linkoping University, Image Coding Group. The accuracy of the results of these forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of obtaining reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to simultaneously record all camera outputs digitally. It is also very important to build such a system bearing in mind that image processing analysis methods are becoming more important as a complement to the human eye. Using one or more cameras produces a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system, being the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information and resolution, etc., to secure efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of

  12. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for content-based video analysis and retrieval, providing efficient access to the important images and video segments in a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and to segment the videos into shots based on their contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry-frame detection and shot segmentation is extensible to videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
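
    A common baseline for the blurry-frame step is the variance of the Laplacian: a defocused or motion-blurred frame has few sharp edges and hence a low-variance edge response. The sketch below uses this generic heuristic, not the paper's own detector, and the threshold value is an illustrative assumption that would need tuning per dataset:

```python
import numpy as np

def is_blurry(gray, threshold=100.0):
    """Flag a grayscale frame as blurry when the variance of its Laplacian
    response falls below a threshold (a generic sharpness heuristic)."""
    g = np.asarray(gray, dtype=float)
    # 4-neighbour discrete Laplacian evaluated on the interior pixels.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var() < threshold
```

    Frames flagged as blurry would be dropped before shot segmentation, so that shot boundaries are computed only from informative frames.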

  13. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)], can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in realtime, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy (0.7+/-0.3) pixels and mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
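
    Accuracy figures of the kind quoted above can be computed from point correspondences in a few lines. This sketch computes the mean and spread of target registration error (TRE) from registered versus ground-truth target positions; the function name and inputs are illustrative, not the authors' evaluation code:

```python
import numpy as np

def mean_tre(registered_pts, true_pts):
    """Target registration error: Euclidean distance between each target
    position mapped through the registration and its ground-truth location;
    returns the mean and standard deviation across targets."""
    diffs = np.asarray(registered_pts, dtype=float) - np.asarray(true_pts, dtype=float)
    d = np.linalg.norm(diffs, axis=1)  # per-target error (e.g., in mm)
    return d.mean(), d.std()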

  14. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    NASA Astrophysics Data System (ADS)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.

  15. Feasibility study of utilizing ultraportable projectors for endoscopic video display (with videos).

    PubMed

    Tang, Shou-Jiang; Fehring, Amanda; Mclemore, Mac; Griswold, Michael; Wang, Wanmei; Paine, Elizabeth R; Wu, Ruonan; To, Filip

    2014-10-01

    Modern endoscopy requires video display. Recent miniaturized, ultraportable projectors are affordable, durable, and offer quality image display. Explore feasibility of using ultraportable projectors in endoscopy. Prospective bench-top comparison; clinical feasibility study. Masked comparison study of images displayed via 2 Samsung ultraportable light-emitting diode projectors (pocket-sized SP-HO3; pico projector SP-P410M) and 1 Microvision Showwx-II Laser pico projector. BENCH-TOP FEASIBILITY STUDY: Prerecorded endoscopic video was streamed via computer. CLINICAL COMPARISON STUDY: Live high-definition endoscopy video was simultaneously displayed through each processor onto a standard liquid crystal display monitor and projected onto a portable, pull-down projection screen. Endoscopists, endoscopy nurses, and technicians rated video images; ratings were analyzed by linear mixed-effects regression models with random intercepts. All projectors were easy to set up, adjust, focus, and operate, with no real-time lapse for any. Bench-top study outcomes: Samsung pico preferred to Laser pico, overall rating 1.5 units higher (95% confidence interval [CI] = 0.7-2.4), P < .001; Samsung pocket preferred to Laser pico, 3.3 units higher (95% CI = 2.4-4.1), P < .001; Samsung pocket preferred to Samsung pico, 1.7 units higher (95% CI = 0.9-2.5), P < .001. The clinical comparison study confirmed the Samsung pocket projector as best, with a higher overall rating of 2.3 units (95% CI = 1.6-3.0), P < .001, than the Samsung pico. Low brightness currently limits pico projector use in clinical endoscopy. The pocket projector, with higher brightness levels (170 lumens), is clinically useful. Continued improvements to ultraportable projectors will fill a needed niche in endoscopy through portability, reduced cost, and equal or better image quality. © The Author(s) 2013.

  16. Computer-aided video exposure monitoring.

    PubMed

    Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J

    2000-01-01

    A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, a portable video cassette recorder, a radio-telemetry transmitter/receiver, handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments--monitoring tetrachloroethylene in the air and in breath; printing works--monitoring white spirit type solvent; tire manufacturing factory--monitoring rubber fume; and a slate quarry--monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure by engineering or by modifying work practice were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to test ventilation configurations quickly to see their effect on the worker's personal exposure.

  17. TND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    PubMed

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND for the provision of a second, more objective opinion to the radiologists by exploiting image evidence.

  18. Electrokinetic Transduction of Acoustic Waves In Ocean Sediments

    DTIC Science & Technology

    2002-09-30

    ...acoustic motion in ocean sediments. The Biot theory of poroelastic media captures much of the sediment physics left out by other models [2]. It fits...in subsurface acoustical imaging, Mine Counter-Measures, and Anti-Submarine Warfare. To obtain essential experimental data to support the modeling...Electrokinetic Transduction of Acoustic Waves In Ocean Sediments Gareth I. Block Applied Research Laboratories, U.T. Austin P.O. Box 8029

  19. The architecture of a video image processor for the space station

    NASA Technical Reports Server (NTRS)

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of the algorithms necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high-level-language programmability, modularity, and extensibility, and that can meet the required performance goals.

  20. Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging

    DOE PAGES

    Newman, Justin A.; Zhang, Shijie; Sullivan, Shane Z.; ...

    2016-05-16

    Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence and bright field by laser transmittance were all acquired simultaneously with perfect image registry at up to video rate (15 frames s⁻¹). A simple change in the incident wavelength enabled simultaneous imaging by two-photon-excited ultraviolet fluorescence, one-photon-excited visible fluorescence and laser transmittance. Development of an analytical model for the signal-to-noise enhancement afforded by synchronous digitization suggests a 15.6-fold improvement over previous photon-counting techniques. This improvement in turn allowed acquisition on nearly an order of magnitude more pixels than the preceding generation of instrumentation and reductions of well over an order of magnitude in image acquisition times. These improvements have allowed detection of protein crystals on the order of 1 µm in thickness under cryogenic conditions in the beamline. Lastly, these capabilities are well suited to support serial crystallography of crystals approaching 1 µm or less in dimension.