Science.gov

Sample records for high-speed video camera

  1. Contact freezing observed with a high speed video camera

    NASA Astrophysics Data System (ADS)

    Hoffmann, Nadine; Koch, Michael; Kiselev, Alexei; Leisner, Thomas

    2017-04-01

    Freezing of supercooled cloud droplets on collision with an ice nucleating particle (INP) is considered one of the most effective heterogeneous freezing mechanisms. Potentially, it could play an important role in the rapid glaciation of a mixed-phase cloud, especially if coupled with an ice multiplication mechanism active at moderate subzero temperatures. The necessary condition for such coupling would be, among others, the presence of very efficient INPs capable of inducing ice nucleation of supercooled drizzle droplets in the temperature range of -5°C to -20°C. Some mineral dust particles (K-feldspar) and biogenic INPs (Pseudomonas bacteria, birch pollen) have recently been identified as such very efficient INPs. However, when observed with a high speed video (HSV) camera, the contact nucleation induced by these two classes of INPs exhibits very different behavior. Whereas bacterial INPs can induce freezing within a millisecond of initial contact with supercooled water, birch pollen needs much more time to initiate freezing. Mineral dust particles seem to induce ice nucleation faster than birch pollen but slower than bacterial INPs. In this contribution we show HSV records of individual supercooled droplets suspended in an electrodynamic balance and colliding with airborne INPs of various types. The HSV camera is coupled with a long-working-distance microscope, allowing us to observe the contact nucleation of ice at very high spatial and temporal resolution. The average time needed to initiate freezing has been measured for each INP species. This time does not necessarily correlate with the contact freezing efficiency of the ice nucleating particles. We discuss possible mechanisms explaining this behavior and potential implications for future ice nucleation research.

  2. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they are a kind of background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
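
    As a rough illustration of how two synchronized cameras yield a 3-D position, here is a minimal rectified pinhole-stereo sketch. The focal length, baseline, and pixel coordinates below are invented for illustration and are not parameters of the robot described above:

```python
# Minimal rectified-stereo triangulation: two horizontally separated
# cameras see the same shuttlecock at different image x-coordinates.
f = 800.0        # focal length in pixels (assumed)
baseline = 0.5   # camera separation in meters (assumed)

# Pixel coordinates of the shuttlecock in the left and right images
# (assumed; principal point taken at the image origin for simplicity).
xl, yl = 420.0, 240.0
xr = 380.0

disparity = xl - xr                 # pixels
Z = f * baseline / disparity        # depth in meters
X = Z * xl / f                      # lateral position in meters
Y = Z * yl / f                      # vertical position in meters

print(X, Y, Z)
```

    Tracking the triangulated point over successive frame pairs then gives the velocity needed for the landing-point prediction.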

  3. High Speed Video Insertion

    NASA Astrophysics Data System (ADS)

    Janess, Don C.

    1984-11-01

    This paper describes a means of inserting alphanumeric characters and graphics into a high speed video signal and locking that signal to an IRIG B time code. A Model V-91 IRIG processor, developed by Instrumentation Technology Systems under contract to Instrumentation Marketing Corporation, has been designed to operate in conjunction with the NAC Model FHS-200 High Speed Video Camera, which operates at 200 fields per second. The system synchronizes the vertical and horizontal drive signals such that the vertical sync precisely coincides with the five-millisecond transitions in the IRIG time code. Additionally, the unit allows for the insertion of an IRIG time message as well as other data and symbols.

  4. Network-linked long-time recording high-speed video camera system

    NASA Astrophysics Data System (ADS)

    Kimura, Seiji; Tsuji, Masataka

    2001-04-01

    This paper describes a network-oriented, long-recording-time high-speed digital video camera system that uses an HDD (hard disk drive) as the recording medium. Semiconductor memories (DRAM, etc.) are the most common recording media in existing high-speed digital video cameras, used for their advantage of high-speed writing and reading of picture data. The drawback is that their recording time is limited to only several seconds because the data volume is very large. A recording time of several seconds is sufficient for many applications, but a much longer recording time is required in applications where an exact prediction of the trigger timing is hard to make. In recent years, the recording density of HDDs has improved dramatically, which has drawn more attention to their value as a long-recording-time medium. We conceived the idea that a compact system capable of long-time recording could be built if an HDD could serve as the memory unit for high-speed digital image recording. However, the data rate of such a system, recording 640 × 480 pixel pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/s, far beyond the writing speed of a common HDD. We therefore developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
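
    The quoted data rate can be checked directly. In this minimal sketch the frame size, frame rate, and bit depth come from the abstract, while the sustained HDD write speed of 20 Mbyte/s is an illustrative assumption for drives of that era, not a figure from the paper:

```python
# Uncompressed data rate of the camera described in the abstract:
# 640 x 480 pixels, 8-bit grayscale (1 byte/pixel), 500 frames per second.
width, height, bytes_per_pixel, fps = 640, 480, 1, 500

data_rate = width * height * bytes_per_pixel * fps   # bytes/s
data_rate_mb = data_rate / 1e6                       # decimal Mbyte/s

# Assumed sustained HDD write speed for the period (illustrative only).
hdd_write_mb = 20.0

# Minimum compression ratio needed so the stream fits the disk.
required_ratio = data_rate_mb / hdd_write_mb

print(f"raw rate: {data_rate_mb:.1f} Mbyte/s, "
      f"compression needed: {required_ratio:.1f}:1")
```

    Under this assumption, a compression ratio of roughly 8:1 makes the stream fit the disk, which is the role the dedicated image compression system plays.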

  5. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. The camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirror and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera with such a short resolution time becomes possible.

  6. HDR ¹⁹²Ir source speed measurements using a high speed video camera

    SciTech Connect

    Fonseca, Gabriel P.; Rubo, Rodrigo A.; Sales, Camila P. de; Verhaegen, Frank

    2015-01-15

    Purpose: The dose delivered with an HDR ¹⁹²Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component depends directly on the source speed profile, and the goal of this study is to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a ¹⁹²Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s, with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses that stay within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for short interdwell distances. Dose variations due to the transit component are much lower than prescribed brachytherapy treatment doses, although the transit dose component should be evaluated individually for clinical cases.
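
    The scale of the transit contribution can be sketched from the abstract's measured average speed. The interdwell distance and average speed below are from the abstract; the local dose rate is a purely hypothetical placeholder, not a value from the paper:

```python
# Back-of-envelope transit-time estimate between two dwell positions.
avg_speed_cm_s = 33.0      # average source speed reported in the abstract
interdwell_cm = 5.0        # largest interdwell distance studied

transit_time_s = interdwell_cm / avg_speed_cm_s

# Illustrative transit-dose estimate at a point near the source path,
# assuming a (hypothetical) local dose rate of 0.25 Gy/s at that point.
dose_rate_gy_s = 0.25
transit_dose_mgy = dose_rate_gy_s * transit_time_s * 1000.0

print(f"transit time: {transit_time_s:.3f} s, "
      f"transit dose: {transit_dose_mgy:.1f} mGy")
```

    Even this crude estimate shows why sub-second transit intervals can accumulate tens of mGy, the order of magnitude the study quantifies properly with Monte Carlo simulation.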

  7. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  8. Introducing Contactless Blood Pressure Assessment Using a High Speed Video Camera.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2016-04-01

    Recent studies have demonstrated that blood pressure (BP) can be estimated using pulse transit time (PTT). For PTT calculation, a photoplethysmogram (PPG) is usually used to detect a time lag in pulse wave propagation that correlates with BP. Until now, PTT and PPG were registered using a set of body-worn sensors. In this study a new methodology is introduced allowing contactless registration of PTT and PPG using a high speed camera, resulting in the corresponding image-based PTT (iPTT) and image-based PPG (iPPG). The iPTT value can potentially be utilized for blood pressure estimation; however, the extent of correlation between iPTT and BP is unknown. The goal of this preliminary feasibility study was to introduce the methodology for contactless generation of iPPG and iPTT and to make an initial estimate of the extent of correlation between iPTT and BP "in vivo." A short cycling exercise was used to generate BP changes in healthy adult volunteers in three consecutive visits. BP was measured by a verified BP monitor simultaneously with iPTT registration at three exercise points: rest, exercise peak, and recovery. iPPG was simultaneously registered at two body locations during the exercise using a high speed camera at 420 frames per second. iPTT was calculated as the time lag between the pulse waves of two iPPGs registered by simultaneous recording of the head and palm areas. The average inter-person correlation between PTT and iPTT was 0.85 ± 0.08. The range of inter-person correlations between PTT and iPTT was from 0.70 to 0.95 (p < 0.05). The average inter-person coefficient of correlation between SBP and iPTT was -0.80 ± 0.12. The range of correlations between systolic BP and iPTT was from 0.632 to 0.960 with p < 0.05 for most of the participants. Preliminary data indicated that a high speed camera can potentially be utilized for unobtrusive contactless monitoring of abrupt blood pressure changes in a variety of settings. The initial prototype system was able to
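
    The core computation, estimating the time lag between two pulse waveforms, can be sketched with cross-correlation. The 420 fps frame rate is from the abstract; the synthetic Gaussian pulse signals and the true lag of 12 frames are invented for illustration and do not represent the study's actual waveforms:

```python
import math

fps = 420                 # camera frame rate from the abstract
true_lag_frames = 12      # synthetic pulse-wave delay (assumed)

# Two synthetic iPPG-like traces: the "palm" pulse is a delayed copy of
# the "head" pulse (single Gaussian pulse, purely illustrative).
n = 420
head = [math.exp(-(((i - 100) / 10.0) ** 2)) for i in range(n)]
palm = [math.exp(-(((i - 100 - true_lag_frames) / 10.0) ** 2))
        for i in range(n)]

# Find the non-negative shift that maximizes the correlation.
def corr_at(shift):
    return sum(head[i] * palm[i + shift] for i in range(n - shift))

best_shift = max(range(40), key=corr_at)
iptt_ms = best_shift / fps * 1000.0
print(f"estimated iPTT: {iptt_ms:.1f} ms")
```

    At 420 fps each frame of lag corresponds to about 2.4 ms, which sets the temporal resolution of an iPTT estimate obtained this way.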

  9. High-speed video capture by a single flutter shutter camera using three-dimensional hyperbolic wavelets

    NASA Astrophysics Data System (ADS)

    Huang, Kuihua; Zhang, Jun; Hou, Jinxin

    2014-09-01

    Given the ease of implementation with modern sensors, this paper further exploits the possibility of recovering high-speed video (HSV) with a single flutter shutter camera. Taking into account the different degrees of smoothness along the spatial and temporal dimensions of HSV, this paper proposes a three-dimensional hyperbolic wavelet basis based on the Kronecker product to jointly model the spatial and temporal redundancy of HSV. Besides, we incorporate the total variation of temporal correlations in HSV as prior knowledge to further enhance the reconstruction quality. We recover the underlying HSV frames from the observed low-speed coded video by solving a convex minimization problem. Experimental results on both simulated and real-world videos demonstrate the validity of the proposed method.

  10. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  11. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  12. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. I: Nucleation and growth model

    PubMed Central

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Tachibana, Masatoshi; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. has been investigated by image processing using a high-speed video camera. In order to express the temporal change in the stalk length resulting from the contraction, a damped spring model and a nucleation and growth model are applied. A double exponential is deduced from a conventional damped spring model, while a stretched exponential is newly proposed from a nucleation and growth model. The stretched exponential function is more suitable for the curve fitting and suggests a more specific contraction mechanism in which the contraction of the stalk begins near the cell body and spreads downwards along the stalk. The index value of the stretched exponential is evaluated in the range from 1 to 2, in accordance with the model in which the contraction proceeds through nucleation and growth in a one-dimensional space. PMID:27857602
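
    Fitting a stretched exponential to length-versus-time data can be sketched as follows. All parameter values (L0, L∞, τ, and the "true" exponent β = 1.5, chosen inside the 1-2 range reported above) are invented synthetic values for illustration, not the paper's data:

```python
import math

# Stretched-exponential contraction model from the nucleation-and-growth
# picture: L(t) = L_inf + (L0 - L_inf) * exp(-(t / tau) ** beta).
def stretched(t, L0, L_inf, tau, beta):
    return L_inf + (L0 - L_inf) * math.exp(-((t / tau) ** beta))

# Synthetic "measured" stalk lengths with beta = 1.5 (arbitrary units).
L0, L_inf, tau, beta_true = 100.0, 20.0, 2.0, 1.5
ts = [0.1 * k for k in range(1, 80)]
data = [stretched(t, L0, L_inf, tau, beta_true) for t in ts]

# Grid-search the exponent that minimizes the sum of squared errors
# (a real analysis would fit all parameters, e.g. with least squares).
def sse(beta):
    return sum((stretched(t, L0, L_inf, tau, beta) - y) ** 2
               for t, y in zip(ts, data))

betas = [1.0 + 0.01 * k for k in range(101)]   # candidates in [1, 2]
beta_fit = min(betas, key=sse)
print(f"fitted beta = {beta_fit:.2f}")
```

    The same scheme, applied to real contraction curves, is how an exponent between 1 and 2 would be extracted and compared against the double-exponential alternative.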

  13. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. I: Nucleation and growth model.

    PubMed

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Tachibana, Masatoshi; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. has been investigated by image processing using a high-speed video camera. In order to express the temporal change in the stalk length resulting from the contraction, a damped spring model and a nucleation and growth model are applied. A double exponential is deduced from a conventional damped spring model, while a stretched exponential is newly proposed from a nucleation and growth model. The stretched exponential function is more suitable for the curve fitting and suggests a more specific contraction mechanism in which the contraction of the stalk begins near the cell body and spreads downwards along the stalk. The index value of the stretched exponential is evaluated in the range from 1 to 2, in accordance with the model in which the contraction proceeds through nucleation and growth in a one-dimensional space.

  14. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    ERIC Educational Resources Information Center

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses…

  15. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    NASA Astrophysics Data System (ADS)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.
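
    The pulse-speed measurement amounts to counting frames between two marks of known separation. A minimal sketch; the frame rate, distance, and frame count below are typical classroom-style numbers, not values from the article:

```python
# Pulse speed from slow-motion footage: count the frames a pulse takes
# to travel a known distance along the spring.
fps = 240                 # typical smartphone slow-motion frame rate
distance_m = 1.8          # marked travel distance along the spring
frames_elapsed = 54       # frames between the pulse passing the two marks

travel_time_s = frames_elapsed / fps
pulse_speed = distance_m / travel_time_s     # m/s
print(f"pulse speed: {pulse_speed:.1f} m/s")
```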

  16. High Speed Holographic Movie Camera

    NASA Astrophysics Data System (ADS)

    Hentschel, W.; Lauterborn, W.

    1985-08-01

    A high speed holographic movie camera system has been developed to investigate the dynamic behavior of cavitation bubbles in liquids. As a light source for holography, a high-power, multiply cavity-dumped argon-ion laser is used to record very long hologram series with framing rates up to 300 kHz. For separating successively recorded holograms, two spatial multiplexing techniques are applied simultaneously: rotation of the holographic plate or film, and acousto-optic beam deflection. With the combination of these two techniques we achieve up to 4000 single holograms in one series.

  17. High Speed Holographic Movie Camera

    NASA Astrophysics Data System (ADS)

    Hentschel, W.; Lauterborn, W.

    1985-02-01

    A high speed holographic movie camera system has been developed in our laboratories at the Third Physical Institute of the University of Göttingen. As a light source for holography, a high-power, multiply cavity-dumped argon-ion laser is used to record very long hologram series with framing rates up to 300 kHz. For separating successively recorded holograms, two spatial multiplexing techniques are applied simultaneously: rotation of the holographic plate or film, and acousto-optic beam deflection. With the combination of these two techniques we achieve up to 4000 single holograms in one series.

  18. Flickering aurora studies using high speed cameras

    NASA Astrophysics Data System (ADS)

    McHarg, M. G.; Stenbaek-Nielsen, H. C.; Samara, M.; Michell, R.; Hampton, D. L.; Haaland, R. K.

    2009-12-01

    We report on observations of flickering aurora using two different digital camera systems. The first, a high speed Phantom 7 camera with a Video Scope HS 1845 HS image intensifier coupled to a 50 mm lens, provides fast frame rates, with data recorded at 200 and 400 frames per second over a 512x384 pixel, 11.8x8.8 degree field of view. The second system is an Andor Electron-Multiplying Charge-Coupled Device (EMCCD) running at 33 frames per second with a 256 by 256 format covering a 16x16 degree field of view. Both systems made observations of flickering aurora in the magnetic zenith, using optical filters transmitting the prompt blue and red emissions of nitrogen. The Andor system was deployed at the Poker Flat rocket range near Fairbanks, AK, while the Phantom system was deployed approximately 400 miles north of Poker Flat at Toolik Lake observatory. We find both narrow-band low-frequency (~5-10 Hz) and wider-band, higher-frequency (50-70 Hz) oscillations in the optical intensity of flickering aurora. A direct comparison is presented between the optical data and the dispersion relation for the ion cyclotron waves thought to be responsible for modulating the electrons that cause the intensity fluctuations seen in flickering aurora.
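
    Extracting the flicker frequencies comes down to spectral analysis of a pixel- or zenith-intensity time series. A pure-Python DFT sketch on a synthetic signal; the 200 fps sampling rate matches the Phantom data above, while the 8 Hz flicker component and its amplitude are invented for illustration:

```python
import math

fps = 200        # Phantom frame rate from the abstract
n = 200          # one second of samples, so bin k corresponds to k Hz
flicker_hz = 8   # synthetic flicker frequency (inside the 5-10 Hz band)

# Synthetic intensity time series: constant background plus flicker.
signal = [10.0 + 2.0 * math.sin(2 * math.pi * flicker_hz * i / fps)
          for i in range(n)]

# DFT magnitude at integer frequencies 1..99 Hz.
def dft_mag(k):
    re = sum(s * math.cos(2 * math.pi * k * i / n)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * i / n)
             for i, s in enumerate(signal))
    return math.hypot(re, im)

peak_hz = max(range(1, 100), key=dft_mag)
print(f"dominant flicker frequency: {peak_hz} Hz")
```

    With 400 fps data the usable band extends to 200 Hz, which is what allows the 50-70 Hz oscillations to be resolved.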

  19. High Speed Video for Airborne Instrumentation Application

    NASA Technical Reports Server (NTRS)

    Tseng, Ting; Reaves, Matthew; Mauldin, Kendall

    2006-01-01

    A flight-worthy high speed color video system has been developed. Extensive system development and ground and environmental testing has yielded a flight-qualified High Speed Video System (HSVS). This HSVS was initially used on the F-15B #836 for the Lifting Insulating Foam Trajectory (LIFT) project.

  20. Laser Trigger For High Speed Camera

    NASA Astrophysics Data System (ADS)

    Chang, Rong-Seng; Lin, Chin-Wu; Cheng, Tung

    1987-09-01

    A high speed camera coupled with a laser trigger to capture high speed, unpredictable events has many applications, such as scoring systems for the end game of missile interception, warhead explosive studies, etc. When the event happens in a very short duration, the repetition rate of the laser ranging must be as high as 5 kHz and the pulse duration should be less than 10 ns. In some environments, such as inside an aircraft, the available space to set up a high speed camera is limited, and a large-film-capacity camera cannot be used. In order to use small-capacity film, the exact trigger time for the camera is especially important. The target velocity, camera acceleration characteristics, speed regulation, camera size, weight, and ruggedness must all be considered before the laser trigger is designed. An electric temporal gate is used to measure the time-of-flight ranging data. The triangulation distance measurement principle is also used to obtain the range when the baseline, i.e., the distance between the laser transmitter and receiver, is large enough.

  1. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. II: Viscosity effect of several types of polymer additives.

    PubMed

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. in polymer solutions with various viscosities has been investigated by image processing using a high-speed video camera. The viscosity of the external fluid ranges from 1 to 5 mPa·s for different polymer additives such as hydroxypropyl cellulose, polyethylene oxide, and Ficoll. The temporal change in the contraction length of Vorticella sp. in various macromolecular solutions is fitted well by a stretched exponential function based on the nucleation and growth model. The maximum speed of the contractile process monotonically decreases with an increase in the external viscosity, in accordance with power-law behavior. The index values are approximately 0.5, which suggests that the viscous energy dissipated by the contraction of Vorticella sp. is constant in a macromolecular environment.

  2. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. II: Viscosity effect of several types of polymer additives

    PubMed Central

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. in polymer solutions with various viscosities has been investigated by image processing using a high-speed video camera. The viscosity of the external fluid ranges from 1 to 5 mPa·s for different polymer additives such as hydroxypropyl cellulose, polyethylene oxide, and Ficoll. The temporal change in the contraction length of Vorticella sp. in various macromolecular solutions is fitted well by a stretched exponential function based on the nucleation and growth model. The maximum speed of the contractile process monotonically decreases with an increase in the external viscosity, in accordance with power-law behavior. The index values are approximately 0.5, which suggests that the viscous energy dissipated by the contraction of Vorticella sp. is constant in a macromolecular environment. PMID:27857603

  3. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

    2008-11-01

    Visualization of explosion phenomena is very important and essential for evaluating the performance of explosives. The phenomena, however, generate blast waves and fragments from casings, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. This confirms that the system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  4. The high-speed camera ULTRACAM

    NASA Astrophysics Data System (ADS)

    Marsh, T. R.; Dhillon, V. S.

    2006-08-01

    ULTRACAM is a high-speed, tri-band CCD camera designed for observations of time-variable celestial objects. Commissioned on the 4.2 m WHT on La Palma, it has now been used to observe many types of phenomena and objects, including stellar occultations, accreting black holes, neutron stars and white dwarfs, pulsars, eclipsing binaries, and pulsating stars. In this paper we describe the salient features of ULTRACAM and discuss some of the results of its use.

  5. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  6. High-strain-rate fracture behavior of steel: the new application of a high-speed video camera to the fracture initiation experiments of steel

    NASA Astrophysics Data System (ADS)

    Suzuki, Goro; Ichinose, Kensuke; Gomi, Kenji; Kaneda, Teruo

    1997-12-01

    High-speed image capture was used to determine the fracture initiation load of a hot-rolled steel under rapid loading conditions. The loading tests were carried out on compact specimens: single-edge-notched, fatigue-cracked plates loaded in tension. The impact velocities in the tests were 0.1-5.0 m/s. The influence of the impact velocity on the fracture initiation load was confirmed, and this new application of a high-speed camera to fracture initiation experiments was thereby validated.

  7. High Speed and Slow Motion: The Technology of Modern High Speed Cameras

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2011-01-01

    The enormous progress in the fields of microsystem technology, microelectronics and computer science has led to the development of powerful high speed cameras. Recently a number of such cameras became available as low cost consumer products which can also be used for the teaching of physics. The technology of high speed cameras is discussed,…

  9. High-speed shutter for mirror cameras

    NASA Astrophysics Data System (ADS)

    Trofimenko, Vladimir V.; Klimashin, V. P.; Drozhbin, Yu. A.

    1999-06-01

    High-speed mirror cameras are mainly used for investigating fast processes in a wide spectral range of radiation, including the ultraviolet and infrared regions (from 0.2 to 11 micrometers). High-speed shutters for these cameras must be non-selective and, when opened, must transmit all radiation without refraction, absorption, or scattering. Electromechanical, electrodynamic, and induction-dynamic shutters possess such properties because their optical channels contain no medium. Electromechanical shutters are devices where the displacement of the working blind which opens or closes an aperture is produced by a spring. Such shutters are relatively slow and are capable of closing an aperture 50 mm in diameter in 10-15 ms. Electrodynamic and induction-dynamic shutters are devices where displacement of a blind is produced by the electromagnetic interaction between circuits carrying electric currents. In an induction-dynamic shutter the secondary circuit is the current-conducting blind itself, in which a short-circuited loop forms. The latter is quicker because of the lower mass of its movable secondary circuit. For this reason, induction-dynamic shutters with a flat primary-circuit coil and a load-bearing aluminum plate tightly fitted to it have been investigated. The blind which opens or closes the aperture was attached to this plate. The dependence of cut-off time on the form, size, and number of turns of the primary-circuit coil; on the size, material, thickness, and weight of the load-bearing plate and the blind; and on the capacitance in the discharge circuit and the capacitor voltage has been investigated. The influence of the ambient atmosphere on the cut-off time was also studied. For this purpose the shutter was placed in a chamber where vacuum up to 10- atm could be produced. As a result, the values of the above-mentioned parameters have been optimized and the designs of the shutters presented here have been developed.

  10. Errors in particle tracking velocimetry with high-speed cameras.

    PubMed

    Feng, Yan; Goree, J; Liu, Bin

    2011-05-01

    Velocity errors in particle tracking velocimetry (PTV) are studied. When using high-speed video cameras, the velocity error may increase at a high camera frame rate. This increase in velocity error is due to particle-position uncertainty, which is one of the two sources of velocity errors studied here. The other source of error is particle acceleration, which has the opposite trend of diminishing at higher frame rates. Both kinds of errors can propagate into quantities calculated from velocity, such as the kinetic temperature of particles or correlation functions. As demonstrated in a dusty plasma experiment, the kinetic temperature of particles has no unique value when measured using PTV, but depends on the sampling time interval or frame rate. It is also shown that an artifact appears in an autocorrelation function computed from particle positions and velocities, and it becomes more severe when a small sampling-time interval is used. Schemes to reduce these errors are demonstrated.
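
    The two opposing error trends can be made concrete. With a per-frame position uncertainty σx, a finite-difference velocity v = Δx/Δt carries a random error of about √2·σx/Δt, which grows with frame rate, while the acceleration-induced systematic error is about a·Δt/2, which shrinks with it. A sketch with assumed, illustrative values (not the paper's experimental parameters):

```python
import math

# Two PTV velocity-error sources versus frame rate (illustrative values).
sigma_x = 1e-6      # position uncertainty per frame, meters (assumed)
accel = 0.5         # particle acceleration, m/s^2 (assumed)

def random_err(fps):       # sqrt(2) * sigma_x / dt, dt = 1/fps
    return math.sqrt(2) * sigma_x * fps

def accel_err(fps):        # ~ a * dt / 2
    return accel / (2 * fps)

# Position-uncertainty error dominates at high frame rates,
# acceleration error at low frame rates.
assert random_err(10_000) > accel_err(10_000)
assert random_err(50) < accel_err(50)

# Frame rate where the two contributions balance.
fps_balance = math.sqrt(accel / (2 * math.sqrt(2) * sigma_x))
print(f"errors balance near {fps_balance:.0f} fps")
```

    This is why, as the abstract notes, a PTV-derived kinetic temperature depends on the sampling interval rather than having a unique value.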

  11. The VK-8L High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Venatovsky, I. V.; Tsukanov, A. A.; Kirillov, V. A.

    1985-02-01

    To enhance the time resolution of high-speed cine equipment during the investigation of rapidly evolving processes, the light source illuminating an object under test is provided by solid-state laser exposure devices operating in the Q-switched (Q-factor modulation) mode. With a high-speed cine camera run in the continuous-scanning mode, these devices permit a sequence of frames to be obtained within a short exposure time of 150 to 200 ns. At scanning speeds of up to 250 m/s this ensures satisfactory image quality from the smear viewpoint. In the case of faster continuous-scanning speeds and shorter exposure times, it becomes necessary to run the high-speed camera in the frame-by-frame cinematography mode.

  12. Defect visualization in FRP-bonded concrete by using high speed camera and motion magnification technique

    NASA Astrophysics Data System (ADS)

    Qiu, Qiwen; Lau, Denvid

    2017-04-01

    High-speed cameras have the unique capability of recording fast-moving objects. By using a video processing technique such as motion magnification, the small motions recorded by a high-speed camera can be visualized. The combined use of a video camera and motion magnification is attractive for inspecting structures from a distance, owing to commonplace availability, operational convenience, and cost-efficiency. This paper presents a non-contact method to evaluate defects in FRP-bonded concrete structural elements based on surface motion analysis of high-speed video. In this study, an instantaneous air pressure is used to initiate vibration of the FRP-bonded concrete and cause distinct vibration at the interfacial defects. The entire structural surface under the air pressure is recorded by a high-speed camera, and the surface motion in the video is amplified by motion magnification processing. The experimental results demonstrate that motion in the interfacial defect region can be visualized in the high-speed video with motion magnification. This validates the effectiveness of the new NDT method for defect detection in the whole composite structural member. The use of a high-speed camera and motion magnification has the advantages of remote detection, efficient inspection, and sensitive measurement, which would be beneficial to structural health monitoring.
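
    The amplification step can be illustrated with a minimal sketch of the Eulerian approach to motion magnification: amplify each pixel's temporal fluctuation about its mean. (Real implementations also band-pass filter in time and decompose spatially; both are omitted here, and the frame layout is illustrative.)

    ```python
    def magnify_motion(frames, alpha):
        """Simplified Eulerian motion magnification (a sketch, not the full
        pipeline): amplify each pixel's temporal variation about its
        temporal mean by factor alpha.

        frames: list of frames, each a list of pixel intensities.
        """
        n = len(frames)
        width = len(frames[0])
        # temporal mean of each pixel across the whole clip
        mean = [sum(f[i] for f in frames) / n for i in range(width)]
        # add back the amplified temporal fluctuation
        return [[mean[i] + alpha * (f[i] - mean[i]) for i in range(width)]
                for f in frames]

    # A pixel flickering by +/-1 around 100 becomes +/-10 with alpha=10,
    # while the static pixel (constant 50) is unchanged.
    frames = [[100, 50], [101, 50], [99, 50]]
    print(magnify_motion(frames, alpha=10))
    ```

    The static background stays fixed while the tiny vibration is scaled up, which is exactly what makes defect-region motion visible in the processed video.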

  13. High-speed cameras at Los Alamos

    NASA Astrophysics Data System (ADS)

    Brixner, Berlyn

    1997-05-01

    In 1943, there was no camera with the microsecond resolution needed for research in atomic bomb development. We had the Mitchell camera (100 fps), the Fastax (10,000), the Marley (100,000), the drum streak (moving slit image) with 10^-5 s resolution, and electro-optical shutters for 10^-6 s. Julian Mack invented a rotating-mirror camera for 10^-7 s, which was in use by 1944. Small rotating-mirror changes secured a resolution of 10^-8 s. Photography of oscilloscope traces soon recorded 10^-6 s resolution, which was later improved to 10^-8 s. Mack also invented two time-resolving spectrographs for studying the radiation of the first atomic explosion. Much later, he made a large-aperture spectrograph for shock-wave spectra. An image-dissecting drum camera running at 10^7 frames per second (fps) was used for studying high-velocity jets. Brixner invented a simple streak camera which gave 10^-8 s resolution. Using a moving-film camera, an interferometer pressure gauge was developed for measuring shock-front pressures up to 100,000 psi. An existing Bowen 76-lens frame camera was speeded up by our turbine-driven mirror to reach 1,500,000 fps. Several streak cameras were made with writing arms from 4 1/2 to 40 in. and apertures from f/2.5 to f/20. We made framing cameras with top speeds of 50,000, 1,000,000, 3,500,000, and 14,000,000 fps.

  14. Observation of alloy solidification using high-speed video

    NASA Technical Reports Server (NTRS)

    Bassler, B. T.; Hofmeister, W. H.; Bayuzick, R. J.; Gorenflo, R.; Bergman, T.; Stockum, L.

    1992-01-01

    A high-speed video instrumentation system was used to observe solidification of undercooled Ti-51 at. pct Al. The camera system developed by Battelle is capable of operation at rates up to 12,000 frames per second. The system digitizes and stores video images acquired by a 64 x 64 pixel silicon photodiode array. In a joint effort with Vanderbilt University the camera was used to observe three transformations of the undercooled alloys, using containerless processing by electromagnetic levitation. The first was solidification where nucleation was induced at an undercooling of 9 percent Tl, where Tl is the liquidus temperature of the alloy, and the second was solidification where nucleation was spontaneous at an undercooling of 15 percent Tl. The third event was a solid-state nucleation and growth transformation following the solidification at an undercooling of 15 percent Tl.

  15. Design of high speed camera based on CMOS technology

    NASA Astrophysics Data System (ADS)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capability of a high-speed camera to capture high-speed images has been evaluated using CMOS image sensors. There are two types of image sensors, CCD and CMOS. A CMOS sensor consumes less power than a CCD sensor and can capture images more rapidly. High-speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag control, in golf training aids, and in bullet-trajectory measurement in the military. The high-speed camera system made in this study has the following components: a CMOS image sensor that can capture about 500 frames per second at a resolution of 1280×1024; an FPGA and DDR2 memory that control the image sensor and store images; a Camera Link module that transmits stored data to a PC; and an RS-422 communication function that enables control of the camera from a PC.

  16. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s[superscript -1] and Tracker Video Analysis (Tracker) software. We present empirical data for…

  18. Visualization of high speed phenomena using high-speed infrared camera

    NASA Astrophysics Data System (ADS)

    Yaoita, T.; Marcotte, F.

    2017-02-01

    A standard infrared camera requires a certain integration time for each exposure, making it unsuitable for high-speed photography. With an infrared camera that can continuously buffer image data efficiently, high-speed photography at 2,000 fps is possible at 320×240 pixels, and at 11,000 fps at 128×100 pixels in windowed mode. The heat generated in the specimen is used to monitor the start point of fracture and to measure the temperature of combustion gases.

  19. High-speed digital video tracking system for generic applications

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hallamasek, Karen G.

    2001-04-01

    The value of high-speed imaging for making subjective assessments is widely recognized, but the inability to acquire useful data from image sequences in a timely fashion has severely limited the use of the technology. 4DVideo has created a foundation for a generic instrument that can capture kinematic data from high-speed images. The new system has been designed to acquire (1) two-dimensional trajectories of points; (2) three-dimensional kinematics of structures or linked rigid-bodies; and (3) morphological reconstructions of boundaries. The system has been designed to work with an unlimited number of cameras configured as nodes in a network, with each camera able to acquire images at 1000 frames per second (fps) or better, with a spatial resolution of 512 X 512 or better, and an 8-bit gray scale. However, less demanding configurations are anticipated. The critical technology is contained in the custom hardware that services the cameras. This hardware optimizes the amount of information stored, and maximizes the available bandwidth. The system identifies targets using an algorithm implemented in hardware. When complete, the system software will provide all of the functionality required to capture and process video data from multiple perspectives. Thereafter it will extract, edit and analyze the motions of finite targets and boundaries.
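
    The target-identification step can be sketched in software (the actual system implements its algorithm in custom hardware, and that algorithm is not specified in the abstract; the threshold-and-centroid scheme and all names below are illustrative assumptions). Thresholding a frame and taking the intensity-weighted centroid of the bright pixels yields one sub-pixel 2D trajectory point per frame:

    ```python
    def target_centroid(frame, threshold):
        """Hypothetical target identification: threshold the frame and
        return the intensity-weighted centroid (x, y) of above-threshold
        pixels, or None if no target is present."""
        sx = sy = sw = 0.0
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if v >= threshold:
                    sx += x * v
                    sy += y * v
                    sw += v
        return (sx / sw, sy / sw) if sw else None

    # A small bright target in a 4x4 frame; brighter right column pulls
    # the centroid toward x ~ 1.67 while y stays centered at 1.5.
    frame = [[0,  0,  0, 0],
             [0, 40, 80, 0],
             [0, 40, 80, 0],
             [0,  0,  0, 0]]
    print(target_centroid(frame, threshold=30))
    ```

    Repeating this per frame per camera gives the two-dimensional point trajectories listed as the system's first data product.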

  20. The application of high speed video system for blasting research

    SciTech Connect

    Liu, Q.

    1994-12-31

    Since the establishment of the CANMET Experimental Mine in Val d'Or, Quebec, in 1991, research activities in rock fragmentation have been carried out not only in the underground laboratory but also in production mines in the northwest Quebec region. Among the instruments available for rock fragmentation studies, the Kodak EktaPro 1000 Motion Analyzer (also called the high-speed video system) has been one of the most useful tools for monitoring blasts both underground and in open pits. This system is capable of recording events at rates from 50 to 12,000 frames per second (fps). The buffer memory has a capacity for storing 2,050 images. To use this system underground, a steel box was made with bullet-proof glass in the front to protect the camera. The processing and recording equipment are placed in an aluminum case which can be located about 30 m away from the camera via an extension cable. For each blast monitored, high-frequency geophones and accelerometers were used to monitor ground vibrations and give a timing reference for the explosive detonations to correlate with the captured images. This paper describes the practical features of the high-speed video system, as well as some applications for monitoring blasts in underground mines and in an open pit. The camera protection and lighting set-up for underground application are described in detail. Observation analyses and technical findings which have helped the mines improve their blasting practices are also discussed.

  1. Coded strobing photography: compressive sensing of high speed periodic videos.

    PubMed

    Veeraraghavan, Ashok; Reddy, Dikpal; Raskar, Ramesh

    2011-04-01

    We show that, via temporal modulation, one can observe and capture a high-speed periodic video well beyond the abilities of a low-frame-rate camera. By strobing the exposure with unique sequences within the integration time of each frame, we take coded projections of dynamic events. From a sequence of such frames, we reconstruct a high-speed video of the high-frequency periodic process. Strobing is used in entertainment, medical imaging, and industrial inspection to generate lower beat frequencies. But this is limited to scenes with a detectable single dominant frequency and requires high-intensity lighting. In this paper, we address the problem of sub-Nyquist sampling of periodic signals and show designs to capture and reconstruct such signals. The key result is that for such signals, the Nyquist rate constraint can be imposed on the strobe rate rather than the sensor rate. The technique is based on intentional aliasing of the frequency components of the periodic signal while the reconstruction algorithm exploits recent advances in sparse representations and compressive sensing. We exploit the sparsity of periodic signals in the Fourier domain to develop reconstruction algorithms that are inspired by compressive sensing.
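
    The intentional-aliasing idea can be illustrated numerically (a toy uniform-strobe sketch, not the paper's coded-sequence reconstruction): sampling a fast periodic signal at a strobe rate slightly offset from its frequency folds it into a slow beat, so the captured sequence is indistinguishable from a slow signal sampled at the same rate.

    ```python
    import math

    def sample(f_signal, f_strobe, n):
        """Strobe a unit sinusoid of frequency f_signal (Hz): capture n
        samples at the strobe rate f_strobe (Hz)."""
        return [math.sin(2 * math.pi * f_signal * k / f_strobe)
                for k in range(n)]

    # A 1000 Hz vibration strobed at 999 Hz: since
    # sin(2*pi*1000*k/999) = sin(2*pi*k/999 + 2*pi*k), the captured
    # sequence equals a 1 Hz signal sampled at 999 Hz -- the alias.
    fast = sample(1000, 999, 50)
    slow = sample(1, 999, 50)
    print(max(abs(a - b) for a, b in zip(fast, slow)))  # ~0 (float rounding)
    ```

    This is the classical strobing beat; the paper's contribution is to replace the single strobe rate with coded exposure sequences so that the Nyquist constraint applies to the strobe rate rather than the sensor frame rate, with sparse Fourier recovery resolving which true frequency produced the observed alias.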

  2. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains video showing the results of hypervelocity impact. One video shows a projectile impact on a Kevlar-wrapped aluminum bottle containing 3000 psi gaseous oxygen. Another video shows animations of a two-stage light gas gun.

  4. High-speed video processing and display system

    NASA Astrophysics Data System (ADS)

    Dagtekin, Mustafa; DeMarco, Stephen C.; Ramanath, Rajeev; Snyder, Wesley E.

    2000-04-01

    A video processing and display system for performing high speed geometrical image transformations has been designed. It involves looking up the video image by using a pointer memory. The system supports any video format which does not exceed the clock rate that the system supports. It also is capable of changing the brightness and colormap of the image through hardware.
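
    The pointer-memory scheme described above can be sketched as follows (a minimal software model; the real system performs the lookup in hardware at pixel-clock rates, and the function names here are illustrative). The geometric transform is precomputed once into a table of input-pixel addresses, so applying it to each frame costs one memory lookup per output pixel:

    ```python
    def build_pointer_memory(width, height, transform):
        """Precompute, for each output pixel, the address of the input
        pixel to read.  `transform` maps (x_out, y_out) -> (x_in, y_in)."""
        ptr = []
        for y in range(height):
            for x in range(width):
                xi, yi = transform(x, y)
                # clamp to the frame so every pointer is a valid address
                xi = min(max(int(xi), 0), width - 1)
                yi = min(max(int(yi), 0), height - 1)
                ptr.append(yi * width + xi)
        return ptr

    def warp(frame_flat, ptr):
        """Apply the transform to one flattened frame: pure table lookup."""
        return [frame_flat[p] for p in ptr]

    # Example: horizontal mirror of a 4x1 "image"
    ptr = build_pointer_memory(4, 1, lambda x, y: (3 - x, y))
    print(warp([10, 20, 30, 40], ptr))   # [40, 30, 20, 10]
    ```

    Because the per-frame work is format-agnostic lookups, the same table services any video format whose pixel clock the hardware can sustain, matching the abstract's claim.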

  5. Omnifocus video camera.

    PubMed

    Iizuka, Keigo

    2011-04-01

    The omnifocus video camera takes videos in which objects at different distances are all in focus in a single video display. The omnifocus video camera consists of an array of color video cameras combined with a unique distance-mapping camera called the Divcam. The color video cameras are all aimed at the same scene, but each is focused at a different distance. The Divcam provides real-time distance information for every pixel in the scene. A pixel selection utility uses the distance information to select individual pixels from the multiple video outputs focused at different distances, in order to generate the final single video display that is everywhere in focus. This paper presents the principle of operation, design considerations, detailed construction, and overall performance of the omnifocus video camera. The major emphasis of the paper is the proof of concept, but the prototype has been developed enough to demonstrate the superiority of this video camera over a conventional video camera. The resolution of the prototype is high, capturing even fine details such as fingerprints in the image. Just as the movie camera was a significant advance over the still camera, the omnifocus video camera represents a significant advance over all-focus cameras for still images. © 2011 American Institute of Physics.
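
    The pixel-selection utility can be sketched as follows (an illustrative model, not the actual Divcam pipeline; the nearest-focus selection rule and all names are assumptions). For each pixel, pick the camera whose focus distance is closest to the measured depth:

    ```python
    def omnifocus(frames, focus_distances, depth_map):
        """Per-pixel selection for an omnifocus composite.

        frames: one frame per camera, each a flat list of pixels;
                frames[i] is focused at focus_distances[i] (meters).
        depth_map: per-pixel distance from the range sensor (meters).
        Returns a composite frame choosing, per pixel, the camera whose
        focus distance best matches the measured depth.
        """
        out = []
        for p, depth in enumerate(depth_map):
            best = min(range(len(frames)),
                       key=lambda i: abs(focus_distances[i] - depth))
            out.append(frames[best][p])
        return out

    # Two cameras focused at 1 m and 5 m; the left half of the scene is
    # near, the right half far, so the composite mixes both outputs.
    near = [11, 12, 13, 14]   # sharp for ~1 m objects
    far  = [21, 22, 23, 24]   # sharp for ~5 m objects
    print(omnifocus([near, far], [1.0, 5.0], [1.2, 0.9, 4.8, 6.0]))
    # -> [11, 12, 23, 24]
    ```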

  6. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm.

    PubMed

    Li, David D-U; Arlt, Jochen; Tyndall, David; Walker, Richard; Richardson, Justin; Stoppa, David; Charbon, Edoardo; Henderson, Robert K

    2011-09-01

    A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm complementary metal-oxide-semiconductor single-photon avalanche diode (SPAD) plus time-to-digital converter array. A simple look-up table is included to enhance the lifetime resolvability range and photon economics, making it comparable to the commonly used least-square method and maximum-likelihood estimation based software. To demonstrate its performance, a widefield microscope was adapted to accommodate the SPAD array and image different test samples. Fluorescence lifetime imaging microscopy on fluorescent beads in Rhodamine 6G at a frame rate of 50 fps is also shown.
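
    For an ideal mono-exponential decay measured over an effectively infinite window, the center-of-mass method (CMM) reduces to taking the mean photon arrival time, which equals the lifetime tau. A minimal sketch (ignoring the finite-window bias that the paper's look-up table corrects; the 2.5 ns lifetime is just an example value):

    ```python
    import random

    def cmm_lifetime(timestamps):
        """Center-of-mass lifetime estimate: for a mono-exponential decay
        with a long measurement window, the mean arrival time is tau."""
        return sum(timestamps) / len(timestamps)

    random.seed(1)
    tau = 2.5e-9   # example fluorophore lifetime, 2.5 ns
    # simulate 200,000 photon arrival times drawn from the decay
    arrivals = [random.expovariate(1.0 / tau) for _ in range(200000)]
    print(f"estimated tau = {cmm_lifetime(arrivals) * 1e9:.3f} ns")
    ```

    The appeal for hardware is clear from the formula: a running sum of timestamps and a photon count are all the FPGA needs to accumulate per pixel, which is what makes 50 fps lifetime imaging feasible.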

  7. Ultra high-speed near-infrared camera

    NASA Astrophysics Data System (ADS)

    Cromwell, Brian; Wilson, Robert; Johnson, Robert

    2005-05-01

    Indigo Operations, a division of FLIR Systems, Inc., has teamed with the Air Force Research Laboratory (AFRL) to develop an ultra-high-speed, sixteen-channel focal plane array camera that operates in the near-infrared (NIR) region of the electromagnetic spectrum. This science-grade camera can generate over 2,300 frames per second operating with a full-frame spatial resolution of 320 horizontal by 256 vertical pixels and over 22,000 frames per second when windowed to a spatial resolution of 64 by 64 pixels. The camera features FLIR's ISC0207 read-out integrated circuit, which provides unique functional modes for the research community such as pixel binning, wavefront sensing, zero dead-time, and external synchronization. The camera employs a standard Camera Link®-compatible interface for system control and image acquisition with FLIR's ThermaCAM® RTools™ software suite. A pre-production version of the high-speed camera will be integrated into the AFRL Directed Energy Directorate's Starfire Optical Range at Kirtland AFB, New Mexico, to be used for adaptive optics research.

  8. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (i.e., piezoelectric sensors or accelerometers) or non-contact sensors (i.e., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability with higher spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera was proposed. The system setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. A cantilever beam with an artificial crack was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work are discussed.

  9. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes in real time the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to provide serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
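
    Since the shadowgraph appears as a dip in the line-scan intensity profile, locating the shock amounts to finding the negative peak. A minimal software sketch (the real smart camera does this with analog circuitry and digital logic; the baseline estimate and contrast threshold here are assumptions):

    ```python
    def shock_location(profile, min_contrast=20):
        """Locate the shock shadowgraph in a linear pixel-intensity
        profile: report the index of the deepest point below the
        illumination baseline, or None if no dip is strong enough."""
        baseline = sum(profile) / len(profile)
        idx = min(range(len(profile)), key=lambda i: profile[i])
        if baseline - profile[idx] < min_contrast:
            return None          # no shadowgraph to report
        return idx

    # laser-sheet profile with a shadowgraph dip centered at pixel 6
    profile = [200, 201, 199, 200, 198, 150, 90, 140, 199, 200]
    print(shock_location(profile))   # -> 6
    ```

    Emitting just this index per line (rather than the whole image) is what lets the smart camera replace the earlier camera-plus-PC pipeline with a compact digital output.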

  10. In a Hurry to Work with High-Speed Video at School?

    ERIC Educational Resources Information Center

    Heck, Andre; Uylings, Peter

    2010-01-01

    In 2008, Casio Computer Co., Ltd., brought high-speed video to the consumer level with the release of the EXILIM Pro EX-F1 and EX-FH20 digital cameras. The EX-F1 point-and-shoot camera can shoot up to 60 six-megapixel photos per second and capture movies at up to 1200 frames per second. All this for a price of about US $1000 at the time of…

  12. High speed web printing inspection with multiple linear cameras

    NASA Astrophysics Data System (ADS)

    Shi, Hui; Yu, Wenyong

    2011-12-01

    Purpose: To detect defects arising during high-speed web printing, such as smudges, doctor streaks, pin holes, character misprints, foreign matter, hazing, and wrinkles, which are the main factors affecting the quality of printed matter. Methods: A novel machine vision system is used to detect the defects. This system combines distributed data processing with multiple linear cameras, an effective anti-blooming illumination design, and a fast image processing algorithm based on blob searching. Pattern matching adapted to paper tension and snaking movement is also emphasized. Results: Experimental results verify the speed, reliability and accuracy of the proposed system, by which most of the main defects are inspected in real time at a web speed of 300 m/min. Conclusions: High-speed quality inspection of large-size webs requires multiple linear cameras in a distributed data processing system. The material characteristics of the printed matter should also be considered when designing the optical structure, so that tiny web defects can be inspected with variable angles of illumination.

  13. Exploring the Sutherland High-speed Optical Cameras (SHOC)

    NASA Astrophysics Data System (ADS)

    Coppejans, Rocco; Gulbis, A. A. S.; Fourie, P.; Rust, M.; Sass, C.; Stoffels, J.; Whittal, H.; Cloete, J.

    2012-10-01

    Based on two existing instruments, POETS (Souza et al., 2006, PASP, 118, 1550) and MORIS (Gulbis et al. 2011, PASP, 123, 461), two new instruments, SHOC (the Sutherland High-speed Optical Cameras), have been developed for use on the South African Astronomical Observatory's (SAAO) 1.9 m, 1.0 m and 0.75 m telescopes at Sutherland, South Africa. Each SHOC system consists of a camera, GPS, control computer and peripherals. The primary components are two off-the-shelf Andor iXon X3 888 UVB cameras, each of which utilizes a 1024×1024, frame-transfer, thermoelectrically cooled, back-illuminated CCD. SHOC's most important feature is that it can achieve frame rates of between one and twenty frames per second during normal operation (dependent on binning and subframing) with nanosecond timing accuracy on each frame (achieved using frame-by-frame GPS triggering). Frame rates can be increased further, and fainter targets observed, by making use of the electron-multiplying (EM) modes. SHOC is therefore ideally suited to observing transiting exoplanets and stellar occultations by Kuiper Belt objects. For occultations, this advantage is further increased by Sutherland being one of a few observatories on the African continent operating 1 m class optical telescopes. Here, we present the instrument, measured characteristics (including signal-to-noise ratios (SNR) for conventional and EM modes as a function of stellar magnitude and exposure time), and SHOC's applications to planetary science. Attention is specifically given to recently completed characterization work in which the SNR parameter space was explored and a comparison made between the SNR obtained in the EM and conventional modes. This will not only enable observers to optimize the instrument settings for their observations but also clearly demonstrates the advantages and potential pitfalls of the EM modes.
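
    The EM-versus-conventional trade-off follows the standard EMCCD noise model (shown here as a generic sketch, not SHOC's measured curves): EM gain divides the effective read noise, but the multiplication register adds an excess noise factor F of about sqrt(2), so EM mode wins only for faint, read-noise-limited targets.

    ```python
    import math

    def snr_conventional(signal, read_noise, dark=0.0):
        """Shot-noise + read-noise SNR for a conventional CCD exposure
        (signal and dark in photoelectrons per pixel per frame)."""
        return signal / math.sqrt(signal + dark + read_noise ** 2)

    def snr_em(signal, read_noise, gain, dark=0.0, excess=math.sqrt(2)):
        """EM-mode SNR: the gain suppresses effective read noise to
        read_noise/gain, while the multiplication register multiplies the
        shot-noise variance by the excess noise factor squared (~2)."""
        return signal / math.sqrt(excess ** 2 * (signal + dark)
                                  + (read_noise / gain) ** 2)

    # faint source: EM wins; bright source: conventional wins
    for s in (5, 10000):   # photoelectrons per pixel
        print(s, snr_conventional(s, read_noise=10),
              snr_em(s, read_noise=10, gain=300))
    ```

    This crossover is precisely the "advantages and potential pitfalls of the EM modes" the characterization work maps out: below the crossover flux EM mode buys SNR, above it the excess noise factor costs a factor of up to sqrt(2).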

  14. High Speed Video Applications In The Pharmaceutical Industry

    NASA Astrophysics Data System (ADS)

    Stapley, David

    1985-02-01

    The pursuit of quality is essential in the development and production of drugs. The pursuit of excellence is relentless, a never-ending search. In the pharmaceutical industry, we all know and apply wide-ranging techniques to assure quality production. We all know that in reality none of these techniques is perfect for all situations. We have all experienced the damaged foil, blister or tube, the missing leaflet, the 'hard to read' batch code. We are all aware of the need to supplement the traditional techniques of fault finding. This paper shows how high-speed video systems can be applied to fully automated filling and packaging operations as a tool to aid the company's drive for high quality and productivity. The range of products involved totals some 350 in approximately 3,000 pack variants, encompassing creams, ointments, lotions, capsules, tablets, parenteral and sterile antibiotics. Pharmaceutical production demands diligence at all stages, with optimum use of the techniques offered by the latest technology. Figure 1 shows typical stages of pharmaceutical production in which quality must be assured, and highlights those stages where the use of high-speed video systems has proved of value to date. The use of high-speed video systems begins with the very first use of machine and materials: commissioning and validation (the term used for determining that a process is capable of consistently producing the requisite quality), and continues to support in-process monitoring throughout the life of the plant. The activity of validation in the packaging environment is particularly in need of a tool to see the nature of high-speed faults, no matter how infrequently they occur, so that informed changes can be made precisely and rapidly. The prime use of this tool is to ensure that machines are less sensitive to minor variations in component characteristics.

  15. Camera network video summarization

    NASA Astrophysics Data System (ADS)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative-selection approach via joint embedding and capped l2,1-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
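
    As a point of contrast with the paper's joint-embedding optimization, the goal of picking a small, diverse set of representative frames can be illustrated with a simple greedy farthest-point baseline (purely illustrative; this is not the capped l2,1 method, and the feature vectors are toy data):

    ```python
    def farthest_point_representatives(features, k):
        """Baseline representative selection: greedily pick k frames so
        each new pick is farthest (in feature space) from those already
        chosen, yielding a diverse summary."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        chosen = [0]  # seed with the first frame
        while len(chosen) < k:
            nxt = max((i for i in range(len(features)) if i not in chosen),
                      key=lambda i: min(dist(features[i], features[j])
                                        for j in chosen))
            chosen.append(nxt)
        return sorted(chosen)

    # toy frame features forming three clusters of distinct activity
    feats = [[0, 0], [0.1, 0], [5, 5], [5, 5.1], [10, 0], [10.2, 0]]
    print(farthest_point_representatives(feats, 3))   # one frame per cluster
    ```

    A greedy baseline like this maximizes diversity but has no notion of outlier suppression; the capped norm in the paper's objective is what keeps a single anomalous frame from being mistaken for a representative.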

  16. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    ERIC Educational Resources Information Center

    Lee, Victor R.

    2015-01-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…

  18. High-speed optical shutter coupled to fast-readout CCD camera

    NASA Astrophysics Data System (ADS)

    Yates, George J.; Pena, Claudine R.; McDonald, Thomas E., Jr.; Gallegos, Robert A.; Numkena, Dustin M.; Turko, Bojan T.; Ziska, George; Millaud, Jacques E.; Diaz, Rick; Buckley, John; Anthony, Glen; Araki, Takae; Larson, Eric D.

    1999-04-01

    A high-frame-rate optically shuttered CCD camera for radiometric imaging of transient optical phenomena has been designed, and several prototypes fabricated, which are now in the evaluation phase. The camera design incorporates stripline-geometry image intensifiers as ultrafast image shutters capable of 200 ps exposures. The intensifiers are fiber-optically coupled to a multiport CCD capable of 75 MHz pixel clocking, achieving a 4 kHz frame rate for 512×512 pixels from simultaneous readout of 16 individual segments of the CCD array. The intensifier, a Philips XX1412MH/E03, is generically a Generation II proximity-focused microchannel-plate intensifier (MCPII) redesigned for high-speed gating by Los Alamos National Laboratory and manufactured by Philips Components. The CCD is a Reticon HSO512 split-storage device with a bi-directional vertical readout architecture. The camera mainframe is designed using a multilayer motherboard that transports CCD video signals and clocks via embedded stripline buses designed for 100 MHz operation. The MCPII gate duration and gain variables are controlled and measured in real time and updated for data logging each frame, with 10-bit resolution, selectable either locally or by computer. The camera provides both analog and 10-bit digital video. The camera's architecture, salient design characteristics, and current test data depicting resolution, dynamic range, shutter sequences, and image reconstruction are presented and discussed.

  19. Upward lightning flashes characteristics from high-speed videos

    NASA Astrophysics Data System (ADS)

    Saba, Marcelo M. F.; Schumann, Carina; Warner, Tom A.; Ferro, Marco Antonio S.; Paiva, Amanda Romão.; Helsdon, John; Orville, Richard E.

    2016-07-01

    One hundred high-speed video recordings (72 cases in Brazil and 28 cases in the USA) of negative upward lightning flashes were analyzed. All upward flashes were triggered by another discharge, most of them positive CG flashes. A negative leader passing over the tower(s) was frequently seen in the high-speed video recordings before the initiation of the upward leader. One triggering component can sometimes initiate upward leaders from several towers. Characteristics of leader branching, ICC pulses, recoil leader incidence, and interpulse intervals are presented in this work, and the results obtained in Brazil and the USA are compared. The ICC duration and the total flash duration are, on average, longer in Brazil than in the USA. Only one fourth of all upward leaders are followed by return strokes, in both Brazil and the USA, and the average number of return strokes following each upward leader is very low. The occurrence and duration of continuing current following return strokes are more than two times greater in Brazil than in the USA. Several parameters of upward flashes were compared with similar ones from cloud-to-ground flashes.

  20. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of a video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his own world, alone. Functionally non-verbal and with motor skill problems that affected his…

  1. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolution, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). These desirable characteristics are attained at relatively low cost, largely by using digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  2. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by distances of, on average, 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to view a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera, version 9.1, set to operate at a frame rate of 2,500 frames per second (with a Nikkor AF-S DX 18-55 mm 1:3.5-5.6 G lens in the stationary sensors and an AF-S ED 24 mm 1:1.4 lens in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four days of manual RAMMER operation during the 2012 and 2013 campaigns. On Feb. 18th the data set comprises 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can introduce imprecision into the optical analysis; therefore this work aims to evaluate the effect of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  3. Synthetic streak images (x-t diagrams) from high-speed digital video records

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2013-11-01

    Modern digital video cameras have entirely replaced the older photographic drum and rotating-mirror cameras for recording high-speed physics phenomena. They are superior in almost every regard, except that at speeds approaching one million frames/s, sensor segmentation severely reduces frame size, especially height. However, if the principal direction of subject motion is arranged to be along the frame length, a simple Matlab code can extract a row of pixels from each frame and stack the rows to produce a pseudo-streak image or x-t diagram. Such a 2-D image can convey the essence of the large volume of information contained in a high-speed video sequence and can serve as the basis for extracting quantitative velocity data. Examples include streak shadowgrams of explosions and gunshots, streak schlieren images of supersonic cavity-flow oscillations, and direct streak images of shock-wave motion in polyurea samples struck by gas-gun projectiles, from which the shock Hugoniot curve of the polymer is measured. This approach is especially useful, since commercial streak cameras remain very expensive and rooted in 20th-century technology.
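The row-stacking operation described in the abstract is a few lines of array code; a minimal NumPy sketch of the idea (the original used Matlab), with a synthetic moving feature whose slope in the x-t diagram gives its velocity:

```python
import numpy as np

def streak_image(frames, row):
    """Build a pseudo-streak (x-t) image by extracting one pixel row from
    each frame of a high-speed sequence and stacking the rows in time
    order. `frames` is an iterable of 2-D arrays; `row` is the y-index of
    the row lying along the principal direction of motion."""
    return np.stack([f[row, :] for f in frames], axis=0)  # shape (t, x)

# Synthetic example: a bright feature moving 2 px/frame along x
frames = [np.zeros((8, 64)) for _ in range(16)]
for t, f in enumerate(frames):
    f[4, 2 * t] = 1.0

xt = streak_image(frames, row=4)
# The feature traces a straight line in the x-t diagram; its slope
# (pixels per frame) is the velocity in image coordinates.
print(xt.shape)  # (16, 64)
```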

  4. Development of a dynamic radiographic capability using high-speed video

    SciTech Connect

    Bryant, L.E. Jr.

    1984-01-01

    High-speed video equipment can optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to record radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging at up to 2,000 full frames per second. The technique has been demonstrated using conventional industrial x-ray sources such as 150 kV and 300 kV constant-potential x-ray generators, 2.5 MeV Van de Graaff accelerators, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt-60 source. Use of a maximum-aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier's input and output fluors decay rapidly enough to allow high-frame-rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high-speed imaging method. Video recordings show several demonstrations of this technique, with the played-back x-ray images slowed down by up to 100 times compared with the actual event speed. Typical applications include boiling-type action of liquids in metal containers; compressor operation, with visualization of crankshaft, connecting-rod, and piston movement; and thermal battery operation. An interesting aspect of this technique combines the optical and x-ray capabilities to observe an object or event with both external and internal detail, with one camera in a visual mode and the other in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  5. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing, with wide applications in medicine, satellite imaging, and 3D television. Such stereo content can be generated directly using S3D cameras; however, this approach requires an expensive setup, so converting monoscopic content to S3D becomes a viable alternative. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The code can be applied to arbitrary videos without prior database training. It neither suffers the limitations of a single monocular depth cue nor combines multiple depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high speed. It is an automatic conversion scheme and hence produces the 3D video output directly, without human intervention; with the above-mentioned features it becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
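The depth cue named in the abstract, the y-coordinate of an object's bottom-most pixel, can be sketched in a few lines. This is a hedged illustration of the single-cue idea only (the function name and the single-object simplification are mine, not the authors'):

```python
import numpy as np

def depth_from_ground_contact(mask):
    """Assign a depth value to one foreground object from the y-coordinate
    of its bottom-most pixel: an object whose base sits lower in the frame
    is taken to be nearer the camera. A sketch of the single-cue heuristic
    described in the abstract, not the authors' exact algorithm."""
    h, _ = mask.shape
    depth = np.zeros(mask.shape)
    ys, _ = np.nonzero(mask)
    if ys.size:
        depth[mask] = ys.max() / (h - 1)   # 1.0 = image bottom = nearest
    return depth

# A 10x10 frame with one foreground object whose base touches row 8
mask = np.zeros((10, 10), dtype=bool)
mask[5:9, 3:6] = True
depth = depth_from_ground_contact(mask)
```

In a full pipeline, a per-object depth map like this would drive a horizontal parallax shift to synthesize the second stereo view.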

  6. Design and application of a digital array high-speed camera system

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Yao, Xuefeng; Ma, Yinji; Yuan, Yanan

    2016-03-01

    In this paper, a digital-array high-speed camera system is designed and applied to a dynamic fracture experiment. First, the design scheme for a 3 × 3 array digital high-speed camera system is presented, including a 3 × 3 array light-emitting diode (LED) light source unit, a 3 × 3 array charge-coupled device (CCD) camera unit, a timing-delay control unit, an optical imaging unit, and an impact loading unit. Second, the influence of geometric optical parameters on optical parallax is analyzed based on the geometric optical imaging mechanism. Finally, combining the method of dynamic caustics with the digital high-speed camera system, the dynamic fracture behavior of crack initiation and propagation in a PMMA specimen under low-speed impact is investigated to verify the feasibility of the high-speed camera system.

  7. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the camera's electronic shutter range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort: data were needed to help understand how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2.

  8. High-speed framing camera with an ellipsoidal scanner.

    PubMed

    Belinsky, A V; Plokhov, A V

    1995-01-01

    A new type of rotating-mirror framing-camera optical system is proposed. A study is reported of the feasibility of using an aspherical mirror, with its surface in the shape of a prolate ellipsoid of revolution, in the scanning system of the camera. The parameters of the optical system are optimized starting from the aberration-minimization conditions. An aspherical mirror of this kind not only performs the scanning function but also acts as a condenser, thus greatly simplifying construction of the camera.

  9. The Calibration of High-Speed Camera Imaging System for ELMs Observation on EAST Tokamak

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Zhong, Fangchuan; Hu, Liqun; Yang, Jianhua; Yang, Zhendong; Gan, Kaifu; Zhang, Bin; East Team

    2016-09-01

    A tangential fast visible camera has been set up in the EAST tokamak for the study of edge MHD instabilities such as ELMs. To extract 3-D information from the CCD images, Tsai's two-stage technique was utilized to calibrate the high-speed camera imaging system for ELM studies. By using tiles of the passive stabilizers in the tokamak device as the calibration pattern, the transformation parameters from a 3-D world coordinate system to a 2-D image coordinate system were obtained, including the rotation matrix, the translation vector, the focal length, and the lens distortion. The calibration errors were estimated, and the results indicate the reliability of the method for this camera imaging system. Through the calibration, information about ELM filaments, such as their positions and velocities, was obtained from images of H-mode CCD videos. Supported by the National Natural Science Foundation of China (No. 11275047) and the National Magnetic Confinement Fusion Science Program of China (No. 2013GB102000).
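The calibration parameters named above (rotation matrix, translation vector, focal length, radial distortion) define a forward projection from world to image coordinates. A much-simplified sketch of such a pinhole-plus-radial-distortion projection, with illustrative parameter values (Tsai's actual two-stage solution recovers these parameters from the tile pattern and applies distortion in sensor coordinates, which this sketch does not reproduce):

```python
import numpy as np

def project(Xw, R, T, f, k1):
    """Project a 3-D world point to 2-D image-plane coordinates through a
    rotation R, translation T, focal length f, and one radial distortion
    coefficient k1. A simplified pinhole model for illustration only."""
    Xc = R @ Xw + T                               # world -> camera frame
    x, y = f * Xc[0] / Xc[2], f * Xc[1] / Xc[2]   # perspective division
    d = 1 + k1 * (x * x + y * y)                  # radial distortion
    return np.array([x * d, y * d])

R = np.eye(3)                       # hypothetical calibration result
T = np.array([0.0, 0.0, 2.0])       # camera 2 m from the world origin
uv = project(np.array([0.1, 0.0, 0.0]), R, T, f=0.05, k1=0.0)
# A 0.1 m lateral offset at 2 m range images at 2.5 mm with a 50 mm lens
print(uv)
```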

  10. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far-infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array that exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec; it consists of a focal-plane dewar, a compact camera-head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal-plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  11. Development of High Speed Digital Camera: EXILIM EX-F1

    NASA Astrophysics Data System (ADS)

    Nojima, Osamu

    The EX-F1 is a high-speed digital camera featuring a revolutionary improvement in burst shooting speed that is expected to create entirely new markets. The model incorporates a high-speed CMOS sensor and a high-speed LSI processor. With it, CASIO has achieved an ultra-high-speed burst rate of 60 frames per second (fps) for still images, together with 1,200 fps high-speed movie recording that captures movements that cannot even be seen by the human eye. Moreover, the model can record movies in full high definition. After its launch, it received wide acclaim as an innovative camera. We introduce the concept, features, and technologies of the EX-F1.

  12. Real-time Simultaneous DKG and 2D DKG Using High-speed Digital Camera.

    PubMed

    Kang, Duck-Hoon; Wang, Soo-Geun; Park, Hee-June; Lee, Jin-Choon; Jeon, Gye-Rok; Choi, Ill-Sang; Kim, Seon-Jong; Shin, Bum-Joo

    2017-03-01

    For the evaluation of voice disorders, direct observation of vocal cord vibration is important. Among the various methods, laryngeal videostroboscopy (LVS) is widely used, but it does not yield a true image of a single cycle because it assembles images from different cycles. In contrast, high-speed videoendoscopy and videokymography have much higher frame rates and can assess functional and mobility disorders. The purpose of this study is to describe a real-time, simultaneous digital kymography (DKG), two-dimensional scanning (2D) DKG, and multi-frame (MF) LVS system using a high-speed digital camera, and to establish the efficacy of this system in evaluating vibratory patterns of the pathologic voice. The pattern of vocal fold vibration was evaluated in a vocally healthy subject and in subjects with vocal polyp, vocal nodules, vocal cord scar, and vocal cord paralysis. We used both quantitative (left-right phase symmetry, amplitude symmetry index) and qualitative (anterior-posterior phase symmetry) parameters for assessment of vocal fold vibration. Our system could record videos within seconds and required relatively little memory. The replay speed of the DKG, 2D DKG, MF LVS, and high-speed videoendoscopy was controllable. The number of frames per cycle with MF LVS was almost the same as the fundamental frequency. Our system can provide images of various modalities simultaneously in real time and analyze morphological and functional vibratory patterns, and may thus provide a greater level of information for the diagnosis and treatment of vibratory disorders.

  13. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance between cameras was 13 kilometers. Each RAMMER sensor position was chosen so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparison of events between cameras and with the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom high-speed camera (version 9.1), and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
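The geometric core of triangulation from two cameras with known positions can be sketched as a ray intersection. This is a simplified 2-D illustration under assumed bearings, not the authors' procedure (which works in 3-D from calibrated video frames):

```python
import numpy as np

def triangulate_2d(p1, az1, p2, az2):
    """Intersect two bearing rays in the horizontal plane.
    p1, p2: camera positions (x, y) in metres; az1, az2: azimuths in
    radians toward the lightning channel, measured from the +x axis."""
    d1 = np.array([np.cos(az1), np.sin(az1)])
    d2 = np.array([np.cos(az2), np.sin(az2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2)
    A = np.column_stack([d1, -d2])
    t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t1 * d1

# Two cameras 13 km apart sighting the same flash at 45 and 135 degrees
hit = triangulate_2d((0, 0), np.deg2rad(45), (13000, 0), np.deg2rad(135))
print(hit)  # ~[6500, 6500] m: 6.5 km from the baseline midpoint
```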

  14. Investigation of fast nonstationary events with high-speed cinematography with a new drum camera

    NASA Astrophysics Data System (ADS)

    Eisfeld, Fritz

    1993-01-01

    For investigations of fast nonstationary events, e.g., flows, injection jets, etc., high-speed cinematography is particularly suitable, but 3-dimensional motions present difficulties. The paper first deals with the problems and objectives that arose during the development of a new drum camera that is also suitable for high-speed holography. The result is a drum camera for up to 200,000 f/s, also usable with holograms. The camera setup and first test results are described, and possibilities for further development are shown.

  15. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue- and red-channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red- and blue-channel sub-images using a simple but effective color-crosstalk correction method. These separated sub-images are then processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurement with a single high-speed camera, without sacrificing spatial resolution. Two experiments, shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
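One simple way to model the color crosstalk mentioned above is as a linear mixing of the two optical paths into the red and blue channels, undone by inverting the mixing matrix. This is a hedged sketch of that linear model (function names and the 5% leakage figure are illustrative; the paper's correction method may differ):

```python
import numpy as np

def unmix_channels(red, blue, crosstalk):
    """Undo linear crosstalk between the red and blue channels of a color
    camera recording two optical paths (one per channel). `crosstalk` is
    the assumed 2x2 mixing matrix M with [red; blue] = M @ [path1; path2]."""
    Minv = np.linalg.inv(crosstalk)
    stack = np.stack([red, blue])                # shape (2, H, W)
    unmixed = np.tensordot(Minv, stack, axes=1)  # apply M^-1 per pixel
    return unmixed[0], unmixed[1]

# Illustrative model: 5% of each path leaks into the other channel
M = np.array([[1.0, 0.05],
              [0.05, 1.0]])
p1 = np.full((4, 4), 0.8)    # true path-1 (e.g. red-channel) image
p2 = np.full((4, 4), 0.2)    # true path-2 (e.g. blue-channel) image
red = M[0, 0] * p1 + M[0, 1] * p2
blue = M[1, 0] * p1 + M[1, 1] * p2

r1, r2 = unmix_channels(red, blue, M)
print(np.allclose(r1, p1), np.allclose(r2, p2))  # True True
```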

  16. Bullet Retarding Forces in Ballistic Gelatin by Analysis of High Speed Video

    DTIC Science & Technology

    2012-12-28

    analysis principles are the same as in most uses of video for kinematic analysis. The position of the object of interest (bullet) is determined...details that can be employed for kinematic analysis of bullets penetrating ballistic gelatin when captured on high-speed video with a suitable frame...then there is a dm/dt term that needs to be estimated. High-speed video kinematics is much simpler for bullets which do not fragment. If a is in ft
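The kinematic chain implied by this record (frame-by-frame position → velocity → deceleration → force) can be sketched for the simple non-fragmenting case, where constant mass removes the dm/dt term. A hedged illustration with synthetic data, not the report's actual analysis:

```python
import numpy as np

def retarding_force(x_m, fps, mass_kg):
    """Estimate the retarding force on a non-fragmenting bullet from its
    frame-by-frame position in ballistic gelatin: finite differences give
    velocity and acceleration, and F = m*a (constant mass, no dm/dt term).
    x_m: penetration depth in metres, one sample per video frame."""
    dt = 1.0 / fps
    v = np.gradient(x_m, dt)   # velocity, m/s
    a = np.gradient(v, dt)     # acceleration, m/s^2
    return -mass_kg * a        # retarding force, N (positive = drag)

# Synthetic track: 300 m/s entry, constant 1e6 m/s^2 deceleration,
# imaged at 20,000 fps with an (illustrative) 8 g bullet
fps, m = 20000, 0.008
t = np.arange(10) / fps
x = 300 * t - 0.5 * 1e6 * t**2

F = retarding_force(x, fps, m)
print(round(F[4]))  # 8000 N at an interior sample (edges are less accurate)
```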

  17. Color high-speed video stroboscope system for inspection of human larynx

    NASA Astrophysics Data System (ADS)

    Stasicki, Boleslaw; Meier, G. E. A.

    2001-04-01

    Videostroboscopy of the larynx has become a powerful tool for the study of vocal physiology; for the assessment of fold abnormalities, motion impairments, and functional disorders; and for the early diagnosis of diseases like cancer and pathologies like nodules, carcinoma, polyps, and cysts. Since the vocal folds vibrate in the range of 100 Hz up to 1 kHz, the video stroboscope allows physicians to find otherwise undetectable problems. Color information is essential to the physician in diagnosing, e.g., early-stage cancer. A previously presented general-purpose monochrome high-speed video stroboscope has also been tested for inspection of the human larynx. Good results encouraged the authors to develop a medical color version. In contrast to conventional stroboscopes, the system does not use pulsed light for object illumination. Instead, a special asynchronously shuttered video camera, triggered by the oscillating object, is used. The apparatus, which includes a specially developed digital phase shifter, provides stop-phase and slow-motion observation in real time, with simultaneous recording of the periodically moving object. The selected position of the vocal folds, or their apparent slowed-down vibration speed, does not depend on changes in voice pitch. Sequences of hundreds of high-resolution color frames can be stored on hard disk in standard graphic formats. Afterwards they can be played back frame by frame or as a video clip, evaluated, exported, printed, and transmitted via computer networks.

  18. High-speed video imaging and digital analysis of microscopic features in contracting striated muscle cells

    NASA Astrophysics Data System (ADS)

    Roos, Kenneth P.; Taylor, Stuart R.

    1993-02-01

    The rapid motion of microscopic features, such as the cross striations of single contracting muscle cells, is difficult to capture with conventional optical microscopes, video systems, and image-processing approaches. An integrated digital video imaging microscope system specifically designed to capture images from single contracting muscle cells at speeds of up to 240 Hz, and to analyze those images to extract features critical to the understanding of muscle contraction, is described. The system consists of a brightfield microscope with immersion optics coupled to a high-speed charge-coupled device (CCD) video camera, super-VHS (S-VHS) and optical media disk video recording (OMDR) systems, and a semiautomated digital image analysis system. Components are modified to optimize spatial and temporal resolution, permitting the evaluation of submicrometer features in real physiological time. This approach permits critical evaluation of the magnitude, time course, and uniformity of contractile function throughout the volume of a single living cell, with higher temporal and spatial resolution than previously possible.
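A common way to quantify cross-striation spacing from such images is the dominant spatial-frequency peak of an intensity profile taken along the cell's long axis. A generic FFT-based sketch of that idea (not necessarily the feature-extraction method of the system described above; the sampling values are illustrative):

```python
import numpy as np

def striation_period(profile, px_um):
    """Estimate the dominant striation spacing (e.g. sarcomere length)
    from a 1-D intensity profile along a muscle cell's long axis, via the
    peak of the FFT magnitude. px_um: microns per pixel."""
    p = profile - profile.mean()           # remove the DC component
    spec = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(len(p), d=px_um)   # cycles per micron
    k = np.argmax(spec[1:]) + 1                # skip the zero-frequency bin
    return 1.0 / freqs[k]                      # period in microns

# Synthetic profile: 2.0 um striation period sampled at 0.1 um/pixel
x = np.arange(500) * 0.1
profile = 1 + 0.5 * np.cos(2 * np.pi * x / 2.0)
print(round(striation_period(profile, 0.1), 2))  # 2.0
```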

  19. High Speed Intensified Video Observations of TLEs in Support of PhOCAL

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.; Nelson, Thomas E.; Cummer, Steven A.; Lang, Timothy; Miller, Steven; Beavis, Nick; Yue, Jia; Samaras, Tim; Warner, Tom A.

    2013-01-01

    The third observing season of PhOCAL (Physical Origins of Coupling to the upper Atmosphere by Lightning) was conducted over the U.S. High Plains during the late spring and summer of 2013. The goal was to capture, using an intensified high-speed camera, a transient luminous event (TLE), especially a sprite, as well as its parent cloud-to-ground (SP+CG) lightning discharge, preferably within the domain of a 3-D lightning mapping array (LMA). The co-capture of a sprite and its SP+CG was achieved within useful range of an interferometer operating near Rapid City. Other high-speed sprite video sequences were captured above the West Texas LMA. On several occasions the large mesoscale convective complexes (MCSs) producing the TLE-class lightning were also generating vertically propagating convectively generated gravity waves (CGGWs) at the mesopause, which were easily visible using NIR-sensitive color cameras. These were captured concurrently with sprites. These observations were follow-ons to a case on 15 April 2012 in which CGGWs were also imaged by the new Day/Night Band on the Suomi NPP satellite system. The relationship between CGGWs and sprite initiation is being investigated. The past year was notable for a large number of elve+halo+sprite sequences generated by the same parent CG. On several occasions there appeared to be prominent banded modulations of the elves' luminosity imaged at >3000 ips; these stripes appear coincident with the banded CGGW structure and, presumably, its density variations. Several elves and a sprite from negative CGs were also noted. New color imaging systems have been tested and found capable of capturing sprites. Two cases of sprites with an aurora as a backdrop were also recorded. High-speed imaging was also provided in support of the UPLIGHTS program near Rapid City, SD, and the USAFA SPRITES II airborne campaign over the Great Plains.

  20. High Speed Video Observations of Natural Lightning and Their Implications to Fractal Description of Lightning

    NASA Astrophysics Data System (ADS)

    Liu, N.; Tilles, J.; Boggs, L.; Bozarth, A.; Rassoul, H.; Riousset, J. A.

    2016-12-01

    Recent high-speed video observations of triggered and natural lightning flashes have significantly advanced our understanding of lightning initiation and propagation. For example, they have helped resolve the initiation of lightning leaders [Stolzenburg et al., JGR, 119, 12198, 2014; Montanyà et al., Sci. Rep., 5, 15180, 2015], the stepping of negative leaders [Hill et al., JGR, 116, D16117, 2011], the structure of the streamer zone around the leader [Gamerota et al., GRL, 42, 1977, 2015], and transient rebrightening processes occurring during leader propagation [Stolzenburg et al., JGR, 120, 3408, 2015]. We started an observational campaign in the summer of 2016 to study lightning using a Phantom high-speed camera on the campus of the Florida Institute of Technology, Melbourne, FL. A few interesting natural cloud-to-ground and intracloud lightning discharges have been recorded, including a couple of 8-9 stroke flashes, high peak-current flashes, and upward-propagating return stroke waves from ground to cloud. The videos show that the propagation of the downward leaders of cloud-to-ground lightning discharges is very complex, particularly for the high-peak-current flashes. They tend to develop as multiple branches, each of which splits repeatedly. In some cases, the propagation characteristics of the leader, such as its speed, are subject to sudden changes. In this talk, we present several selected cases to show the complexity of leader propagation. One effective approach to characterizing the structure and propagation of lightning leaders is the fractal description [Mansell et al., JGR, 107, 4075, 2002; Riousset et al., JGR, 112, D15203, 2007; Riousset et al., JGR, 115, A00E10, 2010]. We also present a detailed analysis of the high-speed images from our observations and formulate useful constraints on the fractal description. Finally, we compare the obtained results with fractal simulations conducted using the model reported in [Riousset et al., 2007
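One concrete quantity that can constrain a fractal description of leader channels is a box-counting dimension estimated from binarized video frames. A generic estimator is sketched below as an illustration of the idea, not the authors' analysis; the sanity check uses a straight line, which should give a dimension near 1:

```python
import numpy as np

def box_count_dim(img, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary image: count occupied
    boxes N(s) at several box sizes s and fit log N(s) = -D log s + c."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        boxed = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxed.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a diagonal line (a 1-D object) should give D close to 1
img = np.zeros((64, 64), dtype=bool)
img[np.arange(64), np.arange(64)] = True
print(round(box_count_dim(img), 2))  # 1.0
```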

  1. High-Speed Video Analysis in a Conceptual Physics Class

    ERIC Educational Resources Information Center

    Desbien, Dwain M.

    2011-01-01

    The use of probe ware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  3. High-speed digital video imaging system to record cardiac action potentials

    NASA Astrophysics Data System (ADS)

    Mishima, Akira; Arafune, Tatsuhiko; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi; Shibata, Nitaro; Honjo, Haruo; Kodama, Itsuo

    2001-01-01

    A new digital video imaging system was developed and its performance evaluated for analyzing spiral wave dynamics during polymorphic ventricular tachycardia (PVT) with high spatio-temporal resolution (1 ms, 0.1 mm). The epicardial surface of an isolated rabbit heart stained with di-4-ANEPPS was illuminated by 72 high-power bluish-green light-emitting diodes (BGLED: λ0 = 500 nm, 10 mW). The emitted fluorescence image (256x256 pixels), passing through a long-pass filter (λc = 660 nm), was monitored by a high-speed digital video camera recorder (FASTCAM-Ultima-UV3, Photron) at 1125 fps. The data stored in DRAM were processed by PC for background subtraction. 2D images of the excitation wave and single-pixel action potentials at target sites during PVT induced by DC shocks (S2: 10 ms, 20 V) were displayed for 4.5 s. The waveform quality is high enough to observe the phase-0 upstroke and to identify repolarization timing. Membrane potentials at the center of the spiral were characterized by double-peak or oscillatory depolarization. Singular points during PVT were obtained from isophase mapping. Our new digital video-BGLED system has an advantage over previous ones for more accurate and longer-duration action potential analysis during spiral wave reentry.
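
The abstract does not detail how the isophase maps are built; a common approach (an assumption here, not the authors' stated method) assigns each pixel an instantaneous phase via the analytic signal, with phase singularities appearing where all phase values meet. A numpy-only sketch for one pixel's trace, using a synthetic periodic signal in place of a real optical action potential:

```python
import numpy as np

fs = 1125.0                            # frame rate quoted in the abstract
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic 8 Hz periodic activity standing in for one pixel's signal
v = np.cos(2 * np.pi * 8.0 * t)

# Analytic signal via FFT (equivalent to scipy.signal.hilbert)
n = v.size
spec = np.fft.fft(v - v.mean())
h = np.zeros(n)
h[0] = 1.0
h[1:n // 2] = 2.0
h[n // 2] = 1.0                        # n is even here (2250 samples)
phase = np.angle(np.fft.ifft(spec * h))  # instantaneous phase per frame

# Mean frequency recovered from the unwrapped phase (sanity check)
f_est = (np.unwrap(phase)[-1] - np.unwrap(phase)[0]) / (2 * np.pi * (t[-1] - t[0]))
```

Doing this per pixel yields a phase map per frame; the spiral tip is the singularity around which the phase winds by 2π.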

  4. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…
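
The ERIC abstract is truncated, but the core calculation in camera-based speed-of-sound experiments is simple frame counting. A sketch with hypothetical numbers (the distance, frame rate, and frame count below are assumptions for illustration, not values from the paper):

```python
# Hypothetical setup: a visible "bang" event and a visible indication of
# sound arrival 17.0 m away, filmed at 420 frames per second.
fps = 420.0               # camera frame rate
frames_elapsed = 21       # frames between the event and the sound arrival
distance_m = 17.0         # source-to-detector distance

t = frames_elapsed / fps            # elapsed time, s
v_sound = distance_m / t            # speed of sound, m/s (~340 here)
```

With these numbers the travel time is 0.05 s, so a one-frame counting error changes the result by roughly 5%, which sets the distances and frame rates a student needs.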

  6. The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop

    ERIC Educational Resources Information Center

    Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye

    2012-01-01

    We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…

  7. Digital synchroballistic schlieren camera for high-speed photography of bullets and rocket sleds

    NASA Astrophysics Data System (ADS)

    Buckner, Benjamin D.; L'Esperance, Drew

    2013-08-01

    A high-speed digital streak camera designed for simultaneous high-resolution color photography and focusing schlieren imaging is described. The camera uses a computer-controlled galvanometer scanner to achieve synchroballistic imaging through a narrow slit. Full color 20 megapixel images of a rocket sled moving at 480 m/s and of projectiles fired at around 400 m/s were captured, with high-resolution schlieren imaging in the latter cases, using conventional photographic flash illumination. The streak camera can achieve a line rate for streak imaging of up to 2.4 million lines/s.

  8. Perfect Optical Compensator With 1:1 Shutter Ratio Used For High Speed Camera

    NASA Astrophysics Data System (ADS)

    Zhihong, Rong

    1983-03-01

    An optical compensator used for high-speed cameras is described, covering the method of compensation, an analysis of the imaging quality, and experimental results. The compensator consists of pairs of parallel mirrors. It can perform perfect compensation even at a 1:1 shutter ratio. Using this compensator, a high-speed camera can be operated with no shutter and can obtain the same image sharpness as an intermittent camera. The advantages of this compensator are summarized as follows: (1) while compensating, the aberration correction of the objective is not degraded; (2) there is no displacement or defocusing between the scanning image and the film at the frame center during compensation, and increasing the exposure angle does not reduce the resolving power; (3) the compensator can also be used in a projector, in place of the intermittent mechanism, to achieve continuous (non-intermittent) projection without a shutter.

  9. Using High-Speed Video Images to Obtain Peak Current Estimates from Natural CG Flashes

    NASA Astrophysics Data System (ADS)

    Saraiva, A. C. V.; Antunes, L.; Campos, L. Z. D. S.; Pinto, O., Jr.

    2016-12-01

    Using high-speed camera data from the RAMMER network, we were able to estimate image-based peak currents for some strokes within selected CG flashes. The goal of this work was to establish a relationship between the luminosity of the bottom portion of the channel and peak current data from the BrasilDAT lightning detection network. The high-speed camera dataset was obtained during the 2013 campaign in São José dos Campos, Brazil. We chose videos containing flashes with multiplicities greater than 10, with a visible channel and unsaturated pixels at the return stroke moment. Although new channels appeared in most of the cases, we required at least 10 strokes hitting the same ground contact point. We ended up with five flashes and 90 individual strokes that met the conditions established for this initial investigation. All five flashes also had at least five strokes detected by BrasilDAT, which allowed us to test our methodology. For all flashes we also acquired the waveforms recorded by individual nearby sensors. The peak current estimates of BrasilDAT were recently compared with older LLS networks, with positive results. Also, the waveforms recorded by BrasilDAT individual sensors proved useful in determining some characteristics of bipolar flashes recorded in the same region. The initial results indicate a positive linear trend between the integrated luminosity of the return strokes and the absolute value of the peak current (and also the peak E-field), with R2 > 0.78 for three cases and 0.67 for another. There was only one case in which we found no correlation at all; since those return stroke intensities were close to the saturation of the CMOS sensor, saturation may have been the cause. For each flash, a different linear fit was found. This was somewhat expected, since the flashes had their strokes connected to the ground at different distances from the camera, and rain can also interfere with the measurements.
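
The per-flash linear fits and R2 values described above amount to ordinary least squares on luminosity vs. |peak current|. A minimal sketch with invented placeholder data (the numbers are not from the paper):

```python
import numpy as np

# Hypothetical data: integrated channel luminosity (arbitrary units) and
# |peak current| (kA) reported by a lightning location system.
lum = np.array([1.2, 2.0, 2.9, 4.1, 5.0, 6.2])
ipk = np.array([8.0, 13.5, 19.0, 27.5, 33.0, 40.5])

# Least-squares linear fit and coefficient of determination R^2
slope, intercept = np.polyfit(lum, ipk, 1)
pred = slope * lum + intercept
ss_res = np.sum((ipk - pred) ** 2)
ss_tot = np.sum((ipk - ipk.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Fitting each flash separately, as the authors do, absorbs the per-flash differences in camera distance and atmospheric attenuation into the slope and intercept.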

  10. Parallel phase-shifting digital holography system using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Kakue, Takashi; Tahara, Tatsuki; Xia, Peng; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2012-11-01

    A technique of single-shot phase-shifting interferometry called parallel phase-shifting digital holography is described. This technique records the multiple holograms required for phase-shifting digital holography using a space-division multiplexing technique. The authors constructed a system based on parallel phase-shifting digital holography consisting of a Mach-Zehnder interferometer and a high-speed polarization imaging camera. High-speed motion pictures of the three-dimensional image and phase image of dynamically moving objects were achieved at rates of up to 180,000 and 262,500 frames per second, respectively, for 128 × 128-pixel images.
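
The phase-retrieval step behind such systems is the standard four-step phase-shifting formula; in the parallel (space-division) variant, the four phase-shifted intensities come from neighboring pixels of a single frame rather than four sequential frames. A sketch on synthetic ideal data (an assumption; the paper's actual demultiplexing and reconstruction pipeline is more involved):

```python
import numpy as np

# Synthetic object phase to recover, with ideal four-step recording
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 64).reshape(8, 8)
a, b = 1.0, 0.5                           # background and modulation
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [a + b * np.cos(phi + d) for d in shifts]

# Four-step phase-shifting formula: phase from the two quadrature differences
phi_rec = np.arctan2(I[3] - I[1], I[0] - I[2])
```

With I_d = a + b·cos(φ + d), the differences give 2b·sin φ and 2b·cos φ, so the arctangent recovers φ independently of a and b.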

  11. High-Speed Video, Mapping and Broadband Electric Field Recordings of Lightning

    NASA Astrophysics Data System (ADS)

    Edens, H. E.; Eack, K.; Krehbiel, P. R.; Rison, W.; Thomas, R. J.; Winn, W. P.; Hunyady, S. J.

    2008-12-01

    During the summer of 2008, New Mexico Tech's Lightning Mapping Array (LMA) was reconfigured from a compact network to a more widely spaced one around the Langmuir Laboratory for Atmospheric Research. This change increases its three-dimensional location accuracy farther away from the center of the array and reduces false correlations caused by local corona and TV-broadcast interference. The LMA operated in a high time-resolution mode which located RF sources from lightning as often as once every 10 microseconds. We also operated a Vision Research Phantom v7.3 high-speed video camera to make recordings of lightning leaders around the mountaintop laboratory, typically at a rate of 6400 frames per second. These recordings are time-tagged by GPS, allowing us to accurately compare individual frames with LMA data. In addition to lightning mapping data and video recordings, we acquired broadband electric field waveforms from lightning, up to 100 MHz in bandwidth. Together these instruments are a comprehensive tool for studying lightning leader processes in great detail. In particular, we focus on continuing our studies of the leader behavior of bolt-from-the-blue lightning, where the leader exhibits a transition from impulsive to more continuous RF radiation as it exits the cloud and propagates to ground.

  12. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  13. Monitoring the rotation status of wind turbine blades using high-speed camera system

    NASA Astrophysics Data System (ADS)

    Zhang, Dongsheng; Chen, Jubing; Wang, Qiang; Li, Kai

    2013-06-01

    The measurement of rotating objects is of great significance in engineering applications. In this study, a high-speed dual-camera system based on 3D digital image correlation was developed to monitor the rotation status of wind turbine blades. The system acquires sequential images at a rate of 500 frames per second (fps). An improved Newton-Raphson algorithm is proposed which enables detection of movement, including large rotation and translation, with subpixel precision. Simulation experiments showed that this algorithm robustly identifies the movement if the rotation angle between adjacent images is less than 16 degrees. The subpixel precision is equivalent to that of the normal NR algorithm, i.e., 0.01 pixels in displacement. As a laboratory study, the high-speed camera system was used to measure the movement of a wind turbine model driven by an electric fan. In the experiment, the image acquisition rate was set at 387 fps and the cameras were calibrated according to Zhang's method. The blade was coated with randomly distributed speckles, and 7 locations along the radial direction of the blade were selected. The displacement components of these 7 locations were measured with the proposed method. We conclude that the proposed DIC algorithm is suitable for large rotation detection, and that the high-speed dual-camera system is a promising, economical method for health diagnosis of wind turbine blades.
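
The paper's improved Newton-Raphson DIC is not reproduced here, but the underlying idea of matching a speckle subset and refining to sub-pixel precision can be illustrated in 1-D with zero-normalized cross-correlation and a three-point parabola fit (a simplified stand-in, not the authors' algorithm; all names below are ours):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

def match_1d(ref, img, x0, half=8, search=4):
    """Integer + parabolic sub-pixel shift of a 1-D speckle profile."""
    tpl = ref[x0 - half:x0 + half]
    scores = [zncc(tpl, img[x0 - half + s:x0 + half + s])
              for s in range(-search, search + 1)]
    k = int(np.argmax(scores))
    # three-point parabola fit around the peak for sub-pixel refinement
    if 0 < k < len(scores) - 1:
        y0, y1, y2 = scores[k - 1], scores[k], scores[k + 1]
        k += 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k - search

rng = np.random.default_rng(0)
sig = rng.random(200)              # synthetic speckle profile
shifted = np.roll(sig, 3)          # known shift of 3 pixels
d = match_1d(sig, shifted, x0=100)
```

Full DIC extends this to 2-D subsets and iteratively solves for rotation and deformation parameters as well as translation, which is where the Newton-Raphson scheme comes in.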

  14. High speed video analysis study of elastic and inelastic collisions

    NASA Astrophysics Data System (ADS)

    Baker, Andrew; Beckey, Jacob; Aravind, Vasudeva; Clarion Team

    We study inelastic and elastic collisions with high-frame-rate video capture to examine the process of deformation and other energy transformations during collision. Snapshots are acquired before and after the collision, and the dynamics are analyzed using Tracker software. By observing the rapid changes (over a few milliseconds) and slower changes (over a few seconds) in momentum and kinetic energy during the collision, we study the loss of momentum and kinetic energy over time. Using these data, it may be possible to design experiments that reduce the error involved, helping students build better and more robust models with which to understand the physical world. We thank the Clarion University undergraduate student grant program for financial support of this project.
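
The momentum and kinetic-energy bookkeeping described above reduces to a few lines once Tracker has produced velocities. A sketch with hypothetical masses and velocities (values are illustrative, not the study's data):

```python
# Hypothetical 1-D collision data exported from video tracking
m1, m2 = 0.25, 0.25        # cart masses, kg
v1i, v2i = 1.20, 0.00      # velocities before impact, m/s
v1f, v2f = 0.15, 0.95      # velocities after impact (partially inelastic)

p_before = m1 * v1i + m2 * v2i
p_after = m1 * v1f + m2 * v2f
ke_before = 0.5 * m1 * v1i**2 + 0.5 * m2 * v2i**2
ke_after = 0.5 * m1 * v1f**2 + 0.5 * m2 * v2f**2
ke_loss_fraction = 1.0 - ke_after / ke_before   # energy lost to deformation etc.
```

Comparing p_before with p_after quantifies tracking and friction error, while ke_loss_fraction distinguishes elastic from inelastic cases.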

  15. Dynamic characteristics of laser-induced vapor bubble formation in water based on high speed camera

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-zeng; Guo, Wenqing; Zhan, Zhenlin; Xie, Shusen

    2013-08-01

    In clinical practice, laser ablation usually operates in a liquid environment such as water, blood, or their mixture. Laser-induced vapor channel or bubble formation and the consequent bubble dynamics are believed to have an important influence on tissue ablation. In this paper, the dynamic process of vapor bubble formation and subsequent collapse induced by a pulsed Ho:YAG laser in static water was investigated using a high-speed camera. The results showed that a vapor channel/bubble can be produced with a pulsed Ho:YAG laser, and that the whole dynamic process of vapor bubble formation, pulsation, and collapse can be monitored with the high-speed camera. The dynamic characteristics of the vapor bubble, such as the pulsation period and the maximum depth and width, were determined, along with the dependence of these parameters on the incident radiant exposure. On this basis, the influence of the vapor bubble on hard tissue ablation is discussed.
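
A first-order check sometimes applied to such measurements (our assumption, not a model used in the abstract) is the Rayleigh collapse time for a spherical bubble in an infinite liquid, which sets the expected scale of the pulsation period seen on camera:

```python
import math

# Rayleigh collapse time t = 0.915 * R_max * sqrt(rho / (p_inf - p_v))
# All numbers below are hypothetical illustrative values.
R_max = 1.0e-3          # maximum bubble radius, m
rho = 998.0             # water density, kg/m^3
p_inf = 101325.0        # ambient pressure, Pa
p_v = 2339.0            # vapor pressure of water near 20 C, Pa

t_collapse = 0.915 * R_max * math.sqrt(rho / (p_inf - p_v))
t_oscillation = 2.0 * t_collapse   # growth + collapse, one pulsation period
```

A millimeter-scale bubble thus collapses in roughly 0.1 ms, which is why frame rates of thousands of fps are needed to resolve the pulsation; elongated Ho:YAG vapor channels deviate from this spherical idealization.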

  16. Video cameras on wild birds.

    PubMed

    Rutz, Christian; Bluff, Lucas A; Weir, Alex A S; Kacelnik, Alex

    2007-11-02

    New Caledonian crows (Corvus moneduloides) are renowned for using tools for extractive foraging, but the ecological context of this unusual behavior is largely unknown. We developed miniaturized, animal-borne video cameras to record the undisturbed behavior and foraging ecology of wild, free-ranging crows. Our video recordings enabled an estimate of the species' natural foraging efficiency and revealed that tool use, and choice of tool materials, are more diverse than previously thought. Video tracking has potential for studying the behavior and ecology of many other bird species that are shy or live in inaccessible habitats.

  17. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High-Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delays generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
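
The precision gain from digital binning follows directly from noise averaging: combining N noise-dominated rows should shrink the standard deviation by about 1/sqrt(N). A toy illustration with synthetic data (the numbers are ours; in the real measurement the rows share one signal and the noise model is more complex):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stack: 300 single-shot velocity estimates per pixel row,
# 8 rows spanning the tagged line (m/s, noise dominated).
shots = 5.0 + rng.normal(0.0, 2.0, size=(300, 8))

precision_single = shots[:, 0].std()   # one row, no binning
binned = shots.mean(axis=1)            # "digital" row-wise 8x binning
precision_binned = binned.std()        # expect roughly 1/sqrt(8) of the above
```

This is why binning helped the un-intensified (low SNR) systems most: their single-row estimates were noise dominated, the regime where the 1/sqrt(N) scaling applies.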

  19. Study on the Striation of an Atmospheric Pressure Plasma Flare Using a High Speed Camera

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yutaka; Sakakita, Hajime; Yamada, Hiromasa; Itagaki, Hirotomo; Kiyama, Satoru; Fujiwara, Masanori; Ikehara, Yuzuru; Kim, Jaeho

    2015-09-01

    The characteristics of a low-energy atmospheric-pressure plasma (LEAPP) specially designed for a medical application have been studied by visualizing plasma emissions with a high-speed camera. The formation of striations in the LEAPP was observed between the nozzle exit and a target material. This result indicates that the plasma propagation is not of the bullet type. The detailed structure of the striation phenomena will be presented at the conference.

  20. A high speed camera with auto adjustable ROI for product's outline dimension measurement

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Wei, Ping; Ke, Jun; Gao, Jingjing

    2014-11-01

    Currently, most domestic factories still inspect machine arbors manually to decide whether they meet industry standards. This method is costly and inefficient, and it is prone to misjudging qualified arbors or missing unqualified ones, which seriously affects factories' efficiency and credibility. In this paper, we design a specific high-speed camera system with an auto-adjustable ROI for measuring an arbor's outline dimensions. The entire system includes an illumination part, a camera part, a mechanical structure part, and an FPGA-based signal processing part. The system will help factories realize automatic arbor measurement, improving their efficiency and reducing their cost.
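
The abstract does not specify how the ROI is adjusted; one simple scheme (an assumption, sketched in Python rather than the paper's FPGA logic) thresholds each frame and fits a padded bounding box around the arbor silhouette:

```python
import numpy as np

def auto_roi(frame, thresh=50, margin=4):
    """Padded bounding-box ROI around the bright silhouette in a frame.
    Assumes a dark background; margin keeps the part inside the ROI."""
    ys, xs = np.nonzero(frame > thresh)
    if ys.size == 0:
        return None                     # nothing detected, keep full frame
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin + 1, frame.shape[0])
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin + 1, frame.shape[1])
    return y0, y1, x0, x1

frame = np.zeros((120, 160), dtype=np.uint8)
frame[30:90, 70:80] = 200               # synthetic arbor silhouette
roi = auto_roi(frame)
```

Reading out only the ROI rows is what lets a high-speed sensor trade field of view for frame rate.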

  1. Dynamics at the Holuhraun eruption based on high speed video data analysis

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2016-04-01

    The 2014/2015 Holuhraun eruption was a gas-rich fissure eruption with high fountains. The magma was transported by a horizontal dyke over a distance of 45 km. On the first day the fountains occurred over a distance of 1.5 km and focused at isolated vents during the following days. Based on video analysis of the fountains we obtained a detailed view of the velocities of the eruption, the propagation path of magma, communication between vents, and complexities in the magma paths. We collected videos of the Holuhraun eruption with two high-speed cameras and one DSLR camera from 31st August, 2015 to 4th September, 2015, for several hours. The fountains at adjacent vents visually seemed to be related on all days. Hence, we calculated the fountain height as a function of time from the video data. All fountains show a pulsating regime, with apparent and sporadic alternations from meters to several tens of meters in height. With a time-dependent cross-correlation approach developed within the FUTUREVOLC project, we are able to compare the pulses in height at adjacent vents. We find that in most cases there is a time lag between the pulses. From the calculated time lags between the pulses and the distance between the correlated vents, we calculate the apparent speed of magma pulses. The frequencies of the fountains, and the eruption and rest times between fountains, are quite similar, suggesting a connection between the fountains and a controlling process in the feeder below. At the Holuhraun eruption 2014/2015 (Iceland) we find a significant time shift between the single pulses of adjacent vents on all days. The mean velocity over all days is 30-40 km/hr, which can be interpreted as the magma flow velocity along the dike at depth. Comparison of the velocities derived from the video data analysis with the magma flow velocity in the dike inferred from seismic data shows very good agreement, implying that surface expressions of pulsating vents provide an insight into the
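
The lag-then-speed computation described above can be sketched with a cross-correlation of two synthetic fountain-height series (signal shape, vent separation, and lag below are invented for illustration; the FUTUREVOLC approach is time-dependent rather than this single global correlation):

```python
import numpy as np

fs = 100.0                                   # height samples per second
t = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(2)
# Aperiodic pulsing height signal (smoothed noise, illustrative only)
h_a = np.convolve(rng.normal(size=t.size), np.ones(25) / 25, mode="same")
lag_true_s = 5.0
h_b = np.roll(h_a, int(lag_true_s * fs))     # vent B lags vent A by 5 s

# Cross-correlate the two series to recover the time lag
xc = np.correlate(h_b - h_b.mean(), h_a - h_a.mean(), mode="full")
lag_s = (np.argmax(xc) - (t.size - 1)) / fs

vent_distance_m = 50.0                       # hypothetical vent separation
speed_kmh = vent_distance_m / lag_s * 3.6    # cf. the 30-40 km/hr above
```

With these hypothetical numbers the recovered apparent speed is 36 km/hr, in the range the abstract attributes to magma flow along the dike.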

  2. Algorithm-based high-speed video analysis yields new insights into Strombolian eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Taddeucci, Jacopo; Moroni, Monica; Scarlato, Piergiorgio

    2014-05-01

    Strombolian eruptions are characterized by mild, frequent explosions that eject gas and ash- to bomb-sized pyroclasts into the atmosphere. Observation of the explosion products is crucial, both for direct hazard assessment and for understanding eruption dynamics. Conventional thermal and optical imaging allows a first characterization of several eruptive processes, but high-speed cameras, with frame rates of 500 Hz or more, allow particles to be followed over multiple frames so that their trajectories can be reconstructed. However, manual processing of the images is time consuming. Consequently, it permits neither routine monitoring nor averaged statistics, since only relatively few, selected particles (usually the fastest) can be taken into account. In addition, manual processing is quite inefficient for computing the total ejected mass, since it requires counting each individual particle. In this presentation, we discuss the advantages of using numerical methods for tracking the particles and describing the explosion. A toolbox called "Pyroclast Tracking Velocimetry" is used to compute the size and trajectory of each individual particle. A large variety of parameters can be derived and statistically compared: ejection velocity, ejection angle, deceleration, size, mass, etc. At the scale of the explosion, the total mass, the mean velocity of the particles, and the number and frequency of ejection pulses can be estimated. The study of high-speed videos of two vents at Yasur volcano (Vanuatu) and four at Stromboli volcano (Italy) reveals that these parameters are positively correlated. As a consequence, the intensity of an explosion can be quantitatively and operator-independently described by the total kinetic energy of the bombs, taking into account both the mass and the velocity of the particles. For each vent, a specific range of total kinetic energy can be defined, demonstrating the strong influence of the conduit in
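
The total-kinetic-energy intensity measure described above is a simple sum once particle sizes and speeds are tracked. A sketch with invented placeholder values (density, diameters, and speeds are assumptions, not data from the study):

```python
import numpy as np

# Hypothetical tracking output for one explosion
rho = 1000.0                                  # bomb density, kg/m^3 (assumed)
d = np.array([0.02, 0.05, 0.10, 0.03, 0.08])  # equivalent diameters, m
v = np.array([45.0, 30.0, 18.0, 50.0, 22.0])  # ejection speeds, m/s

m = rho * (np.pi / 6.0) * d**3                # mass from an equivalent sphere
E_total = np.sum(0.5 * m * v**2)              # explosion intensity proxy, J
```

Because mass grows with diameter cubed, a few large bombs can dominate E_total even when small fast particles dominate the velocity statistics, which is why combining mass and velocity gives a more robust intensity measure than either alone.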

  3. Development of a high-speed CT imaging system using EMCCD camera

    NASA Astrophysics Data System (ADS)

    Thacker, Samta C.; Yang, Kai; Packard, Nathan; Gaysinskiy, Valeriy; Burkett, George; Miller, Stuart; Boone, John M.; Nagarkar, Vivek

    2009-02-01

    The limitations of current CCD-based microCT X-ray imaging systems arise from two important factors. First, readout speeds are curtailed in order to minimize system read noise, which increases significantly with increasing readout rates. Second, the afterglow associated with commercial scintillator films can introduce image lag, leading to substantial artifacts in reconstructed images, especially when the detector is operated at several hundred frames/second (fps). For high speed imaging systems, high-speed readout electronics and fast scintillator films are required. This paper presents an approach to developing a high-speed CT detector based on a novel, back-thinned electron-multiplying CCD (EMCCD) coupled to various bright, high resolution, low afterglow films. The EMCCD camera, when operated in its binned mode, is capable of acquiring data at up to 300 fps with reduced imaging area. CsI:Tl,Eu and ZnSe:Te films, recently fabricated at RMD, apart from being bright, showed very good afterglow properties, favorable for high-speed imaging. Since ZnSe:Te films were brighter than CsI:Tl,Eu films, for preliminary experiments a ZnSe:Te film was coupled to an EMCCD camera at UC Davis Medical Center. A high-throughput tungsten anode X-ray generator was used, as the X-ray fluence from a mini- or micro-focus source would be insufficient to achieve high-speed imaging. A euthanized mouse held in a glass tube was rotated 360 degrees in less than 3 seconds, while radiographic images were recorded at various readout rates (up to 300 fps); images were reconstructed using a conventional Feldkamp cone-beam reconstruction algorithm. We have found that this system allows volumetric CT imaging of small animals in approximately two seconds at ~110 to 190 μm resolution, compared to several minutes at 160 μm resolution needed for the best current systems.

  4. Development of a high-speed H-alpha camera system for the observation of rapid fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.; Chen, P. C.

    1988-01-01

    A solid-state digital camera was developed for obtaining H-alpha images of solar flares with 0.1 s time resolution. Beginning in the summer of 1988, this system will be operated in conjunction with SMM's hard X-ray burst spectrometer (HXRBS). Important electron time-of-flight effects that are crucial for determining the flare energy release processes should be detectable with these combined H-alpha and hard X-ray observations. Charge-injection device (CID) cameras provide 128 x 128 pixel images simultaneously in the H-alpha blue wing, line center, and red wing, or other wavelengths of interest. The data recording system employs a microprocessor-controlled electronic interface between each camera and a digital processor board that encodes the data into a serial bitstream for continuous recording by a standard video cassette recorder. Only a small fraction of the data will be permanently archived, through utilization of a direct-memory-access interface onto a VAX-750 computer. In addition to correlations with hard X-ray data, observations from the high-speed H-alpha camera will also be correlated with optical and microwave data and with data from future MAX 1991 campaigns. Whether the recorded optical flashes are simultaneous with X-ray peaks to within 0.1 s, are delayed by tenths of seconds, or are even undetectable, the results will have implications for the validity of both thermal and nonthermal models of hard X-ray production.

  5. The development of a high-speed 100 fps CCD camera

    SciTech Connect

    Hoffberg, M.; Laird, R.; Lenkzsus, F.; Liu, Chuande; Rodricks, B.; Gelbart, A.

    1996-09-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512x512-pixel CCD as its sensor, which is read out through two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized to 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data-acquisition version of the camera can collect sustained real-time data limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to PC disk storage. The uncooled CCD can be used either with lenses for visible-light imaging or with a phosphor screen for x-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic faceplate for high-resolution, high-speed x-ray imaging. The camera is controlled through a custom event-driven, user-friendly Windows package. The pixel clock speed can be changed from 1 MHz to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper will describe the electronics, software, and characterizations that have been performed using both visible and x-ray photons.
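
The quoted 60 MB/s throughput is consistent with the readout figures if each 12-bit sample is stored in a 16-bit word (our assumption; the abstract does not state the packing). A quick arithmetic check:

```python
# Throughput check for the dual-output readout described above
outputs = 2
pixel_rate_hz = 15e6            # per output
bytes_per_sample = 2            # 12-bit values in 16-bit words (assumption)

pixels_per_second = outputs * pixel_rate_hz                   # 30 Mpixel/s
throughput_mb = pixels_per_second * bytes_per_sample / 1e6    # MB/s

frame_pixels = 512 * 512
frame_rate = pixels_per_second / frame_pixels                 # frames/s
```

The raw pixel rate supports about 114 frames/s before overheads, consistent with the camera's stated 100 fps.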

  6. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    NASA Technical Reports Server (NTRS)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
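
    The frequency-domain part of the image-processing approach described above can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): track the mean pixel intensity of the target area frame by frame, then estimate the dominant cavitation oscillation frequency, here by simple mean-level crossing counting.

```python
import math

def dominant_frequency(intensities, frame_rate):
    """Estimate the dominant frequency (Hz) of a roughly periodic
    intensity trace by counting crossings of its mean level."""
    mean = sum(intensities) / len(intensities)
    centered = [v - mean for v in intensities]
    # Each full oscillation cycle produces two sign changes.
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration = (len(intensities) - 1) / frame_rate
    return crossings / (2.0 * duration)

# Synthetic trace: a 500 Hz cavitation-like oscillation of the mean
# target-area intensity, sampled at 15,000 frames per second.
fps = 15000
trace = [128 + 40 * math.sin(2 * math.pi * 500 * n / fps + 0.37)
         for n in range(1500)]
f_est = dominant_frequency(trace, fps)
```

    In practice the authors used spectral analysis and cross-checked against dynamic pressure transducers; crossing counting is just the simplest frequency estimator that fits in a few lines.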

  7. Bullet Retarding Forces in Ballistic Gelatin by Analysis of High Speed Video

    DTIC Science & Technology

    2012-12-28

    use of video for kinematic analysis. The position of the object of interest (bullet) is determined frame by frame in the coordinate system of...energy and forces can be determined by the elementary laws of physics. This paper describes the details that can be employed for kinematic ...estimated. High speed video kinematics is much simpler for bullets which do not fragment. If a is in ft/s/s and m is in slugs, the retarding force

  8. A high-speed, pressurised multi-wire gamma camera for dynamic imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Barr, A.; Bonaldi, L.; Carugno, G.; Charpak, G.; Iannuzzi, D.; Nicoletto, M.; Pepato, A.; Ventura, S.

    2002-01-01

    High count rate detectors are of particular interest in nuclear medicine as they permit lower radiation doses to be received by the patient and allow dynamic images of high statistical quality to be obtained. We have developed a high-speed gamma camera based on a multi-wire proportional chamber. The chamber is filled with a xenon gas mixture and has been operated at pressures ranging from 5 to 10 bar. With an active imaging area of 25 cm×25 cm, the chamber has been equipped with an advanced, high rate, digital, electronic read-out system which carries out pulse shaping, energy discrimination, XY coincidence and cluster selection at speeds of up to a few megahertz. In order to ensure stable, long-term operation of the camera without degradation in performance, a gas purification system was designed and integrated into the camera. Measurements have been carried out to determine the properties and applicability of the camera using photon sources in the 20-120 keV energy range. We present some design features of the camera and selected results obtained from preliminary measurements carried out to measure its performance characteristics. Initial images obtained from the camera will also be presented.

  9. Review of concepts and applications of image sampling on high-speed streak cameras

    NASA Astrophysics Data System (ADS)

    Shiraga, H.

    2017-02-01

    Image sampling is a simple, convenient and working scheme to obtain two-dimensional (2D) images on high-speed streak cameras, which have only a one-dimensional (1D) slit cathode as an imaging sensor on the streak tube. 1D sampling of a 2D image in one direction was realized as the Multi-Imaging X-ray Streak camera (MIXS), with a configuration similar to a TV raster scan. 2D sampling of a 2D image was realized as the 2-D Sampling Image X-ray Streak camera (2D-SIXS), with a configuration similar to CCD pixels. For optical-UV streak cameras, a 2D fiber plate coupled to the output of a streak camera was untied, and the fibers were rearranged to form a line on the cathode slit. In these schemes, clever arrangement of the sampling lines or points relative to the streaking direction was essential to avoid overlap of the streaked signals with each other. These streak cameras with the image sampling technique were successfully applied to laser plasma experiments, particularly laser-driven nuclear fusion research, with simultaneous temporal and spatial resolutions of 10 ps and 15 μm, respectively. This paper reviews the concept, history, and applications of the scheme.

  10. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can complete a displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
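
    NCC template tracking with a restricted local search can be sketched as follows (a pure-Python illustration under assumed data layout, not the authors' implementation): correlation is evaluated only within a small radius of the previous match position, which is where the speed-up over whole-frame matching comes from.

```python
def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    n = len(template) * len(template[0])
    mp = sum(map(sum, patch)) / n
    mt = sum(map(sum, template)) / n
    num = sp = st = 0.0
    for pr, tr in zip(patch, template):
        for p, t in zip(pr, tr):
            num += (p - mp) * (t - mt)
            sp += (p - mp) ** 2
            st += (t - mt) ** 2
    d = (sp * st) ** 0.5
    return num / d if d else 0.0

def track(frame, template, prev_xy, radius=3):
    """Local search: evaluate NCC only within `radius` pixels of the
    previous match instead of scanning the whole frame."""
    th, tw = len(template), len(template[0])
    fh, fw = len(frame), len(frame[0])
    px, py = prev_xy
    best, best_xy = -2.0, prev_xy
    for y in range(max(0, py - radius), min(fh - th, py + radius) + 1):
        for x in range(max(0, px - radius), min(fw - tw, px + radius) + 1):
            patch = [row[x:x + tw] for row in frame[y:y + th]]
            score = ncc(patch, template)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best

# Toy frame with a distinctive 3x3 pattern placed at (x=5, y=4);
# tracking starts from the previous match at (4, 4).
template = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
frame = [[0] * 12 for _ in range(12)]
for j in range(3):
    for i in range(3):
        frame[4 + j][5 + i] = template[j][i]

xy, score = track(frame, template, prev_xy=(4, 4))
```

    The subpixel refinement used in the paper (e.g., fitting the correlation peak) would sit on top of this integer-pixel search.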

  11. High-speed video or video stroboscopy in adolescents: which sheds more light?

    PubMed

    Shinghal, Tulika; Low, Aaron; Russell, Laurie; Propst, Evan J; Eskander, Antoine; Campisi, Paolo

    2014-12-01

    The primary objective of this study was to compare the utility of high-speed video (HSV) to videostroboscopy (VS) in the assessment of adolescents with normal and abnormal larynges. A secondary objective was to evaluate the ease of assessment of adolescents with HSV. Case series with chart review. Tertiary academic health care center. This study involved a retrospective review of recordings of 7 adolescents assessed with both HSV and VS. The 14 recordings were randomized and presented to 4 groups of blinded evaluators: 2 fellowship-trained laryngologists, 2 speech language pathologists (SLP) with multiyear experience working in a voice clinic, 2 pediatric otolaryngologists, and 2 otolaryngology residents. Raters were asked to evaluate the videos using a standardized scoring tool. Raters also completed a questionnaire assessing their opinion of the HSV and VS recordings. Evaluators required more time to complete their assessment of VS recordings (2.95 min ± 2.41 min) than HSV recordings (2.31 min ± 1.92 min) (P = .004). There was no difference in ease of evaluation (P = .878) or diagnostic accuracy within evaluator groups by recording modality (P = .5). The overall agreement between VS and HSV was moderate (kappa [SE] = 0.446 [0.029]). The debrief questionnaire revealed that 5 of 8 (62.5%) preferred VS to HSV. This is the first comparative study between HSV and VS in patients under 18 years of age. HSV permitted faster evaluation than VS, but there was no difference in diagnostic accuracy between the 2 modalities. The evaluators preferred VS to HSV. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.

  12. In-Situ Observation of Horizontal Centrifugal Casting using a High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Esaka, Hisao; Kawai, Kohsuke; Kaneko, Hiroshi; Shinozuka, Kei

    2012-07-01

    In order to understand the solidification process of horizontal centrifugal casting, experimental equipment for in-situ observation using a transparent organic substance has been constructed. A succinonitrile-1 mass% water alloy was filled into a round glass cell, which was then completely sealed. To observe the movement of equiaxed grains more clearly and to understand the effect of the movement of the free surface, a high-speed camera has been installed on the equipment. The most advantageous feature of this equipment is that the camera rotates with the mold, so that one can observe the same location of the glass cell throughout. Because the recording rate could be increased up to 250 frames per second, the quality of the movies was dramatically improved, making it easier and more precise to track a given equiaxed grain. The amplitude of oscillation of the equiaxed grains ( = At) decreased as solidification proceeded.

  13. Low cost alternative of high speed visible light camera for tokamak experiments

    SciTech Connect

    Odstrcil, T.; Grover, O.; Svoboda, V.; Odstrcil, M.; Duran, I.; Mlynar, J.

    2012-10-15

    We present design, analysis, and performance evaluation of a new, low cost and high speed visible-light camera diagnostic system for tokamak experiments. The system is based on the camera Casio EX-F1, with the overall price of approximately a thousand USD. The achieved temporal resolution is up to 40 kHz. This new diagnostic was successfully implemented and tested at the university tokamak GOLEM (R = 0.4 m, a = 0.085 m, B{sub T} < 0.5 T, I{sub p} < 4 kA). One possible application of this new diagnostic at GOLEM is discussed in detail. This application is tomographic reconstruction for estimation of plasma position and emissivity.

  14. A novel compact high speed x-ray streak camera (invited).

    PubMed

    Hares, J D; Dymoke-Bradshaw, A K L

    2008-10-01

    Conventional in-line high speed streak cameras have fundamental issues when their performance is extended below a picosecond. The transit time spread caused by both the spread in the photoelectron (PE) "birth" energy and space charge effects causes significant electron pulse broadening along the axis of the streak camera and limits the time resolution. Also it is difficult to generate a sufficiently large sweep speed. This paper describes a new instrument in which the extraction electrostatic field at the photocathode increases with time, converting time to PE energy. A uniform magnetic field is used to measure the PE energy, and thus time, and also focuses in one dimension. Design calculations are presented for the factors limiting the time resolution. With our design, subpicosecond resolution with high dynamic range is expected.

  15. Low cost alternative of high speed visible light camera for tokamak experiments.

    PubMed

    Odstrcil, T; Odstrcil, M; Grover, O; Svoboda, V; Duran, I; Mlynár, J

    2012-10-01

    We present design, analysis, and performance evaluation of a new, low cost and high speed visible-light camera diagnostic system for tokamak experiments. The system is based on the camera Casio EX-F1, with the overall price of approximately a thousand USD. The achieved temporal resolution is up to 40 kHz. This new diagnostic was successfully implemented and tested at the university tokamak GOLEM (R = 0.4 m, a = 0.085 m, B(T) < 0.5 T, I(p) < 4 kA). One possible application of this new diagnostic at GOLEM is discussed in detail. This application is tomographic reconstruction for estimation of plasma position and emissivity.

  16. Estimation of Rotational Velocity of Baseball Using High-Speed Camera Movies

    NASA Astrophysics Data System (ADS)

    Inoue, Takuya; Uematsu, Yuko; Saito, Hideo

    Movies can be used to analyze a player's performance and improve his/her skills. In the case of baseball, the pitching is recorded by using a high-speed camera, and the recorded images are used to improve the pitching skills of the players. In this paper, we present a method for estimating the rotational velocity of a baseball on the basis of movies recorded by high-speed cameras. Unlike previous methods, we consider the original seam pattern of the ball seen in the input movie and identify the corresponding image from a database of images by adopting the parametric eigenspace method. These database images are CG images. The ball's posture can be determined on the basis of the rotational parameters. In the proposed method, the symmetric property of the ball is also taken into consideration, and time continuity is used to determine the ball's posture. In the experiments, we use the proposed method to estimate the rotational velocity of a baseball on the basis of real movies and movies consisting of CG images of the baseball. The results of both experiments show that our method can be used to estimate the ball's rotation accurately.

  17. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
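
    The quoted "slightly more than 136 years" before the time code repeats is exactly the rollover period of a 32-bit seconds counter (an assumption about the code's internals, but the arithmetic matches):

```python
# Rollover period of a 32-bit seconds counter, in Julian years.
seconds = 2 ** 32
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))  # → 136.1
```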

  18. High-speed video-based tracking of optically trapped colloids

    NASA Astrophysics Data System (ADS)

    Otto, O.; Gornall, J. L.; Stober, G.; Czerwinski, F.; Seidel, R.; Keyser, U. F.

    2011-04-01

    We have developed an optical tweezer setup, with high-speed and real-time position tracking, based on CMOS camera technology. Our software-encoded algorithm is cross-correlation based and implemented on a standard computer. By measuring the fluctuations of a confined colloid at 6000 frames per second, continuously for an hour, we show that our technique is a viable alternative to quadrant photodiodes. The optical trap is calibrated by using power spectrum analysis and the Stokes method. The trap stiffness is independent of the camera frame rate and scales linearly with the applied laser power. The analysis of our data by Allan variance demonstrates single-nanometer accuracy in position detection.
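
    The Allan-variance analysis mentioned above can be sketched in a few lines (an illustrative implementation, not the authors' code). For uncorrelated position noise of variance σ², the Allan variance at bin size m should fall off as σ²/m, which the synthetic check below reproduces:

```python
import random

def allan_variance(x, m):
    """Allan variance of a position trace for bins of m samples:
    half the mean squared difference of consecutive bin averages."""
    n = len(x) // m
    means = [sum(x[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return 0.5 * sum(diffs) / len(diffs)

# White-noise check: unit-variance Gaussian positions, bin size 10,
# so the Allan variance should come out near 1/10.
rng = random.Random(0)
trace = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
av = allan_variance(trace, 10)
```

    On real tracking data, plotting this quantity against bin size reveals the averaging time at which drift starts to dominate the position noise.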

  19. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed, details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  20. Review of ULTRANAC high-speed camera: applications, results, and techniques

    NASA Astrophysics Data System (ADS)

    Lawrence, Brett R.

    1997-05-01

    The ULTRANAC Ultra-High Speed Framing and Streak Camera System, from Imco Electro-Optics Limited of England, was first presented to the market at the 19th ICHSPP held in Cambridge, England, in 1990. It was the world's first fully computerized image converter camera and is capable of remote programming at framing speeds up to 20 million fps and streak speeds up to 1 ns/mm. The delay, exposure, interframe and output trigger times can be independently programmed within any one sequence. Increased spatial resolution is obtained by generating a series of static frames during the exposure period, as opposed to the previously utilized sine wave shuttering technique. The first ULTRANAC was supplied to Japan, through the parent company, NAC, in 1991. Since then, more than 40 cameras have been installed world-wide. The range of applications is wide and varied, covering impact studies, shock wave research, high voltage discharge, ballistics, detonics, laser and plasma effects, combustion and injection research, nuclear and particle studies, crack propagation and ink jet printer development, among many others. This paper presents the results obtained from such tests. It describes the methods of recording the images, both on film and electronically, and recent advances in cooled CCD image technology and associated software analysis programs.

  1. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
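
    The stereo-triangulation step can be illustrated with a minimal sketch (hypothetical geometry, not the authors' calibration): each laser point defines one ray from the camera and one from the projector, and its 3D position is recovered as the midpoint of the shortest segment between the two rays.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + s*d1 and
    p2 + t*d2 (closest-point formula for two lines in 3D)."""
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = tuple(p + s * u for p, u in zip(p1, d1))
    q2 = tuple(p + t * u for p, u in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Camera at the origin, laser projector at (0, 5, 0); both rays pass
# through the surface point (1, 2, 3), which should be recovered.
point = triangulate((0, 0, 0), (1, 2, 3), (0, 5, 0), (1, -3, 3))
```

    With noisy calibration the two rays are skew rather than intersecting, which is why the midpoint (rather than an exact intersection) is the standard estimate.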

  2. Three-Dimensional Optical Reconstruction of Vocal Fold Kinematics Using High-Speed Video With a Laser Projection System.

    PubMed

    Luegmair, Georg; Mehta, Daryush D; Kobler, James B; Döllinger, Michael

    2015-12-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry.

  3. High-speed camera based on a CMOS active pixel sensor

    NASA Astrophysics Data System (ADS)

    Bloss, Hans S.; Ernst, Juergen D.; Firla, Heidrun; Schmoelz, Sybille C.; Gick, Stephan K.; Lauxtermann, Stefan C.

    2000-02-01

    Standard CMOS technologies offer great flexibility in the design of image sensors, which is a big advantage especially for high-frame-rate systems. For this application we have integrated an active pixel sensor with 256 x 256 pixels using a standard 0.5 micrometer CMOS technology. With 16 analog outputs and a clock rate of 25-30 MHz per output, a continuous frame rate of more than 50000 Hz is achieved. A global synchronous shutter is provided, but it required a more complex pixel circuit of five transistors and a special pixel layout to obtain a good optical fill factor. The active area of the photodiode is 9 x 9 micrometers. These square diodes are arranged in a chess pattern, while the remaining space is used for the electronic circuit. Fill factor is nearly 50 percent. The sensor is embedded in a high-speed camera system with 16 ADCs, 256 MB of dynamic RAM, FPGAs for high-speed real-time image processing, and a PC for the user interface, data archiving and network operation. Fixed pattern noise, which is always a problem with CMOS sensors, and the mismatching of the 16 analog channels are removed by a pixelwise gain-offset correction. After this, the chess pattern requires a reconstruction of all the 'missing' pixels, which can be done by a special edge-sensitive algorithm. Thus a high-quality 512 x 256 image with low remaining noise can be displayed. The sensor, architecture and processing are also suitable for color imaging.
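
    The pixelwise gain-offset correction mentioned above is the standard two-frame (dark frame / flat field) calibration; a minimal sketch with hypothetical toy data, not the camera's actual firmware:

```python
def build_maps(dark, flat):
    """Per-pixel offset and gain maps from a dark frame and a flat-field
    frame, normalized so a corrected flat field comes out uniform."""
    h, w = len(dark), len(dark[0])
    mean_signal = sum(f - d for fr, dr in zip(flat, dark)
                      for f, d in zip(fr, dr)) / (h * w)
    gain = [[mean_signal / (f - d) for f, d in zip(fr, dr)]
            for fr, dr in zip(flat, dark)]
    return dark, gain

def correct(raw, offset, gain):
    """Apply corrected = (raw - offset) * gain, pixel by pixel."""
    return [[(r - o) * g for r, o, g in zip(rr, orow, grow)]
            for rr, orow, grow in zip(raw, offset, gain)]

# Toy 2x2 sensor: per-pixel offsets and gains differ, but a frame lit
# at exactly half the flat-field level corrects to a uniform image.
dark = [[10, 12], [8, 11]]
flat = [[110, 132], [88, 121]]
offset, gain = build_maps(dark, flat)
raw = [[60, 72], [48, 66]]           # dark + 0.5 * (flat - dark)
out = correct(raw, offset, gain)
```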

  4. ULTRACAM: an ultrafast, triple-beam CCD camera for high-speed astrophysics

    NASA Astrophysics Data System (ADS)

    Dhillon, V. S.; Marsh, T. R.; Stevenson, M. J.; Atkinson, D. C.; Kerry, P.; Peacocke, P. T.; Vick, A. J. A.; Beard, S. M.; Ives, D. J.; Lunney, D. W.; McLay, S. A.; Tierney, C. J.; Kelly, J.; Littlefair, S. P.; Nicholson, R.; Pashley, R.; Harlaftis, E. T.; O'Brien, K.

    2007-07-01

    ULTRACAM is a portable, high-speed imaging photometer designed to study faint astronomical objects at high temporal resolutions. ULTRACAM employs two dichroic beamsplitters and three frame-transfer CCD cameras to provide three-colour optical imaging at frame rates of up to 500 Hz. The instrument has been mounted on both the 4.2-m William Herschel Telescope on La Palma and the 8.2-m Very Large Telescope in Chile, and has been used to study white dwarfs, brown dwarfs, pulsars, black hole/neutron star X-ray binaries, gamma-ray bursts, cataclysmic variables, eclipsing binary stars, extrasolar planets, flare stars, ultracompact binaries, active galactic nuclei, asteroseismology and occultations by Solar System objects (Titan, Pluto and Kuiper Belt objects). In this paper we describe the scientific motivation behind ULTRACAM, present an outline of its design and report on its measured performance.

  5. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Jacobson, B. O.; Hamrock, B. J.

    1982-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation.

  6. Study of cavitation bubble dynamics during Ho:YAG laser lithotripsy by high-speed camera

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yu, Honggang; Devincentis, Dennis

    2016-02-01

    Although laser lithotripsy is now the preferred treatment option for urolithiasis, the mechanism of laser pulse induced calculus damage is still not fully understood. This is because the process of laser pulse induced calculus damage involves quite a few physical and chemical processes whose time-scales are very short (down to the sub-microsecond level). For laser lithotripsy, the laser pulse induced impact by energy flow can be summarized as: photon energy in the laser pulse --> photon absorption generated heat in the water liquid and vapor (superheated water or plasma effect) --> shock wave (bow shock, acoustic wave) --> cavitation bubble dynamics (oscillation and center-of-bubble movement, superheated water at collapse, sonoluminescence) --> calculus damage and motion (calculus heat-up, spallation/melt of stone, breaking of mechanical/chemical bonds, debris ejection, and retropulsion of the remaining calculus body). Cavitation bubble dynamics is the centerpiece of the physical processes that links the whole energy flow chain from laser pulse to calculus damage. In this study, cavitation bubble dynamics was investigated by a high-speed camera and a needle hydrophone. A commercialized, pulsed Ho:YAG laser at 2.1 μm, StoneLight™ 30, with pulse energy from 0.5 J up to 3.0 J and pulse width from 150 μs up to 800 μs, was used as the laser pulse source. The fiber used in the investigation is a SureFlex™ fiber, Model S-LLF365, with a 365 μm core diameter. A high-speed camera with a frame rate of up to 1 million fps was used in this study. The results revealed the cavitation bubble dynamics (oscillation and center-of-bubble movement) induced by laser pulses at different energy levels and pulse widths. More detailed investigation of bubble dynamics with different types of laser, and of the relationship between cavitation bubble dynamics and calculus damage (fragmentation/dusting), will be conducted in a future study.

  7. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.

  8. Measurement of intracellular ice formation kinetics by high-speed video cryomicroscopy.

    PubMed

    Karlsson, Jens O M

    2015-01-01

    Quantitative information about the kinetics and cumulative probability of intracellular ice formation is necessary to develop minimally damaging freezing procedures for the cryopreservation of cells and tissue. Conventional cryomicroscopic assays, which rely on indirect evidence of intracellular freezing (e.g., opacity changes in the cell cytoplasm), can yield significant errors in the estimated kinetics. In contrast, the formation and growth of intracellular ice crystals can be accurately detected using temporally resolved imaging methods (i.e., video recording at sub-millisecond resolution). Here, detailed methods for the setup and operation of a high-speed video cryomicroscope system are described, including protocols for imaging of intracellular ice crystallization events, and stochastic analysis of the ice formation kinetics in a cell population. Recommendations are provided for temperature profile design, sample preparation, and configuration of the video acquisition parameters. Throughout this chapter, the protocols incorporate best practices that have been drawn from over a decade of experience with high-speed video cryomicroscopy in our laboratory.
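
    The stochastic analysis of ice-formation kinetics typically reduces to an empirical cumulative probability of intracellular ice formation over the cell population; a minimal sketch with illustrative values (not data from the chapter):

```python
def cumulative_iif(freeze_temps, query_temps):
    """Fraction of cells that have frozen intracellularly by the time
    the sample has cooled to each query temperature; each observed
    event is recorded as the temperature at which that cell froze."""
    n = len(freeze_temps)
    return [sum(1 for tf in freeze_temps if tf >= T) / n
            for T in query_temps]

# Four cells observed to freeze at -5, -7, -7 and -10 degrees C during
# a constant-rate cooling ramp; evaluate the cumulative curve.
probs = cumulative_iif([-5, -7, -7, -10], [-4, -6, -8, -12])
```

    With the sub-millisecond video timing described above, the same tabulation can be done against time rather than temperature, which is what allows nucleation rates to be estimated.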

  9. Video indirect ophthalmoscopy using a hand-held video camera.

    PubMed

    Shanmugam, Mahesh P

    2011-01-01

    Fundus photography in adults and cooperative children is possible with a fundus camera or by using a slit lamp-mounted digital camera. Retcam TM or a video indirect ophthalmoscope is necessary for fundus imaging in infants and young children under anesthesia. Herein, a technique of converting and using a digital video camera into a video indirect ophthalmoscope for fundus imaging is described. This device will allow anyone with a hand-held video camera to obtain fundus images. Limitations of this technique involve a learning curve and inability to perform scleral depression.

  10. Wide dynamic range video camera

    NASA Astrophysics Data System (ADS)

    Craig, G. D.

    1985-10-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  11. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  12. Temperature measurement of mineral melt by means of a high-speed camera.

    PubMed

    Bizjan, Benjamin; Širok, Brane; Drnovšek, Janko; Pušnik, Igor

    2015-09-10

    This paper presents a temperature evaluation method by means of high-speed, visible light digital camera visualization and its application to the mineral wool production process. The proposed method adequately resolves the temperature-related requirements in mineral wool production and significantly improves the spatial and temporal resolution of measured temperature fields. Additionally, it is very cost effective in comparison with other non-contact temperature field measurement methods, such as infrared thermometry. Using the proposed method for temperatures between 800°C and 1500°C, the available temperature measurement range is approximately 300 K with a single temperature calibration point and without the need for camera setting adjustments. In the case of a stationary blackbody, the proposed method is able to produce deviations of less than 5 K from the reference (thermocouple-measured) temperature in a measurement range within 100 K from the calibration temperature. The method was also tested by visualization of rotating melt film in the rock wool production process. The resulting temperature fields are characterized by a very good temporal and spatial resolution (18,700 frames per second at 128 × 328 pixels and 8000 frames per second at 416 × 298 pixels).
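    The single-point calibration described above can be sketched by inverting the Wien approximation of Planck's law: camera signal is modeled as proportional to spectral radiance at one effective wavelength, and one calibration pair fixes the proportionality. The effective wavelength and all names below are illustrative assumptions, not the authors' implementation.

```python
import math

# Second radiation constant c2 = h*c/k_B, in metre-kelvins
C2 = 1.4388e-2

def temperature_from_signal(signal, signal_cal, t_cal, wavelength=650e-9):
    """Invert the Wien approximation S = C * exp(-C2 / (wavelength * T))
    using a single calibration pair (signal_cal at temperature t_cal).
    The 650 nm effective wavelength is an assumed value."""
    return C2 / wavelength / (C2 / (wavelength * t_cal)
                              - math.log(signal / signal_cal))
```

By construction the function returns `t_cal` when the signal equals the calibration signal, and a brighter pixel maps to a higher temperature.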

  13. Flow-pattern analysis of artificial heart valves using high-speed camera and PIV technique

    NASA Astrophysics Data System (ADS)

    Lee, Dong Hyuk; Seo, Soo W.; Min, Byong Goo

    1995-05-01

    The artificial heart valve is one of the most important artificial organs and has been implanted in many patients. The most serious problems related to artificial heart valve prostheses are thrombosis and hemolysis. In vivo experiments to test for these problems are complex and laborious, so demand for in vitro artificial heart valve testing systems is increasing. Several studies have shown that the flow pattern of an artificial heart valve is highly correlated with thrombosis and hemolysis. LDA is the usual method for obtaining flow patterns, but it is difficult to operate, expensive, and limited to a narrow measurement region. PIV (Particle Image Velocimetry) can overcome these problems. Because the flow speed through the valve is too high to capture particles with a standard CCD camera, a high-speed camera (Hyspeed; Holland-Photonics) was used. The estimated maximum flow speed was 5 m/sec and the maximum trackable length 0.5 cm, so the frame rate was set to 1000 frames per second. Several image processing techniques (blurring, segmentation, morphology, etc.) were used for preprocessing. A particle tracking algorithm and a 2D interpolation technique, necessary for producing a gridded velocity profile, were applied in this PIV program. Using a single-pulse, multi-frame particle tracking algorithm solves two problems of PIV: eliminating particles that penetrate the light-sheet plane and determining the direction of particle paths. A 1D relaxation formula was modified to interpolate the 2D field. A parachute artificial heart valve developed by Seoul National University and a Bjork-Shiley valve were tested. For each valve, a different flow pattern, velocity profile, wall shear stress, turbulence intensity profile, and mean velocity were obtained. These parameters were compared with the results of in vivo experiments. From this experiment we conclude that the wall shear stress is not high enough to generate hemolysis, while higher turbulence intensity produces more hemolysis. For further
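    The particle-tracking step of a PIV program like the one described can be sketched as nearest-neighbour matching of particle centroids between consecutive frames; the function names and the displacement gate below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def track_particles(pos_a, pos_b, dt, max_disp):
    """Match each particle centroid in frame A to its nearest neighbour in
    frame B and return (position, velocity) pairs. Matches whose
    displacement exceeds max_disp are rejected, which discards particles
    that left the light-sheet plane between frames (an assumed gate)."""
    vectors = []
    for p in pos_a:
        d = np.linalg.norm(pos_b - p, axis=1)   # distances to all B centroids
        j = int(np.argmin(d))                   # nearest candidate
        if d[j] <= max_disp:
            vectors.append((p, (pos_b[j] - p) / dt))
    return vectors
```

At the 1000 frames-per-second rate quoted above, `dt` would be 0.001 s, so a 1 mm inter-frame displacement corresponds to a 1 m/s velocity vector.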

  14. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
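    The core of a two-camera reconstruction like the one described can be illustrated with standard linear (DLT) triangulation, assuming each synchronized camera is calibrated by a 3×4 projection matrix. This is a generic textbook sketch, not the authors' modified algorithm, which additionally models high-velocity and tie-point errors.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover one 3-D point from its pixel coordinates x1, x2 in two views
    with projection matrices P1, P2 (3x4). Each view contributes two linear
    constraints; the homogeneous solution is the smallest right singular
    vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]      # de-homogenize
```

Applying this frame by frame to matched bomb positions in the two synchronized videos yields a 3D trajectory rather than a projection onto a single image plane.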

  15. Multi-Camera Reconstruction of Fine Scale High Speed Auroral Dynamics

    NASA Astrophysics Data System (ADS)

    Hirsch, M.; Semeter, J. L.; Zettergren, M. D.; Dahlgren, H.; Goenka, C.; Akbari, H.

    2014-12-01

    The fine spatial structure of dispersive aurora is known to have ground-observable scales of less than 100 meters. The lifetime of prompt emissions is much less than 1 millisecond, and high-speed cameras have observed auroral forms with millisecond scale morphology. Satellite observations have corroborated these spatial and temporal findings. Satellite observation platforms give a very valuable yet passing glance at the auroral region and the precipitation driving the aurora. To gain further insight into the fine structure of accelerated particles driven into the ionosphere, ground-based optical instruments staring at the same region of sky can capture the evolution of processes evolving on time scales from milliseconds to many hours, with continuous sample rates of 100 Hz or more. Legacy auroral tomography systems have used baselines of hundreds of kilometers, capturing a "side view" of the field-aligned auroral structure. We show that short baseline (less than 10 km), high speed optical observations fill a measurement gap between legacy long baseline optical observations and incoherent scatter radar. The ill-conditioned inverse problem typical of auroral tomography, accentuated by short baseline optical ground stations, is tackled with contemporary data inversion algorithms. We leverage the disruptive electron multiplying charge coupled device (EMCCD) imaging technology and solve the inverse problem via eigenfunctions obtained from a first-principles 1-D electron penetration ionospheric model. We present the latest analysis of observed auroral events from the Poker Flat Research Range near Fairbanks, Alaska. We discuss the system-level design and performance verification measures needed to ensure consistent performance for nightly multi-terabyte data acquisition synchronized between stations to better than 1 millisecond.
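    An ill-conditioned tomographic inversion of the kind mentioned above is commonly stabilized by regularization. The sketch below uses generic Tikhonov regularization as an illustration; the cited work instead projects onto eigenfunctions of a first-principles ionospheric model, and the matrix names here are assumptions.

```python
import numpy as np

def tikhonov_invert(L, b, lam):
    """Regularized solution of the ill-conditioned linear system L x = b:
    minimizes ||L x - b||^2 + lam^2 ||x||^2 by solving the normal equations
    (L^T L + lam^2 I) x = L^T b. Small singular directions of L, which
    would otherwise amplify noise, are damped by lam."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam**2 * np.eye(n), L.T @ b)
```

With `lam = 0` this reduces to an ordinary least-squares solve; increasing `lam` trades fidelity to the data for suppression of noise-amplifying modes.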

  16. The Eye, Film, And Video In High-Speed Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hyzer, William G.

    1987-09-01

    The unaided human eye with its inherent limitations serves us well in the examination of most large-scale, slow-moving, natural and man-made phenomena, but constraints imposed by inertial factors in the visual mechanism severely limit our ability to observe fast-moving and short-duration events. The introduction of high-speed photography (c. 1851) and videography (c. 1970) served to stretch the temporal limits of human perception by several orders of magnitude so critical analysis could be performed on a wide range of rapidly occurring events of scientific, technological, industrial, and educational interest. The preferential selection of eye, film, or video imagery in fulfilling particular motion analysis requirements is determined largely by the comparative attributes and limitations of these methods. The choice of either film or video does not necessarily eliminate the eye, because it usually continues as a vital link in the analytical chain. The important characteristics of the eye, film, and video imagery in high-speed motion analysis are discussed with particular reference to fields of application which include biomechanics, ballistics, machine design, mechanics of materials, sports analysis, medicine, production engineering, and industrial trouble-shooting.

  17. Use of High-Speed X ray and Video to Analyze Distal Radius Fracture Pathomechanics.

    PubMed

    Gutowski, Christina; Darvish, Kurosh; Liss, Frederic E; Ilyas, Asif M; Jones, Christopher M

    2015-10-01

    The purpose of this study is to investigate the failure sequence of the distal radius during a simulated fall onto an outstretched hand using cadaver forearms and high-speed X-ray and video systems. This apparatus records the onset and propagation of bony failure, ultimately resulting in distal radius or forearm fracture. The effects of 3 different wrist guard designs are investigated using this system. Serving as a proof-of-concept analysis, this study supports the use of this imaging technique in larger studies of orthopedic trauma and protective devices, and specifically of distal radius fractures.

  18. Performance analysis of a new positron camera geometry for high speed, fine particle tracking

    NASA Astrophysics Data System (ADS)

    Sovechles, J. M.; Boucher, D.; Pax, R.; Leadbeater, T.; Sasmito, A. P.; Waters, K. E.

    2017-09-01

    A new positron camera arrangement was assembled using 16 ECAT951 modular detector blocks. A closely packed, cross pattern arrangement was selected to produce a highly sensitive cylindrical region for tracking particles with low activities and high speeds. To determine the capabilities of this system a comprehensive analysis of the tracking performance was conducted to determine the 3D location error and location frequency as a function of tracer activity and speed. The 3D error was found to range from 0.54 mm for a stationary particle, consistent for all tracer activities, up to 4.33 mm for a tracer with an activity of 3 MBq and a speed of 4 m · s-1. For lower activity tracers (<10-2 MBq), the error was more sensitive to increases in speed, increasing to 28 mm (at 4 m · s-1), indicating that at these conditions a reliable trajectory is not possible. These results expanded on, but correlated well with, previous literature that only contained location errors for tracer speeds up to 1.5 m · s-1. The camera was also used to track directly activated mineral particles inside a two-inch hydrocyclone and a 142 mm diameter flotation cell. A detailed trajectory, inside the hydrocyclone, of a  -212  +  106 µm (10-1 MBq) quartz particle displayed the expected spiralling motion towards the apex. This was the first time a mineral particle of this size had been successfully traced within a hydrocyclone, however more work is required to develop detailed velocity fields.

  19. Synchronization of high speed framing camera and intense electron-beam accelerator

    SciTech Connect

    Cheng Xinbing; Liu Jinliang; Hong Zhiqiang; Qian Baoliang

    2012-06-15

    A new trigger program is proposed to realize the synchronization of a high speed framing camera (HSFC) and an intense electron-beam accelerator (IEBA). The trigger program, which includes acquisition of the light signal radiated from the main switch of the IEBA and a signal processing circuit, provides a trigger signal with a rise time of 17 ns and an amplitude of about 5 V. First, the light signal was collected by an avalanche photodiode (APD) module, and the delay time between the output voltage of the APD and the load voltage of the IEBA was measured to be about 35 ns. Subsequently, the output voltage of the APD was processed by the signal processing circuit to obtain the trigger signal. Finally, combining the trigger program with an IEBA, the trigger program operated stably, and a delay time of 30 ns between the trigger signal of the HSFC and the output voltage of the IEBA was obtained. Meanwhile, when surface flashover occurred at a high density polyethylene sample, the delay time between the trigger signal of the HSFC and the flashover current was up to 150 ns, which satisfied the need for synchronization of the HSFC and IEBA. The experimental results proved that the trigger program could compensate for the trigger-signal processing time and the inherent delay time of the HSFC (together called the compensated time).

  20. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Jacobson, B. O.

    1983-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation. Previously announced in STAR as N82-20543

  1. On the accuracy of framing-rate measurements in ultra-high speed rotating mirror cameras.

    PubMed

    Conneely, Michael; Rolfsnes, Hans O; Main, Charles; McGloin, David; Campbell, Paul A

    2011-08-15

    Rotating mirror systems based on the Miller Principle are a mainstay modality for ultra-high speed imaging within the range 1-25 million frames per second. Importantly, the true temporal accuracy of observations recorded in such cameras is sensitive to the framing rate that the system directly associates with each individual data acquisition. The purpose for the present investigation was to examine the validity of such system-reported frame rates in a widely used commercial system (a Cordin 550-62 model) by independently measuring the framing rate at the instant of triggering. Here, we found a small but significant difference between such measurements: the average discrepancy (over the entire spectrum of frame rates used) was found to be 0.66 ± 0.48%, with a maximum difference of 2.33%. The principal reason for this discrepancy was traced to non-optimized sampling of the mirror rotation rate within the system protocol. This paper thus serves three purposes: (i) we highlight a straightforward diagnostic approach to facilitate scrutiny of rotating-mirror system integrity; (ii) we raise awareness of the intrinsic errors associated with data previously acquired with this particular system and model; and (iii) we recommend that future control routines address the sampling issue by implementing real-time measurement at the instant of triggering.
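    The diagnostic quantity in the abstract is simply the relative difference between the system-reported and independently measured framing rates; the timing-error helper below is an illustrative consequence of such a discrepancy, not a figure from the paper.

```python
def framing_rate_discrepancy_pct(reported_fps, measured_fps):
    """Percent discrepancy between system-reported and independently
    measured framing rates, taking the measured rate as ground truth."""
    return 100.0 * abs(reported_fps - measured_fps) / measured_fps

def accumulated_timing_error(reported_fps, measured_fps, n_frames):
    """Timing error accumulated over n_frames when frame timestamps are
    assigned from the reported rather than the true framing rate
    (an assumed usage scenario for illustration)."""
    return n_frames * abs(1.0 / reported_fps - 1.0 / measured_fps)
```

A 2.33% discrepancy, the maximum reported above, therefore shifts every frame timestamp by about 2.33% of the inter-frame interval, compounding linearly over a recorded sequence.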

  2. Measurement of inkjet first-drop behavior using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time can differ from that in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was set to an integer multiple of the jetting frequency so that the droplet locations of each drop can be compared directly to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity.
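    The sampling idea in the abstract, choosing the camera frame rate as an integer multiple of the jetting frequency so that every drop is imaged at the same phase of its period, can be sketched as follows; both helper names and the position convention are illustrative assumptions.

```python
def acquisition_frame_rate(jetting_freq_hz, frames_per_period):
    """Camera frame rate chosen as an integer multiple of the jetting
    frequency, so the n-th frame of every drop falls at the same phase
    of the jetting period and positions are directly comparable."""
    return jetting_freq_hz * frames_per_period

def first_drop_offsets(drop_positions):
    """Position of each drop relative to the steady-state drop (taken here
    as the last drop in the sequence), sampled at the same phase; nonzero
    offsets for the first drops indicate first-drop dissimilarity."""
    ref = drop_positions[-1]
    return [p - ref for p in drop_positions]
```

For a 1 kHz jetting frequency and 10 frames per period this gives a 10 kHz acquisition rate, and the offsets of the first few drops quantify how far they deviate from steady-state behavior.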

  3. Quantitative visualization of oil-water mixture behind sudden expansion by high speed camera

    NASA Astrophysics Data System (ADS)

    Babakhani Dehkordi, Parham; P. M Colombo, Luigi; Guilizzoni, Manfredo; Sotgia, Giorgio; Cozzi, Fabio

    2017-08-01

    The present work describes the application of an image processing technique to study the two-phase flow of highly viscous oil and water through a sudden expansion. Six different operating conditions were considered, depending on the input volume fraction of the phases, all resulting in a flow pattern of oil dispersed in a continuous water flow. The objective is to use an optical diagnostic method, with a high speed camera, to give detailed information about the flow field and spatial distribution, such as instantaneous velocity and in situ phase fraction. Artificial tracer particles were not used because the oil drops can easily be distinguished from the continuous water phase and can thus act as natural tracers. The pipe has a total length of 11 meters and the sudden expansion is placed 6 meters from the inlet section, to ensure that the flow is fully developed when it reaches the singularity. The upstream and downstream pipes have 30 mm and 50 mm i.d., respectively. Velocity profiles, holdup, and drop size distribution after the sudden expansion were analyzed and compared with literature models and results.
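    Because the dispersed oil phase is optically distinct from the water, the in-situ phase fraction (holdup) mentioned above can be estimated from a segmented frame as the fraction of pixels classified as oil. The sketch below uses a single intensity threshold; real frames would need lighting correction and proper drop segmentation, and the threshold is an assumption.

```python
import numpy as np

def oil_holdup(frame, threshold):
    """In-situ oil phase fraction estimated as the fraction of pixels whose
    intensity exceeds a threshold marking the dispersed oil phase.
    frame is a 2-D intensity array; the comparison yields a boolean mask
    whose mean is the pixel-area fraction of oil."""
    return float(np.mean(frame > threshold))
```

Averaging this per-frame estimate over a video sequence gives a time-mean holdup that can be compared against literature correlations.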

  4. Measurement of inkjet first-drop behavior using a high-speed camera

    SciTech Connect

    Kwon, Kye-Si Kim, Hyung-Seok; Choi, Moohyun

    2016-03-15

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time can differ from that in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was set to an integer multiple of the jetting frequency so that the droplet locations of each drop can be compared directly to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity.

  5. Synchronization of high speed framing camera and intense electron-beam accelerator.

    PubMed

    Cheng, Xin-Bing; Liu, Jin-Liang; Hong, Zhi-Qiang; Qian, Bao-Liang

    2012-06-01

    A new trigger program is proposed to realize the synchronization of a high speed framing camera (HSFC) and an intense electron-beam accelerator (IEBA). The trigger program, which includes acquisition of the light signal radiated from the main switch of the IEBA and a signal processing circuit, provides a trigger signal with a rise time of 17 ns and an amplitude of about 5 V. First, the light signal was collected by an avalanche photodiode (APD) module, and the delay time between the output voltage of the APD and the load voltage of the IEBA was measured to be about 35 ns. Subsequently, the output voltage of the APD was processed by the signal processing circuit to obtain the trigger signal. Finally, combining the trigger program with an IEBA, the trigger program operated stably, and a delay time of 30 ns between the trigger signal of the HSFC and the output voltage of the IEBA was obtained. Meanwhile, when surface flashover occurred at a high density polyethylene sample, the delay time between the trigger signal of the HSFC and the flashover current was up to 150 ns, which satisfied the need for synchronization of the HSFC and IEBA. The experimental results proved that the trigger program could compensate for the trigger-signal processing time and the inherent delay time of the HSFC (together called the compensated time).

  6. Measurement of inkjet first-drop behavior using a high-speed camera.

    PubMed

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time can differ from that in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was set to an integer multiple of the jetting frequency so that the droplet locations of each drop can be compared directly to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity.

  7. Analysis mist spray development with Al-75 nozzle by using high speed camera

    NASA Astrophysics Data System (ADS)

    Rahman, M. Faqhrurrazi Bin Abd; Asmuin, Norzelawati; Sies, M. Farid

    2017-04-01

    Spray nozzles are used in industry for cleaning, cutting, and spraying. Nozzles come in many varieties and can usually be classified according to their specific mode of atomization. The present study experimentally investigates the spray development and droplet size of water from the AL-75 nozzle using a high-speed camera. The spray development is divided into five stages, each covering times from 0 milliseconds (ms) to 32 milliseconds (ms). In these experiments, supply pressures of 1 bar, 2 bar, and 3 bar were used for the liquid and 1 bar, 3 bar, and 6 bar for the air. The experimental data were obtained from the release of the mist spray nozzle with 100% water, 90% water mixed with 10% lime (L10W90), and 70% water mixed with 30% lime (L30W70). The results show that at lower pressures the spray takes longer to become fully developed than at higher pressures.

  8. ARINC 818 express for high-speed avionics video and power over coax

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Alexander, Jon

    2012-06-01

    CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3 Gbps (up to 6 Gbps) communication, includes a 21 Mbps return path allowing for bidirectional communication, and provides up to 13 W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus", is a protocol standard developed specifically for high-speed, mission-critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer of CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2 Gbps; beyond 2 Gbps it has been implemented exclusively over fiber optic links. In many rugged applications a copper interface is still desired; implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6 Gbps copper interfaces for ARINC 818. Results of the successful technology demonstration, dubbed ARINC 818 Express, are presented, showing 3 Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.

  9. Practical use of high-speed cameras for research and development within the automotive industry: yesterday and today

    NASA Astrophysics Data System (ADS)

    Steinmetz, Klaus

    1995-05-01

    Within the automotive industry, especially in the development and improvement of safety systems, there are many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam', and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high-speed cameras.

  10. A dual-camera cinematographic PIV measurement system at kilohertz frame rate for high-speed, unsteady flows

    NASA Astrophysics Data System (ADS)

    Bian, Shiyao; Ceccio, Steven L.; Driscoll, James F.

    2010-03-01

    A digital dual-camera cinematographic particle image velocimetry (CPIV) system has been developed to provide time-resolved, high resolution flow measurements in high-Reynolds-number, turbulent flows. Two high-speed CMOS cameras were optically combined to acquire double-pulsed CPIV images at kilohertz frame rates. Bias and random errors due to camera misalignment, camera vibration, and lens aberration were corrected or estimated. Systematic errors due to camera misalignment were reduced to less than 2 pixels throughout the image plane using mechanical alignment, resulting in 3.1% positional uncertainty of the velocity measurements. Frame-to-frame uncertainties caused by mechanical vibration were eliminated with the aid of digital image calibration and frame-to-frame camera registration. This dual-camera CPIV system is capable of resolving high-speed, unsteady flows with high temporal and spatial resolutions. It also allows time intervals between the two exposures down to 4 μs, enabling measurements of flow speeds 5-10 times higher than possible with frame-straddling using similar cameras. A turbulent shallow cavity was then chosen as the test case for this dual-camera CPIV technique.

  11. Calculus migration characterization during Ho:YAG laser lithotripsy by high-speed camera using suspended pendulum method.

    PubMed

    Zhang, Jian James; Rajabhandharaks, Danop; Xuan, Jason Rongwei; Chia, Ray W J; Hasenberg, Thomas

    2017-07-01

    Calculus migration is a common problem during the ureteroscopic laser lithotripsy procedure to treat urolithiasis. Conventional experimental methods to characterize calculus migration utilized a hosting container (e.g., a "V" groove or a test tube). These methods, however, demonstrated large variation and poor detectability, possibly attributable to friction between the calculus and the container on which it was situated. In this study, calculus migration was investigated using a pendulum model suspended underwater to eliminate the aforementioned friction. A high-speed camera was used to study the movement of the calculus, covering zero order (displacement), first order (speed), and second order (acceleration). A commercial pulsed Ho:YAG laser at 2.1 μm, a 365-μm core diameter fiber, and a calculus phantom (Plaster of Paris, 10 × 10 × 10 mm(3)) were utilized to mimic the laser lithotripsy procedure. The phantom was hung on a stainless steel bar and irradiated by the laser at 0.5, 1.0, and 1.5 J energy per pulse at 10 Hz for 1 s (i.e., 5, 10, and 15 W). Movement of the phantom was recorded by a high-speed camera at a frame rate of 10,000 FPS. The video data files were analyzed by a MATLAB program that processed each image frame and extracted the position of the calculus. With a sample size of 10, the maximum displacement was 1.25 ± 0.10, 3.01 ± 0.52, and 4.37 ± 0.58 mm for 0.5, 1, and 1.5 J energy per pulse, respectively. Using the same laser power, the conventional method showed <0.5 mm total displacement. When the phantom size was reduced to 5 × 5 × 5 mm(3) (one eighth in volume), the displacement was very inconsistent. The results suggested that using the pendulum model to eliminate friction improved the sensitivity and repeatability of the experiment. A detailed investigation of calculus movement and other causes of experimental variation will be conducted as a future study.
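    The zero-, first-, and second-order quantities described above can be recovered from per-frame centroid positions by finite differences. The study used a MATLAB program; the sketch below is an equivalent Python illustration, with all names assumed.

```python
import numpy as np

def kinematics(positions, fps):
    """Displacement, speed, and acceleration of a tracked object from its
    per-frame positions, using central finite differences (np.gradient).
    positions: 1-D sequence of positions along the motion axis, one per
    frame; fps: camera frame rate (10,000 FPS in the study)."""
    dt = 1.0 / fps
    x = np.asarray(positions, dtype=float)
    v = np.gradient(x, dt)      # first derivative: speed
    a = np.gradient(v, dt)      # second derivative: acceleration
    return x - x[0], v, a       # displacement relative to the start
```

At 10,000 FPS the differencing interval is 0.1 ms, so millimetre-scale pendulum motion resolves into smooth velocity and acceleration curves.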

  12. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
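    The bilinear interpolation mentioned above models the electronic distortion over an image patch from its values at the four corners. A minimal sketch, with the corner ordering and names assumed, is:

```python
def bilinear_correct(x, y, corners):
    """Bilinear model of electronic distortion over a unit image patch.
    corners holds the measured distortion at the four corners, ordered
    (x0y0, x1y0, x0y1, x1y1); returns the interpolated correction at the
    normalized coordinates (x, y) in [0, 1] x [0, 1]."""
    d00, d10, d01, d11 = corners
    return (d00 * (1 - x) * (1 - y) + d10 * x * (1 - y)
            + d01 * (1 - x) * y + d11 * x * y)
```

By construction the model reproduces the measured corner values exactly and varies linearly along each patch edge, which is why it captures smooth electronic distortions well while the separate plumb-line step handles optical lens distortion.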

  13. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  14. The application of high-speed camera for analysis of chip creation process during the steel turning

    NASA Astrophysics Data System (ADS)

    Struzikiewicz, Grzegorz

    2016-09-01

    The paper presents the results of application of the high-speed camera Phantom v5.2 and Tracker program for the analysis of chip forming in the case of the AMS6265 steel turning. The experimental research was carried for two cutting speeds and different wear of cutting inserts.

  15. High-speed light field camera and frequency division multiplexing for fast multi-plane velocity measurements.

    PubMed

    Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen

    2015-09-21

    Non-intrusive fast 3d measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements with high data rates are possible. While scanning is one standard technique to add the third dimension, the volumetric data is not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired with a measurement rate of 0.5 kHz with a single high-speed camera.
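The frequency division multiplexing idea, tagging each measurement plane with its own carrier so a single detector signal can be split into per-plane contributions, can be illustrated with a toy spectral demultiplexer (the paper's actual demodulation of the sinusoidally frequency-modulated laser is more involved):

```python
import numpy as np

def demux_fdm(signal, fs, carriers, bw=5.0):
    """Recover the amplitude assigned to each carrier frequency from a
    summed detector signal, using a narrow mask in the FFT domain.
    signal: summed time series; fs: sample rate (Hz);
    carriers: per-plane carrier frequencies (Hz)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    amps = []
    for fc in carriers:
        mask = np.abs(freqs - fc) <= bw
        # single-sided FFT: amplitude = 2 |X(f)| / N at the carrier bin
        amps.append(2.0 * np.abs(spec[mask]).max() / len(signal))
    return amps

# Two "planes" tagged at 1 kHz and 2 kHz, summed on one detector
fs = 10_000
t = np.arange(fs) / fs          # 1 s of samples
sig = 0.8 * np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
a1, a2 = demux_fdm(sig, fs, [1000, 2000])
```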

  16. Characterizing highly correlated video traffic in high-speed asynchronous transfer mode networks

    NASA Astrophysics Data System (ADS)

    Shroff, Ness; Schwartz, Mischa

    1996-04-01

    The enormous bandwidth potential of optical fiber has resulted in a worldwide effort to develop high-speed ATM networks, also called broadband integrated services digital networks (B-ISDN). Many of the applications that ATM networks will support will have a strong video component to them. Hence, it is important to understand the behavior of video traffic as it travels through these networks. To that end, we develop the generalized histogram model (GHM) to characterize 'highly correlated' traffic, such as motion JPEG or 'smoothed' MPEG traffic over ATM networks end-to-end. Using our GHM model we show how to determine the loss rate at any node in an ATM network. We find that, for highly correlated video sources, increasing the buffer size beyond a certain region called the 'cell region' only marginally decreases the probability of loss. This implies that large buffers cannot be used to control the loss for such sources. The analytical model provided in this paper can be used for admission control, and network dimensioning and design in ATM networks. We have validated our results using simulations of real traces of video sources.
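The qualitative finding, that beyond the "cell region" larger buffers barely reduce loss for strongly correlated sources, can be reproduced with a small discrete-time simulation. This is an illustrative on/off burst source, not the GHM model itself:

```python
import random

def loss_rate(buffer_size, n_slots=200_000, seed=7):
    """Fraction of cells lost at a finite FIFO buffer served at one cell
    per slot and fed by a two-state on/off source. The rare state flips
    (p = 0.01) give long bursts, mimicking highly correlated video
    traffic; this is an illustrative toy, not the paper's GHM model."""
    rng = random.Random(seed)
    on, q, lost, arrived = False, 0, 0, 0
    for _ in range(n_slots):
        if rng.random() < 0.01:          # long on/off sojourn times
            on = not on
        arrivals = 2 if on else 0        # burst rate exceeds service rate
        arrived += arrivals
        for _ in range(arrivals):
            if q < buffer_size:
                q += 1
            else:
                lost += 1                # buffer overflow -> cell loss
        if q:
            q -= 1                       # serve one cell per time slot
    return lost / max(arrived, 1)

small_buf = loss_rate(5)
large_buf = loss_rate(500)   # 100x the buffer, same arrival sample path
```

Because bursts last on the order of a hundred slots, even the 100-fold larger buffer still overflows during long bursts; the loss probability does not fall proportionally with buffer size.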

  17. Lifetime and structures of TLEs captured by high-speed camera on board aircraft

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sanmiya, Y.; Sato, M.; Kudo, T.; Inoue, T.

    2012-12-01

    The temporal development of sprite streamers is a manifestation of the local electric field and conductivity. Therefore, in order to understand the mechanisms of sprites, which show a large variety of temporal and spatial structures, detailed analysis of both fine and macro-structures with high time resolution is a key approach. However, due to the long distance from the optical equipment to the phenomena and to contamination by aerosols, it is not easy to obtain clear images of TLEs from the ground. In the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of the NHK Cosmic Shore project, was carried out with two jet airplanes in collaboration between NHK, Japan Broadcasting Corporation, and universities. On 8 nights out of 16 on standby, the jets took off from the airport near Denver, Colorado, and an airborne high-speed camera captured over 60 TLE events at a frame rate of 8,000-10,000 frames per second. Some of them show several tens of streamers in one sprite event, which repeatedly split at the downward-going ends of streamers or beads. The velocities of the bottom ends and the variations of their brightness were traced carefully. It was found that the top velocity is maintained only for the brightest beads, and the others slow down just after splitting. Also, the whole luminosity of one sprite event has a short duration with rapid downward motion if the charge moment change of the parent lightning is large. The relationship between diffuse glows, such as elves and sprite halos, and the subsequent discrete structure of sprite streamers is also examined. In most cases the halo and elves seem to show inhomogeneous structures before being accompanied by streamers, which develop into bright spots or streamers with accelerating velocity. These characteristics of the velocity and lifetime of TLEs provide key information on their generation mechanism.

  18. An Impact Velocity Device Design for Blood Spatter Pattern Generation with Considerations for High-Speed Video Analysis.

    PubMed

    Stotesbury, Theresa; Illes, Mike; Vreugdenhil, Andrew J

    2016-03-01

    A mechanical device that uses gravitational and spring compression forces to create spatter patterns of known impact velocities is presented and discussed. The custom-made device uses either two or four springs (k1 = 267.8 N/m, k2 = 535.5 N/m) in parallel to create seventeen reproducible impact velocities between 2.1 and 4.0 m/s. The impactor is held at several known spring extensions using an electromagnet. Trigger inputs to the high-speed video camera allow the user to control the magnet's release while capturing video footage simultaneously. A polycarbonate base is used to allow for simultaneous monitoring of the side and bottom views of the impact event. Twenty-four patterns were created across the impact velocity range and analyzed using HemoSpat. Area of origin estimations fell within an acceptable range (ΔXav = -5.5 ± 1.9 cm, ΔYav = -2.6 ± 2.8 cm, ΔZav = +5.5 ± 3.8 cm), supporting distribution analysis for the use in research or bloodstain pattern training. This work provides a framework for those interested in developing a robust impact device. © 2015 American Academy of Forensic Sciences.
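The known impact velocities follow from energy conservation over the spring extension and free drop. A sketch in which the impactor mass, extension, and drop height are invented for illustration (only the spring constants come from the abstract):

```python
import math

def impact_velocity(mass, k_total, extension, drop, g=9.81):
    """Impact speed from energy conservation: spring potential
    (k x^2 / 2) plus gravitational drop (m g h) become kinetic energy
    (m v^2 / 2), so  v = sqrt(k x^2 / m + 2 g h).
    Springs in parallel add, so k_total is the sum of the constants."""
    return math.sqrt(k_total * extension**2 / mass + 2 * g * drop)

# Hypothetical numbers: 1 kg impactor, two k1 = 267.8 N/m springs in
# parallel, 10 cm electromagnet-held extension, 5 cm free drop
v = impact_velocity(mass=1.0, k_total=2 * 267.8, extension=0.10, drop=0.05)
```

Varying the held extension over a set of discrete positions is what yields the seventeen reproducible impact velocities in the 2.1-4.0 m/s range.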

  19. Measurement of steady and transient liquid coiling with high-speed video and digital image processing

    NASA Astrophysics Data System (ADS)

    Mier, Frank Austin; Bhakta, Raj; Castano, Nicolas; Thackrah, Joshua; Marquis, Tyler; Garcia, John; Hargather, Michael

    2016-11-01

    Liquid coiling occurs as a gravitationally-accelerated viscous fluid flows into a stagnant reservoir causing a localized accumulation of settling material, commonly designated as stack. This flow is broadly characterized by a vertical rope of liquid, the tail, flowing into the stack in a coiled motion with frequency defined parametrically within four different flow regimes. These regimes are defined as viscous, gravitational, inertial-gravitational, and inertial. Relations include parameters such as flow rate, drop height, rope radius, gravitational acceleration, and kinematic viscosity. While previous work on the subject includes high speed imaging, only basic and often averaged measurements have been taken by visual inspection of images. Through the implementation of additional image processing routines in MATLAB, time resolved measurements are taken on coiling frequency, tail diameter, stack diameter and height. Synchronization between a high speed camera and stepper motor driven syringe pump provides accurate correlation with flow rate. Additionally, continuous measurement of unsteady transition between flow regimes is visualized and quantified. This capability allows a deeper experimental understanding of processes involved in the liquid coiling phenomenon.
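The time-resolved coiling frequency measurement can be sketched as spectral analysis of any periodic per-frame scalar, for instance the mean intensity of a region the rotating rope sweeps through. A minimal sketch, not the authors' MATLAB routines:

```python
import numpy as np

def coil_frequency(signal, fps):
    """Dominant coiling frequency from a periodic per-frame scalar:
    peak of the FFT magnitude spectrum after removing the DC term."""
    sig = np.asarray(signal, dtype=float)
    sig -= sig.mean()                      # drop the DC component
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic check: 12 Hz coiling filmed at 1000 fps for 2 s
fps = 1000
t = np.arange(2 * fps) / fps
f_est = coil_frequency(np.sin(2 * np.pi * 12.0 * t), fps)
```

Applying the estimator over a sliding window would give the time-resolved frequency needed to quantify unsteady transitions between flow regimes.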

  20. High-speed video and electric field observation of upward flashes in Brazil

    NASA Astrophysics Data System (ADS)

    Saba, Marcelo M. F.; Schumann, Carina; Ferro, Marco A. S.; Paiva, Amanda R.; Jaques, Robson; Warner, Tom A.

    2015-04-01

    Upward flashes from tall towers in Brazil have been observed since January 2012. They have been responsible for damage to equipment installed near the tall structures that caused their initiation. Almost all upward flashes were observed with high-speed cameras and electric field sensors, a combination of measurements that provides a very accurate classification and characterization of their properties. Although present during all seasons, upward flashes are predominant during summer. They are almost always initiated by a preceding positive cloud-to-ground flash. This study is based on an up-to-date database of 86 upward flashes observed during the last three years. The main characteristics described in this work are: the time interval between the triggering event and upward leader initiation, characteristics of the triggering +CG flashes, upward leader characteristics (polarity, presence of recoil leaders, and branching), initial continuous current (duration, presence of pulses and recoil leaders), flash duration, and the presence of subsequent return strokes.

  1. Laboratory Calibration and Characterization of Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1989-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of non-perpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.
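The displaced-reticle measurement amounts to fitting a linear map from known physical displacements to measured pixel displacements, then reading the pixel spacings and axis angle from that map. A hedged sketch with synthetic data, not the NTRS procedure itself:

```python
import numpy as np

def grabber_geometry(phys, pix):
    """Estimate horizontal/vertical pixel spacing (mm/pixel) and the
    angle between the pixel axes from corresponding known physical
    reticle displacements `phys` (N x 2, mm) and measured image
    displacements `pix` (N x 2, pixels), via a least-squares 2x2
    linear fit  pix = phys @ M."""
    M, *_ = np.linalg.lstsq(np.asarray(phys, float),
                            np.asarray(pix, float), rcond=None)
    ex, ey = M[0], M[1]            # pixel-space images of unit mm steps
    spacing_x = 1.0 / np.linalg.norm(ex)   # mm per pixel along x
    spacing_y = 1.0 / np.linalg.norm(ey)   # mm per pixel along y
    cosang = ex @ ey / (np.linalg.norm(ex) * np.linalg.norm(ey))
    return spacing_x, spacing_y, np.degrees(np.arccos(cosang))

# Synthetic: 0.01 mm/px horizontally, 0.012 mm/px vertically, 90° axes
phys = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
pix = phys @ np.array([[100.0, 0.0], [0.0, 1 / 0.012]])
sx, sy, ang = grabber_geometry(phys, pix)
```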

  2. Laboratory calibration and characterization of video cameras

    NASA Astrophysics Data System (ADS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1990-08-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

  3. High speed video analysis of rockfall fence system evaluation. Final report

    SciTech Connect

    Fry, D.A.; Lucero, J.P.

    1998-07-01

    Rockfall fence systems are used to protect motorists from rocks, dislodged from slopes near roadways, which would potentially roll onto the road at high speeds carrying significant energy. There is an unfortunate list of such rocks on unprotected roads that have caused fatalities and other damage. Los Alamos National Laboratory (LANL) personnel from the Engineering Science and Applications Division, Measurement Technology Group (ESA-MT), participated in a series of rockfall fence system tests at a test range in Rifle, Colorado during March 1998. The tests were for the evaluation and certification of four rockfall fence system designs of Chama Valley Manufacturing (CVM), a Small Business, located in Chama, New Mexico. Also participating in the tests were the Colorado Department of Transportation (CDOT) who provided the test range and some heavy equipment support and High Tech Construction who installed the fence systems. LANL provided two high speed video systems and operators to record each individual rockfall on each fence system. From the recordings LANL then measured the linear and rotational velocities at impact for each rockfall. Using the LANL velocity results, CVM then could calculate the impact energy of each rockfall and therefore certify each design up to the maximum energy that each fence system could absorb without failure. LANL participated as an independent, impartial velocity measurement entity only and did not contribute to the fence systems design or installation. CVM has published a more detailed final report covering all aspects of the project.

  4. High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques.

    PubMed

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-12-03

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras(1). The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in(1-5)). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame(6,7). Binary defocusing DFP methods can achieve even greater speeds(8). Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis(9), facial animation(10), cardiac mechanics studies(11), and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
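A standard concrete instance of the three-pattern depth recovery described above is three-step phase shifting with 2π/3 shifts between the projected sinusoids: the wrapped phase, later unwrapped and triangulated to depth, follows in closed form from the three fringe images.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with 2*pi/3 phase shifts,
    the standard three-step phase-shifting formula:
        phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringes: I_k = A + B*cos(phi + delta_k), delta = -2pi/3, 0, 2pi/3
phi_true = 0.7
A, B = 128.0, 100.0
i1 = A + B * np.cos(phi_true - 2 * np.pi / 3)
i2 = A + B * np.cos(phi_true)
i3 = A + B * np.cos(phi_true + 2 * np.pi / 3)
phi = three_step_phase(np.array([i1]), np.array([i2]), np.array([i3]))[0]
```

In a real DFP system `i1`-`i3` are full camera frames, so the same formula runs per pixel and yields a dense wrapped-phase map per three captured frames.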

  5. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  7. Microscopic observations of riming on an ice surface using high speed video

    NASA Astrophysics Data System (ADS)

    Emersic, C.; Connolly, P. J.

    2017-03-01

    Microscopic droplet riming events on an ice surface have been observed using high speed video. Observations included greater propensity for droplet spreading at temperatures higher than -15 °C on flatter ice surfaces, and subsequently, the formation of growing rime spires into the flow, allowing glancing droplet collisions and more spherical freezing of smaller droplets. Insight into differences between laboratory observations of the Hallett-Mossop process is offered, relating to the nature of droplet spreading associated with the structure of the rimer surface prior to impact. Observations of a difference between air speed and resulting droplet impact speed on an ice surface may affect interpretations of riming laboratory studies, and may explain recent observations of a high secondary ice production rate in supercooled layer clouds.

  8. Movement of fine particles on an air bubble surface studied using high-speed video microscopy.

    PubMed

    Nguyen, Anh V; Evans, Geoffrey M

    2004-05-01

    A CCD high-speed video microscopy system operating at 1000 frames per second was used to obtain direct quantitative measurements of the trajectories of fine glass spheres on the surface of air bubbles. The glass spheres were rendered hydrophobic by a methylation process. Rupture of the intervening water film between a hydrophobic particle and an air bubble with the consequent formation of a three-phase contact was observed. The bubble-particle sliding attachment interaction is not satisfactorily described by the available theories. Surface forces had little effect on the particle sliding with a water film, which ruptured probably due to the submicrometer-sized gas bubbles existing at the hydrophobic particle-water interface.

  9. Audio extraction from silent high-speed video using an optical technique

    NASA Astrophysics Data System (ADS)

    Wang, Zhaoyang; Nguyen, Hieu; Quisberth, Jason

    2014-11-01

    It is demonstrated that audio information can be extracted from silent high-speed video with a simple and fast optical technique. The basic principle is that the sound waves can stimulate objects encountered in the traveling path to vibrate. The vibrations, although usually with small amplitudes, can be detected by using an image matching process. The proposed technique applies a subset-based image correlation approach to detect the motions of points on the surface of an object. It employs the Gauss-Newton algorithm and a few other measures to achieve very fast and highly accurate image matching. Because the detected vibrations are directly related to the sound waves, a simple model is introduced to reconstruct the original audio information of the sound waves. The proposed technique is robust and easy to implement, and its effectiveness has been verified by experiments.
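The core of the technique, converting per-frame image displacements into an audio waveform, can be illustrated with a simple integer-pixel cross-correlation tracker. This is a stand-in for the paper's subset-based, Gauss-Newton-refined matching, which resolves subpixel motion:

```python
import numpy as np

def shift_1d(ref, cur):
    """Integer displacement of `cur` relative to `ref` via the peak of
    their cross-correlation (a crude stand-in for subset-based image
    matching with Gauss-Newton refinement)."""
    corr = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
    return np.argmax(corr) - (len(ref) - 1)

# Synthetic "frames": a sharp surface feature vibrating with period 5;
# the recovered shift series is the (unscaled) audio waveform
ref = np.zeros(64)
ref[30] = 1.0
shifts = []
for k in range(20):
    d = int(round(3 * np.sin(2 * np.pi * k / 5)))  # vibration in pixels
    shifts.append(shift_1d(ref, np.roll(ref, d)))
```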

  10. High-speed video observations of positive ground flashes produced by intracloud lightning

    NASA Astrophysics Data System (ADS)

    Saba, Marcelo M. F.; Campos, Leandro Z. S.; Krider, E. Philip; Pinto, Osmar

    2009-06-01

    High-speed video recordings of two lightning flashes confirm that positive cloud-to-ground (CG) strokes can be produced by extensive horizontal intracloud (IC) discharges within and near the cloud base. These recordings constitute the first observations of CG leaders emanating from IC discharges of either polarity. In one case, the discharge began with a negative leader that propagated horizontally, went upward and produced an IC discharge. After the beginning of the IC discharge, a positive leader emanated from the lowest portion of the IC discharge, and initiated a positive return stroke. In the other case, the IC discharge began with a positive leader and then initiated a downward-propagating positive leader that contained recoil processes and produced a bright return stroke followed by a long continuing luminosity. These observations help to understand the complex genesis of positive CG flashes, why IC lightning commonly precedes them and why extensive horizontal channels are often involved.

  11. High-speed video analysis of wing-snapping in two manakin clades (Pipridae: Aves).

    PubMed

    Bostwick, Kimberly S; Prum, Richard O

    2003-10-01

    Both the basic kinematics and the detailed physical mechanisms of avian non-vocal sound production are unknown. Here, for the first time, field-generated high-speed video recordings and acoustic analyses are used to test numerous competing hypotheses of the kinematics underlying sonations, or non-vocal communicative sounds, produced by two genera of Pipridae, Manacus and Pipra (Aves). Eleven behaviorally and acoustically distinct sonations are characterized, five of which fall into a specific acoustic class of relatively loud, brief, broad-frequency sound pulses, or snaps. The hypothesis that one kinematic mechanism of snap production is used within and between birds in general, and manakins specifically, is rejected. Instead, it is verified that three of four competing hypotheses of the kinematic mechanisms used for producing snaps, namely (1) above-the-back wing-against-wing claps, (2) wing-against-body claps, and (3) wing-into-air flicks, are employed between these two clades, and a fourth mechanism, (4) wing-against-tail feather interactions, is discovered. The kinematic mechanism used to produce each snap is invariable within each identified sonation, despite the fact that a diversity of kinematic mechanisms is used among sonations. The other six sonations described are produced by kinematic mechanisms distinct from those used to create snaps, but are difficult to distinguish from each other and from the kinematics of flight. These results provide the first detailed kinematic information on mechanisms of sonation in birds in general, and in the Pipridae specifically. Further, these results provide the first evidence that acoustically similar avian sonations, such as brief, broad-frequency snaps, can be produced by diverse kinematic means, both among and within species. The use of high-speed video recordings in the field in a comparative manner documents the diversity of kinematic mechanisms used to sonate, and uncovers a hidden, sexually selected radiation of

  12. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  13. Direct observation of pH-induced coalescence of latex-stabilized bubbles using high-speed video imaging.

    PubMed

    Ata, Seher; Davis, Elizabeth S; Dupin, Damien; Armes, Steven P; Wanless, Erica J

    2010-06-01

    The coalescence of pairs of 2 mm air bubbles grown in a dilute electrolyte solution containing a lightly cross-linked 380 nm diameter PEGMA-stabilized poly(2-vinylpyridine) (P2VP) latex was monitored using a high-speed video camera. The air bubbles were highly stable at pH 10 when coated with this latex, although coalescence could be induced by increasing the bubble volume when in contact. Conversely, coalescence was rapid when the bubbles were equilibrated at pH 2, since the latex undergoes a latex-to-microgel transition and the swollen microgel particles are no longer adsorbed at the air-water interface. Rapid coalescence was also observed for latex-coated bubbles equilibrated at pH 10 and then abruptly adjusted to pH 2. Time-dependent postrupture oscillations in the projected surface area of coalescing P2VP-coated bubble pairs were studied using a high-speed video camera in order to reinvestigate the rapid acid-induced catastrophic foam collapse previously reported [Dupin, D.; et al. J. Mater. Chem. 2008, 18, 545]. At pH 10, the P2VP latex particles adsorbed at the surface of coalescing bubbles reduce the oscillation frequency significantly. This is attributed to a close-packed latex monolayer, which increases the bubble stiffness and hence restricts surface deformation. The swollen P2VP microgel particles that are formed in acid also affected the coalescence dynamics. It was concluded that there was a high concentration of swollen microgel at the air-water interface, which created a localized, viscous surface gel layer that inhibited at least the first period of the surface area oscillation. Close comparison between latex-coated bubbles at pH 10 and those coated with 66 μm spherical glass beads indicated that the former system exhibits more elastic behavior. This was attributed to the compressibility of the latex monolayer on the bubble surface during coalescence. A comparable elastic response was observed for similar sized titania particles, suggesting

  14. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and with programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  15. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  16. High-Speed Video Observations of Upward Leaders from Tall Towers

    NASA Astrophysics Data System (ADS)

    Warner, T. A.; Mazur, V.; Ruhnke, L.

    2008-12-01

    High-speed (7,200 frames per sec) video observations of upward leaders from several tall towers, of heights greater than 150 m in Rapid City, South Dakota, revealed a variety of processes associated with the development of these upward leaders which were previously unknown from studies of rocket-triggered lightning. Confirmed by the NLDN data, and also by immediately preceding video images, all upward leaders were triggered either by the approaching negative leaders of intracloud or negative cloud-to-ground flashes, or by the return strokes of positive cloud-to-ground flashes. This indicates the positive polarity of the upward leaders. Following the progression of the branched upward positive leaders, recoil leaders retraced parts of the decayed channels of the forked structure toward the stems of the positive leaders. Some of these recoil leaders were extremely powerful, based on saturating bright channel luminosity. A few upward leaders developed as single channels without visible branches. In those cases recoil leaders manifested themselves as pulsing channel luminosities. On a few occasions, the positive upward leaders were followed, after the visible current cut-off from the ground, by a series of dart leader-return strokes of a negative cloud-to-ground flash, which is a development well established by the studies of rocket-triggered lightning.

  17. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition.

    PubMed

    Sun, Ryan; Bouchard, Matthew B; Hillman, Elizabeth M C

    2010-08-02

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software.
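Because the LEDs are strobed in a repeating sequence locked to the camera trigger, every n-th frame belongs to the same wavelength, so recovering per-wavelength image stacks is a de-interleaving step. A sketch of that bookkeeping only, not SPLASSH's actual API:

```python
def demultiplex(frames, n_channels):
    """De-interleave a strobed multispectral frame stream: with LEDs
    fired in a repeating sequence (LED 0, LED 1, ..., LED n-1) locked
    to the camera trigger, frame k belongs to channel k mod n.
    Hypothetical helper; not part of the SPLASSH package."""
    return [frames[k::n_channels] for k in range(n_channels)]

# Six frames captured under two alternating LEDs (green, red, green, ...)
channels = demultiplex(["g0", "r0", "g1", "r1", "g2", "r2"], 2)
```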

  18. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  19. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.
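    A forward-model sketch of per-pixel coded exposure as described above: during one camera frame of S sub-frame slots, the DMD lets each pixel integrate light only in the slots where its binary code is 1, so a single low-rate readout mixes temporal information that a decoder can later separate. The code pattern and scene below are illustrative; the paper's reconstruction step is not reproduced.

```python
import numpy as np

S = 4                                            # sub-frames per camera frame
scene = np.random.default_rng(2).random((S, 6, 6))   # fast scene, S time slices
codes = np.zeros((S, 6, 6))
for s in range(S):                               # one "open" slot per pixel, tiled
    codes[s] = (np.add.outer(np.arange(6), np.arange(6)) % S) == s

frame = (codes * scene).sum(axis=0)              # what the sensor actually reads
print(frame.shape)                               # (6, 6): one coded frame holding 4 time slots
```

    With this tiled code, neighboring pixels sample different sub-frame slots, which is what allows the temporal resolution to exceed the readout rate.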

  20. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023

  1. Eulerian frequency analysis of structural vibrations from high-speed video

    NASA Astrophysics Data System (ADS)

    Venanzoni, Andrea; De Ryck, Laurent; Cuenca, Jacques

    2016-06-01

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale - or level - can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content
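    The core Eulerian idea above, analyzing each pixel's time history independently to obtain the global frequency content, can be sketched in a few lines. The synthetic video and frame rate below are illustrative; the Laplacian-pyramid decomposition and angular filtering of the paper are omitted.

```python
import numpy as np

# Eulerian frequency analysis sketch: take each pixel's gray-level time
# history, Fourier-transform it, and sum the per-pixel amplitude spectra
# to get the global frequency content of the video.
fps = 200
t = np.arange(400) / fps                         # 2 s of "video"
video = np.zeros((400, 8, 8))
video[:, 2:6, 2:6] = np.sin(2 * np.pi * 12 * t)[:, None, None]  # 12 Hz patch

spectrum = np.abs(np.fft.rfft(video, axis=0))    # per-pixel amplitude spectra
freqs = np.fft.rfftfreq(video.shape[0], d=1 / fps)
global_spectrum = spectrum.sum(axis=(1, 2))      # global frequency content
dominant = freqs[global_spectrum[1:].argmax() + 1]   # skip the DC bin
print(dominant)                                  # 12.0
```

    A per-pixel map of the dominant frequency, as in the second tool, would take the argmax along axis 0 instead of summing over the image.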

  2. Eulerian frequency analysis of structural vibrations from high-speed video

    SciTech Connect

    Venanzoni, Andrea; De Ryck, Laurent; Cuenca, Jacques

    2016-06-28

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale — or level — can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content

  3. Determining aerodynamic coefficients from high speed video of a free-flying model in a shock tunnel

    NASA Astrophysics Data System (ADS)

    Neely, Andrew J.; West, Ivan; Hruschka, Robert; Park, Gisu; Mudford, Neil R.

    2008-11-01

    This paper describes the application of the free flight technique to determine the aerodynamic coefficients of a model for the flow conditions produced in a shock tunnel. Sting-based force measurement techniques either lack the required temporal response or are restricted to large complex models. Additionally the free flight technique removes the flow interference produced by the sting that is present for these other techniques. Shock tunnel test flows present two major challenges to the practical implementation of the free flight technique. These are the millisecond-order duration of the test flows and the spatial and temporal nonuniformity of these flows. These challenges are overcome by the combination of an ultra-high speed digital video camera to record the trajectory, with spatial and temporal mapping of the test flow conditions. Use of a lightweight model ensures sufficient motion during the test time. The technique is demonstrated using the simple case of drag measurement on a spherical model, free flown in a Mach 10 shock tunnel condition.
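    Once the high-speed video yields the model's acceleration during the test time, the drag coefficient follows from Newton's second law and the free-stream conditions. A minimal sketch, in which the mass, diameter, flow properties and fitted acceleration are all illustrative numbers, not values from the paper:

```python
import math

# Drag-coefficient extraction from a free-flight trajectory, assuming the
# displacement during the short test window follows x = 0.5 * a * t**2
# (approximately constant drag force). All numbers are illustrative.
m = 2.0e-3            # model mass, kg (lightweight sphere)
d = 10.0e-3           # sphere diameter, m
rho = 0.02            # free-stream density, kg/m^3
u = 1500.0            # free-stream velocity, m/s
a = 500.0             # acceleration fitted from the video trajectory, m/s^2

A = math.pi * (d / 2) ** 2            # reference (frontal) area
Cd = 2 * m * a / (rho * u ** 2 * A)   # Cd from F = m*a = 0.5*rho*u^2*A*Cd
print(round(Cd, 3))
```

    In practice the spatial and temporal nonuniformity of the shock-tunnel flow means rho and u must themselves be mapped over the trajectory, as the abstract notes.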

  4. High-Speed Video-Oculography for Measuring Three-Dimensional Rotation Vectors of Eye Movements in Mice

    PubMed Central

    Takeda, Noriaki; Uno, Atsuhiko; Inohara, Hidenori; Shimada, Shoichi

    2016-01-01

    Background The mouse is the most commonly used animal model in biomedical research because of recent advances in molecular genetic techniques. Studies related to eye movement in mice are common in fields such as ophthalmology relating to vision, neuro-otology relating to the vestibulo-ocular reflex (VOR), neurology relating to the cerebellum’s role in movement, and psychology relating to attention. Recording eye movements in mice, however, is technically difficult. Methods We developed a new algorithm for analyzing the three-dimensional (3D) rotation vector of eye movement in mice using high-speed video-oculography (VOG). The algorithm made it possible to analyze the gain and phase of VOR using the eye’s angular velocity around the axis of eye rotation. Results When mice were rotated at 0.5 Hz and 2.5 Hz around the earth’s vertical axis with their heads in a 30° nose-down position, the vertical components of their left eye movements were in phase with the horizontal components. The VOR gain was 0.42 at 0.5 Hz and 0.74 at 2.5 Hz, and the phase lead of the eye movement against the turntable was 16.1° at 0.5 Hz and 4.88° at 2.5 Hz. Conclusions To the best of our knowledge, this is the first report of this algorithm being used to calculate a 3D rotation vector of eye movement in mice using high-speed VOG. We developed a technique for analyzing the 3D rotation vector of eye movements in mice with a high-speed infrared CCD camera. We concluded that the technique is suitable for analyzing eye movements in mice. We also include a C++ source code that can calculate the 3D rotation vectors of the eye position from two-dimensional coordinates of the pupil and the iris freckle in the image to this article. PMID:27023859
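    A much-simplified sketch of the geometric step behind video-oculography: converting a tracked pupil-center displacement on the image into rotation angles. The published method uses pupil and iris-freckle coordinates to obtain the full 3D rotation vector (with accompanying C++ source); this toy version assumes a spherical eyeball of radius r and recovers only the horizontal and vertical angles.

```python
import math

def rotation_angles(dx_mm, dy_mm, r_mm=1.7):
    """Rotation angles (deg) from pupil-center displacement on the image.

    r_mm is an assumed effective eyeball radius for a mouse; dx/dy are
    displacements of the pupil center measured in the image plane.
    """
    return (math.degrees(math.asin(dx_mm / r_mm)),
            math.degrees(math.asin(dy_mm / r_mm)))

h, v = rotation_angles(0.3, 0.0)     # 0.3 mm horizontal pupil shift
print(round(h, 1))                   # roughly 10 degrees for these numbers
```

    The full 3D treatment additionally tracks torsion from the iris freckle and composes the components into a rotation vector, which this sketch does not attempt.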

  5. High-performance digital color video camera

    NASA Astrophysics Data System (ADS)

    Parulski, Kenneth A.; D'Luna, Lionel J.; Benamati, Brian L.; Shelley, Paul R.

    1992-01-01

    Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique and two full-custom CMOS digital video processing integrated circuits, the color filter array (CFA) processor and the RGB postprocessor. The system used a 768 X 484 active element interline transfer CCD with a new field-staggered 3G color filter pattern and a lenslet overlay, which doubles the sensitivity of the camera. The industrial-quality digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB postprocessor digital integrated circuit includes a color correction matrix, gamma correction, 2D edge enhancement, and circuits to control the black balance, lens aperture, and focus.
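    Two of the pipeline stages named above, CFA interpolation and gamma correction, can be sketched simply. A half-resolution binning demosaic on a standard 2x2 RGGB Bayer cell is assumed here for illustration; the camera described actually uses a proprietary field-staggered 3G filter pattern and full-resolution interpolation.

```python
import numpy as np

def demosaic_rggb(cfa):
    """Half-resolution binning demosaic of an RGGB Bayer mosaic (illustrative)."""
    rgb = np.zeros((cfa.shape[0] // 2, cfa.shape[1] // 2, 3))
    rgb[..., 0] = cfa[0::2, 0::2]                            # R sites
    rgb[..., 1] = 0.5 * (cfa[0::2, 1::2] + cfa[1::2, 0::2])  # average of 2 G sites
    rgb[..., 2] = cfa[1::2, 1::2]                            # B sites
    return rgb

def gamma_correct(rgb, gamma=2.2):
    """Display gamma correction on [0, 1] linear values."""
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)

cfa = np.array([[0.8, 0.4],
                [0.4, 0.2]])          # one RGGB cell of raw sensor values
rgb = gamma_correct(demosaic_rggb(cfa))
print(rgb.shape)                      # (1, 1, 3)
```

    The hardware CFA processor performs the equivalent interpolation with fixed-point arithmetic, alongside black clamping, defect correction and white balance.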

  6. High-performance digital color video camera

    NASA Astrophysics Data System (ADS)

    Parulski, Kenneth A.; Benamati, Brian L.; D'Luna, Lionel J.; Shelley, Paul R.

    1991-06-01

    Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique, and two full custom CMOS digital video processing ICs, the 'CFA processor' and the 'RGB post- processor.' The system uses a 768 X 484 active element interline transfer CCD with a new 'field-staggered 3G' color filter pattern and a 'lenslet' overlay, which doubles the sensitivity of the camera. The digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB post-processor digital IC includes a color correction matrix, gamma correction, two-dimensional edge-enhancement, and circuits to control the black balance, lens aperture, and focus.

  7. High-speed video observations of the fine structure of a natural negative stepped leader at close distance

    NASA Astrophysics Data System (ADS)

    Qi, Qi; Lu, Weitao; Ma, Ying; Chen, Luwen; Zhang, Yijun; Rakov, Vladimir A.

    2016-09-01

    We present new high-speed video observations of a natural downward negative lightning flash that occurred at a close distance of 350 m. The stepped leader of this flash was imaged by three high-speed video cameras operating at framing rates of 1000, 10,000 and 50,000 frames per second, respectively. Synchronized electromagnetic field records were also obtained. Nine pronounced field pulses which we attributed to individual leader steps were recorded. The time intervals between the step pulses ranged from 13.9 to 23.9 μs, with a mean value of 17.4 μs. Further, for the first time, smaller pulses were observed between the pronounced step pulses in the magnetic field derivative records. Time intervals between the smaller pulses (indicative of intermittent processes between steps) ranged from 0.9 to 5.5 μs, with a mean of 2.2 μs and a standard deviation of 0.82 μs. A total of 23 luminous segments, commonly attributed to space stems/leaders, were captured. Their two-dimensional lengths varied from 1 to 13 m, with a mean of 5 m. The distances between the luminous segments and the existing leader channels ranged from 1 to 8 m, with a mean value of 4 m. Three possible scenarios of the evolution of space stems/leaders located beside the leader channel have been inferred: (A) the space stem/leader fails to make connection to the leader channel; (B) the space stem/leader connects to the existing leader channel, but may die off and be followed, tens of microseconds later, by a low luminosity streamer; (C) the space stem/leader connects to the existing channel and launches an upward-propagating luminosity wave. Weakly luminous filamentary structures, which we interpreted as corona streamers, were observed emanating from the leader tip. The stepped leader branches extended downward with speeds ranging from 4.1 × 10⁵ to 14.6 × 10⁵ m s⁻¹.

  8. High-speed mirror-scanning tracker

    NASA Astrophysics Data System (ADS)

    Tong, HengWei

    1999-06-01

    This paper introduces a high-speed single-mirror scanner developed by us as a versatile tracker. It can be connected to a high-speed camera or to a TV tracker/measurer/recorder (or a color video recorder). It can be guided by a computer, by a joystick (automatically or manually), or by a TV tracker. In this paper, we also present the advantages of our scanner in contrast with the limitations of fixed-camera systems. In addition, several practical applications of the mirror scanner are discussed.

  9. High speed video shooting with continuous-wave laser illumination in laboratory modeling of wind - wave interaction

    NASA Astrophysics Data System (ADS)

    Kandaurov, Alexander; Troitskaya, Yuliya; Caulliez, Guillemette; Sergeev, Daniil; Vdovin, Maxim

    2014-05-01

    Three examples of the use of high-speed video filming in the investigation of wind-wave interaction under laboratory conditions are described. Experiments were carried out at the wind-wave stratified flume of IAP RAS (length 10 m, air channel cross section 0.4 x 0.4 m, wind velocity up to 24 m/s) and at the Large Air-Sea Interaction Facility (LASIF) - MIO/Luminy (length 40 m, air channel cross section 3.2 x 1.6 m, wind velocity up to 10 m/s). A combination of PIV measurements, optical measurements of the water surface form and wave gages was used for a detailed investigation of the characteristics of the wind flow over the water surface. The modified PIV method is based on the use of continuous-wave (CW) laser illumination of the airflow seeded by particles and high-speed video. During the experiments at the wind-wave stratified flume of IAP RAS, a green (532 nm) CW laser with 1.5 W output power was used as the light-sheet source. A high-speed digital camera (Videosprint VS-Fast) was used to record images of the visualized airflow at a frame rate of 2000 Hz. The airflow velocity field was retrieved by processing the PIV images with an adaptive cross-correlation method on a curvilinear grid following the surface wave profile. The mean wind velocity profiles were retrieved using conditional in-phase averaging as in [1]. In the experiments at the LASIF, a more powerful argon laser (4 W, CW) was used, as well as a high-speed camera with higher sensitivity and resolution: Optronics Camrecord CR3000x2, frame rate 3571 Hz, frame size 259×1696 px. In both series of experiments, spherical 0.02 mm polyamide particles with an inertial time of 7 ms were used for seeding the airflow. A new particle seeding system based on air pressure is capable of injecting 2 g of particles per second for 1.3 - 2.4 s without disturbing the flow. Used at the LASIF, this system provided a high particle density in the PIV images. In combination with the high-resolution camera, it allowed us to obtain momentum fluxes directly from
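    The cross-correlation step at the heart of PIV, mentioned above, can be sketched with two interrogation windows: the displacement between exposures is the location of the peak of their cross-correlation, computed here via the FFT correlation theorem. The synthetic particle pattern and the (3, 1) pixel shift are illustrative; the adaptive, curvilinear-grid variant used in the experiments is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
win_a = rng.random((32, 32))                         # first-exposure window
win_b = np.roll(win_a, shift=(3, 1), axis=(0, 1))    # "second exposure", shifted

# Circular cross-correlation via FFT; the peak sits at the displacement.
corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
print(dy, dx)                                        # 3 1
```

    Real PIV processing tiles the image into many such windows and adds sub-pixel peak fitting, window deformation and outlier rejection on top of this kernel.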

  10. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy

    NASA Astrophysics Data System (ADS)

    Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.

    2011-02-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second.
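    The propagation analysis described above can be sketched as follows: detect the contraction onset time at each axial position along the vessel, take the earliest-onset site as the pacemaker, and fit onset time against position to get the propagation velocity. The synthetic onset profile below uses the roughly 10 mm/s velocity reported in the abstract; positions and the pacemaker site are illustrative.

```python
import numpy as np

positions_mm = np.arange(0.0, 2.0, 0.1)      # 20 sites along the vessel
true_velocity = 10.0                          # mm/s, as reported above
pacemaker_mm = 0.7                            # illustrative pacemaker site
onset_s = np.abs(positions_mm - pacemaker_mm) / true_velocity

pacemaker = positions_mm[onset_s.argmin()]    # earliest-onset site
# Fit only the downstream branch, where onset grows linearly with distance.
mask = positions_mm >= pacemaker
slope = np.polyfit(positions_mm[mask], onset_s[mask], 1)[0]   # s per mm
print(round(pacemaker, 1), round(1.0 / slope, 1))             # 0.7 10.0
```

    In the real system the onset times come from thresholding diameter traces extracted from the video, and the semiautomated algorithm lets the user veto misdetections.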

  11. Studying the internal ballistics of a combustion-driven potato cannon using high-speed video

    NASA Astrophysics Data System (ADS)

    Courtney, E. D. S.; Courtney, M. W.

    2013-07-01

    A potato cannon was designed to accommodate several different experimental propellants and to have a transparent barrel so that the movement of the projectile could be recorded on high-speed video (at 2000 frames per second). Five experimental propellants were tested: propane (C3H8), acetylene (C2H2), ethanol (C2H6O), methanol (CH4O) and butane (C4H10). The quantity of each experimental propellant was calculated to approximate a stoichiometric mixture, taking into account the upper and lower flammability limits, which in turn were affected by the volume of the combustion chamber. Cylindrical projectiles were cut from raw potatoes so that there was an airtight fit, and each weighed 50 (± 0.5) g. For each trial, position as a function of time was determined via frame-by-frame analysis. Five trials were made for each experimental propellant and the results analyzed to compute velocity and acceleration as functions of time. Additional quantities, including the force on the potato and the pressure applied to the potato, were also computed. For each experimental propellant, average velocity versus barrel position curves were plotted. The most effective experimental propellant was defined as that which accelerated the potato to the highest muzzle velocity. The experimental propellant acetylene performed the best on average (138.1 m s⁻¹), followed by methanol (48.2 m s⁻¹), butane (34.6 m s⁻¹), ethanol (33.3 m s⁻¹) and propane (27.9 m s⁻¹).
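    The frame-by-frame analysis described above reduces to finite differences of the digitized positions. A sketch with synthetic constant-acceleration positions (the acceleration, bore diameter and sample count are illustrative, not digitized from the paper's video):

```python
import math

fps = 2000.0                       # frames per second, as in the study
dt = 1.0 / fps
a_true = 5000.0                    # m/s^2, illustrative
x = [0.5 * a_true * (i * dt) ** 2 for i in range(6)]   # position per frame, m

# Central differences for velocity and acceleration at interior frames.
v = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, 5)]
a = [(x[i + 1] - 2 * x[i] + x[i - 1]) / dt ** 2 for i in range(1, 5)]

m = 0.050                          # 50 g potato slug
force = m * a[0]                   # N
bore_area = math.pi * 0.02 ** 2    # 40 mm bore assumed, m^2
pressure = force / bore_area       # Pa
print(round(a[0], 3), round(force, 3))
```

    Differentiating measured positions twice amplifies digitization noise, so in practice the position data are usually smoothed or fitted before the second derivative is taken.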

  12. High-speed video cinematographic demonstration of stalk and zooid contraction of Vorticella convallaria.

    PubMed

    Moriyama, Y; Hiyama, S; Asai, H

    1998-01-01

    Stalk contraction and zooid contraction of living Vorticella convallaria were studied by high-speed video cinematography. Contraction was monitored at a speed of 9000 frames per second to study the contractile process in detail. Complete stalk contraction required approximately 9 ms. The maximal contraction velocity, 8.8 cm/s, was observed 2 ms after the start of contraction. We found that a twist appeared in the zooid during contraction. As this twist unwound, the zooid began to rotate like a right-handed screw. The subsequent stalk contraction steps, the behavior of which was similar to that of a damped harmonic oscillator, were analyzed by means of the equation of motion. From the beginning of stalk contraction, the Hookean force constant increased, and reached an upper limit of 2.23 × 10⁻⁴ N/m, 2-3 ms after the start of contraction. Thus, within 2 ms, the contraction signal spread to the entire stalk, allowing the stalk to generate the full force of contraction. The tension of an extended stalk was estimated to be 5.58 × 10⁻⁸ N from the Hookean force constant of a stalk. This value coincides with that of the isometric tension of a glycerol-treated V. convallaria, confirming that the contractile system of V. convallaria is well preserved despite glycerol treatment.
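    The damped-harmonic-oscillator analysis above can be sketched numerically: for a mass-spring-damper obeying x'' + 2*gamma*x' + (k/m)*x = 0, the decay rate gamma and damped angular frequency omega_d extracted from the video determine the Hookean force constant via k = m*(omega_d**2 + gamma**2), and the stalk tension follows as k times the extension. The effective mass, decay rate and extension below are illustrative; only the force constant is taken from the abstract.

```python
import math

m = 1.0e-10                     # kg, illustrative effective mass
k = 2.23e-4                     # N/m, upper-limit force constant (abstract)
gamma = 200.0                   # 1/s, illustrative decay rate

omega_d = math.sqrt(k / m - gamma ** 2)        # damped angular frequency
k_recovered = m * (omega_d ** 2 + gamma ** 2)  # the fit recovers k
print(abs(k_recovered - k) < 1e-15)            # True

x = 2.5e-4                      # m, illustrative stalk extension
print(k * x)                    # about 5.6e-8 N, the order quoted above
```

    The measured tension in the abstract was obtained the same way, from the fitted force constant and the observed extension of the stalk.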

  13. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy

    PubMed Central

    Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.

    2011-01-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second. PMID:21361700

  14. The Mechanical Properties of Early Drosophila Embryos Measured by High-Speed Video Microrheology

    PubMed Central

    Wessel, Alok D.; Gumalla, Maheshwar; Grosshans, Jörg; Schmidt, Christoph F.

    2015-01-01

    In early development, Drosophila melanogaster embryos form a syncytium, i.e., multiplying nuclei are not yet separated by cell membranes, but are interconnected by cytoskeletal polymer networks consisting of actin and microtubules. Between division cycles 9 and 13, nuclei and cytoskeleton form a two-dimensional cortical layer. To probe the mechanical properties and dynamics of this self-organizing pre-tissue, we measured shear moduli in the embryo by high-speed video microrheology. We recorded position fluctuations of injected micron-sized fluorescent beads with kHz sampling frequencies and characterized the viscoelasticity of the embryo in different locations. Thermal fluctuations dominated over nonequilibrium activity for frequencies between 0.3 and 1000 Hz. Between the nuclear layer and the yolk, the cytoplasm was homogeneous and viscously dominated, with a viscosity three orders of magnitude higher than that of water. Within the nuclear layer we found an increase of the elastic and viscous moduli consistent with an increased microtubule density. Drug-interference experiments showed that microtubules contribute to the measured viscoelasticity inside the embryo whereas actin only plays a minor role in the regions outside of the actin caps that are closely associated with the nuclei. Measurements at different stages of the nuclear division cycle showed little variation. PMID:25902430
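    In the viscously dominated regions described above, passive microrheology reduces to tracking a bead's thermal position fluctuations, computing the mean-squared displacement (MSD), and inverting the Stokes-Einstein relation for the viscosity. A sketch on simulated 2D Brownian motion (all parameters illustrative, with the viscosity set near the thousand-fold-water value reported for the yolk region):

```python
import numpy as np

kT = 4.11e-21            # J, thermal energy at room temperature
radius = 0.5e-6          # m, micron-sized bead
eta_true = 1.0           # Pa*s, about 1000x water
D = kT / (6 * np.pi * eta_true * radius)     # Stokes-Einstein diffusion coeff.

rng = np.random.default_rng(1)
dt = 1e-3                                    # 1 kHz sampling
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(200000, 2))
pos = np.cumsum(steps, axis=0)               # simulated bead trajectory

msd1 = np.mean(np.sum((pos[1:] - pos[:-1]) ** 2, axis=1))   # MSD at lag dt
eta_est = 4 * kT * dt / (6 * np.pi * radius * msd1)         # 2D: MSD = 4*D*tau
print(round(eta_est, 2))                     # close to 1.0
```

    The full analysis in the paper works in the frequency domain to obtain both elastic and viscous shear moduli, which matters inside the nuclear layer where the response is not purely viscous.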

  15. Dynamic Impact Deformation Analysis Using High-speed Cameras and ARAMIS Photogrammetry Software

    DTIC Science & Technology

    2010-06-01

    photogrammetry for high-speed impact deformation. ARAMIS is a non-contact measurement system that calculates the strain history of a deformation event...stereo photography and photogrammetry, one can obtain a detailed history of a target that undergoes a fast rate of deformation. This report is a...desired (highlighted) panel, and then selecting the Edit option. The values in the “Name,” “Calibration scale,” “Cert. Temp.,” and “Exp. Coff

  16. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and colour filter array (CFA) demosaicing. The proposed hardware architecture includes the design of the charge coupled device (CCD) drive circuits, the image processing circuits, and the power supply circuits. The CCD drive circuits convert the TTL (transistor-transistor logic) level timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor can output analog image signals. The image processing circuits convert the analog signals to digital signals for subsequent processing; TTL timing generation, auto-exposure control, CFA demosaicing, and gamma correction are accomplished in this module. The power supply circuits provide power for the whole system, which is very important for image quality: power supply noise affects image quality directly, and we reduce it effectively in hardware. In this system, the CCD is a KAI-0340, which can output 210 full-resolution frames per second, and our camera works well in this mode. Traditional auto-exposure control algorithms reach a proper exposure level too slowly, so it is necessary to develop a fast auto-exposure control method, and we present a new auto-exposure algorithm suited to a high-speed camera. Color demosaicing is critical for digital cameras, because it converts the Bayer sensor mosaic output to a full color image, which determines the output image quality of the camera. Complex algorithms can achieve high quality but cannot be implemented in hardware. A low-complexity demosaicing method is presented which can be implemented in hardware while satisfying the quality requirements. Experimental results are given at the end of this paper.
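    One common way to make auto-exposure converge in very few frames, in the spirit of the fast control the abstract calls for, is a multiplicative update: scale the exposure by the ratio of target to measured mean brightness instead of stepping it by a fixed increment. The scene model and gains below are illustrative and are not the paper's algorithm.

```python
def measure_brightness(exposure, scene_luminance=0.004):
    """Toy sensor model: mean pixel value proportional to exposure, clipped."""
    return min(1.0, scene_luminance * exposure)

target = 0.5
exposure = 10.0                       # initial exposure, arbitrary units
for frame in range(5):
    mean = measure_brightness(exposure)
    exposure *= target / mean         # multiplicative update
print(round(measure_brightness(exposure), 3))   # 0.5
```

    With an unclipped linear sensor this converges in one step; clipping and sensor nonlinearity are why real controllers damp the update and still need a few frames.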

  17. High-speed camera observation of multi-component droplet coagulation in an ultrasonic standing wave field

    NASA Astrophysics Data System (ADS)

    Reißenweber, Marina; Krempel, Sandro; Lindner, Gerhard

    2013-12-01

    With an acoustic levitator, small particles can be aggregated near the nodes of a standing pressure field. Furthermore, it is possible to atomize liquids on a vibrating surface. We used a combination of both mechanisms and atomized several liquids simultaneously, consecutively, and as emulsions in the ultrasonic field. Using a high-speed camera, we observed the coagulation of the spray droplets into single large levitated droplets, resolved in space and time. In the case of consecutive atomization of two components, the spray droplets of the second component were deposited on the surface of the previously coagulated droplet of the first component without mixing.

  18. CID25: radiation hardened color video camera

    NASA Astrophysics Data System (ADS)

    Baiko, D. A.; Bhaskaran, S. K.; Czebiniak, S. W.

    2006-02-01

    The charge injection device CID25 is presented. The CID25 is a color video imager compliant with the NTSC interlaced TV standard. It has 484 by 710 displayable pixels and is capable of producing 30 frames-per-second color video. The CID25 is equipped with preamplifier-per-pixel technology combined with parallel row processing to achieve high conversion gain and low noise bandwidth. The on-chip correlated double sampling circuitry serves to reduce the low-frequency noise components. The CID25 is operated by a camera system consisting of two parts, the head assembly and the camera control unit (CCU). The head assembly and the CCU can be separated by a cable up to 150 m long. The CID25 imager and the head portion of the camera are radiation hardened. They can produce color video with insignificant SNR degradation out to a total dose of at least 2.85 Mrad of Co-60 γ-radiation. This represents the industry's first radiation-hardened color video system based on a semiconductor photodetector with adequate sensitivity for room-light operation.

  19. Single-camera high-speed stereo-digital image correlation for full-field vibration measurement

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-09-01

    A low-cost, easy-to-implement single-camera high-speed stereo-digital image correlation (SCHS stereo-DIC) method using a four-mirror adapter is proposed for full-field 3D vibration measurement. With the aid of the four-mirror adapter, surface images of the calibration target and test objects can be imaged separately onto the two halves of the camera sensor through two different optical paths. These images are then processed to retrieve the vibration responses on the specimen surface. To validate the effectiveness and accuracy of the proposed approach, dynamic parameters including natural frequencies, damping ratios, and mode shapes of a rectangular cantilever plate were extracted from the vibration responses measured directly with the established system. The results show that SCHS stereo-DIC is a simple, practical, and effective technique for vibration measurement and dynamic parameter identification.
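
    Damping ratios such as those extracted from the cantilever plate's response can be estimated from the logarithmic decrement of successive displacement peaks in a free-decay record. A sketch of that standard identification step (not the paper's actual algorithm; the peak amplitudes below are hypothetical):

    ```python
    import math

    def damping_ratio(peak_a: float, peak_b: float, cycles_apart: int = 1) -> float:
        """Damping ratio from the logarithmic decrement of two displacement
        peaks of a free-decay response (exact single-DOF viscous form)."""
        delta = math.log(peak_a / peak_b) / cycles_apart
        return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

    # Successive peaks of 1.0 and 0.5 imply roughly 11% of critical damping.
    print(round(damping_ratio(1.0, 0.5), 4))
    ```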

  20. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination.

    PubMed

    Bodini, I; Sansoni, G; Lancini, M; Pasinetti, S; Docchio, F

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important for producing wheels and rails of new materials with improved lifetime and performance, able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which continuously monitors the dynamics of the specimens used to test wheel and rail materials in a laboratory test bench. The 3D macro-topography and the angular position of the specimen are measured simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, using a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined; the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very short camera exposure time makes it possible to obtain blur-free images with excellent definition. The system is described with the aid of end-cycle specimens as well as in-test specimens.


  2. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen, with digital-to-analog converters providing digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube's phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time; during this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electrons' path in the horizontal and vertical (x-axis and y-axis, respectively) directions. Hence, each frame's electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates. To prevent the phosphor from being exposed between frames, the image tube
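
    The FPGA's frame-to-voltage mapping described above can be sketched as follows, assuming five evenly spaced positions per axis and a hypothetical 200 V step within the stated ±400 V clamp (the actual step size is not given in the abstract):

    ```python
    def frame_to_plate_voltages(frame: int, step_v: float = 200.0,
                                clamp_v: float = 400.0) -> tuple[float, float]:
        """Map a frame index (0..24) to (x, y) deflection-plate voltages.

        Five positions per axis, centred on zero, so a 5x5 grid of 25 frames.
        """
        if not 0 <= frame < 25:
            raise ValueError("frame must be in 0..24")
        col, row = frame % 5, frame // 5
        vx = max(-clamp_v, min(clamp_v, (col - 2) * step_v))
        vy = max(-clamp_v, min(clamp_v, (row - 2) * step_v))
        return vx, vy
    ```

    Frame 12 lands at the center position (0 V, 0 V); the corner frames use the full ±400 V swing.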

  3. Two-dimensional thermal analysis for freezing of plant and animal cells by high-speed microscopic IR camera

    NASA Astrophysics Data System (ADS)

    Morikawa, Junko; Hashimoto, Toshimasa; Hayakawa, Eita; Uemura, Hideyuki

    2003-04-01

    Using a high-speed IR camera as a temperature sensor is a powerful new tool for thermal analysis of cell-scale biomaterials. In this study, we propose a new type of two-dimensional thermal analysis by means of a high-speed IR camera with a microscopic lens and apply it to the freezing of plant and animal cells. The latent heat released on freezing of supercooled onion epidermal cells was observed at random, one cell at a time, under a cooling rate of 80 °C/min with a spatial resolution of 7.5 μm. The freezing front of ice formation and the thermal diffusion of the generated latent heat were analyzed. As a result, it was possible to determine simultaneously the intercellular/intracellular temperature distribution, the growth speed of the freezing front in a single cell, and the thermal diffusion during the freezing of living tissue. The new measuring system presented here will be significant for the study of transient processes in biomaterials. A newly developed temperature-wave method for measuring in-plane thermal diffusivity was also applied to the cell systems.

  4. A study on ice crystal formation behavior at intracellular freezing of plant cells using a high-speed camera.

    PubMed

    Ninagawa, Takako; Eguchi, Akemi; Kawamura, Yukio; Konishi, Tadashi; Narumi, Akira

    2016-08-01

    Intracellular ice crystal formation (IIF) causes several problems to cryopreservation, and it is the key to developing improved cryopreservation techniques that can ensure the long-term preservation of living tissues. Therefore, the ability to capture clear intracellular freezing images is important for understanding both the occurrence and the IIF behavior. The authors developed a new cryomicroscopic system that was equipped with a high-speed camera for this study and successfully used this to capture clearer images of the IIF process in the epidermal tissues of strawberry geranium (Saxifraga stolonifera Curtis) leaves. This system was then used to examine patterns in the location and formation of intracellular ice crystals and to evaluate the degree of cell deformation because of ice crystals inside the cell and the growing rate and grain size of intracellular ice crystals at various cooling rates. The results showed that an increase in cooling rate influenced the formation pattern of intracellular ice crystals but had less of an effect on their location. Moreover, it reduced the degree of supercooling at the onset of intracellular freezing and the degree of cell deformation; the characteristic grain size of intracellular ice crystals was also reduced, but the growing rate of intracellular ice crystals was increased. Thus, the high-speed camera images could expose these changes in IIF behaviors with an increase in the cooling rate, and these are believed to have been caused by an increase in the degree of supercooling.

  5. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, a fact often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. In addition, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
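
    The pre-computed pixel mapping between corrected and distorted images can be illustrated with a one-parameter radial distortion model; the coefficient and image geometry below are hypothetical, and the paper's actual camera model is not specified in the abstract:

    ```python
    def build_undistort_lut(width: int, height: int, k1: float,
                            cx: float, cy: float) -> dict:
        """For every corrected pixel (u, v), pre-compute the distorted source
        coordinate under the radial model x_d = x_u * (1 + k1 * r^2).

        Real calibrations usually add higher-order radial and tangential terms.
        """
        lut = {}
        for v in range(height):
            for u in range(width):
                xu, yu = u - cx, v - cy          # centre the coordinates
                r2 = xu * xu + yu * yu            # squared radius
                scale = 1.0 + k1 * r2
                lut[(u, v)] = (xu * scale + cx, yu * scale + cy)
        return lut
    ```

    At run time, each corrected pixel is filled by sampling the stored distorted coordinate, so no per-frame distortion math is needed.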

  6. Calibrating Video Cameras For Meteor Works

    NASA Astrophysics Data System (ADS)

    Khaleghy-Rad, Mona; Campbell-Brown, M.

    2006-09-01

    The calculation of the intensity of light produced by a meteor ablating in the atmosphere is crucial to the determination of meteoroid masses and to uncovering the meteoroid's physical structure through ablation modeling. A necessary step in this determination is to use cameras that have been end-to-end calibrated to determine their precise spectral response. We report here a new procedure for calibrating low-light video cameras used for meteor observing, which will be used in conjunction with average meteor spectra to determine absolute light intensities.

  7. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.
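
    One standard way to compensate the pixel-response non-uniformity discussed above is a dark-frame and flat-field correction. A minimal per-pixel sketch (a generic illustration, not a procedure taken from this paper), here written over flat pixel lists for simplicity:

    ```python
    def flat_field_correct(raw: list, dark: list, flat: list) -> list:
        """Correct pixel values as (raw - dark) / normalised flat response.

        `dark` is an image taken with no light (offset per pixel); `flat` is an
        image of a uniform source (gain per pixel). Dividing by the normalised
        flat removes pixel-to-pixel gain variation.
        """
        flat_mean = sum(f - d for f, d in zip(flat, dark)) / len(flat)
        return [(r - d) / ((f - d) / flat_mean)
                for r, d, f in zip(raw, dark, flat)]
    ```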

  8. Multi-Camera, High-Speed Imaging System for Kinematics Data Collection

    DTIC Science & Technology

    2007-09-21

    Calibration can be done in a number of ways, but in the case of the fin test an easy method was devised by suspending a structure of LEGO bricks from a platform into the field of interest (Fig. 3). An image of the LEGO structure is taken from each of the two cameras and stored on the computer. Each point of

  9. Clinically evaluated procedure for the reconstruction of vocal fold vibrations from endoscopic digital high-speed videos.

    PubMed

    Lohscheller, Jörg; Toy, Hikmet; Rosanowski, Frank; Eysholdt, Ulrich; Döllinger, Michael

    2007-08-01

    Investigation of voice disorders requires the examination of vocal fold vibrations. The state of the art is the recording of endoscopic high-speed movies, which capture vocal fold vibrations in real time and enable investigation of the interrelation between disturbances of vocal fold vibrations and voice disorders. However, the lack of clinical studies and of a standardized procedure to reconstruct vocal fold vibrations from high-speed videos constrains the clinical acceptance of the high-speed technique. An image processing approach is presented that extracts the vibrating vocal fold edges from digital high-speed movies. The initial segmentation is based principally on a seeded region-growing algorithm. Even in movies with low image quality, the algorithm successfully segments the glottal area by means of an introduced two-dimensional threshold matrix. Following segmentation, the vocal fold edges are reconstructed from the computed time-varying glottal area. The performance of the procedure was objectively evaluated in a study comprising 372 high-speed recordings. The accuracy of vocal fold reconstruction exceeds manual segmentation results obtained by clinical experts, and the algorithm reaches an information flow rate of up to 98 images per second. The robustness and high accuracy of the procedure make it suitable for application in clinical routine, enabling an objective and highly accurate description of vocal fold vibrations, which is essential for extensive clinical studies focusing on the classification of voice disorders.
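
    The seeded region-growing idea at the core of the segmentation can be sketched as follows; this toy version uses a single scalar intensity threshold rather than the paper's two-dimensional threshold matrix:

    ```python
    def region_grow(image: list, seed: tuple, threshold: float) -> set:
        """Grow a region from a seed pixel, adding 4-connected neighbours whose
        intensity differs from the seed value by less than `threshold`."""
        h, w = len(image), len(image[0])
        seed_val = image[seed[0]][seed[1]]
        region, stack = set(), [seed]
        while stack:
            r, c = stack.pop()
            if (r, c) in region or not (0 <= r < h and 0 <= c < w):
                continue  # already accepted, or outside the image
            if abs(image[r][c] - seed_val) < threshold:
                region.add((r, c))
                stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return region
    ```

    Seeded inside the dark glottal gap, such a procedure collects the connected low-intensity area frame by frame, yielding the time-varying glottal area from which the fold edges are read off.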

  10. Machine Vision Techniques For High Speed Videography

    NASA Astrophysics Data System (ADS)

    Hunter, David B.

    1984-11-01

    The priority associated with U.S. efforts to increase productivity has led to, among other things, the development of Machine Vision systems for use in manufacturing automation requirements. Many such systems combine solid state television cameras and data processing equipment to facilitate high speed, on-line inspection and real time dimensional measurement of parts and assemblies. These parts are often randomly oriented and spaced on a conveyor belt under continuous motion. Television imagery of high speed events has historically been achieved by use of pulsed (strobe) illumination or high speed shutter techniques synchronized with a camera's vertical blanking to separate write and read cycle operation. Lack of synchronization between part position and camera scanning in most on-line applications precludes use of this vertical interval illumination technique. Alternatively, many Machine Vision cameras incorporate special techniques for asynchronous, stop-motion imaging. Such cameras are capable of imaging parts asynchronously at rates approaching 60 hertz while remaining compatible with standard video recording units. Techniques for asynchronous, stop-motion imaging have not been incorporated in cameras used for High Speed Videography. Imaging of these events has alternatively been obtained through the utilization of special, high frame rate cameras to minimize motion during the frame interval. High frame rate cameras must undoubtedly be utilized for recording of high speed events occurring at high repetition rates. However, such cameras require very specialized, and often expensive, video recording equipment. It seems, therefore, that Machine Vision cameras with capability for asynchronous, stop-motion imaging represent a viable approach for cost effective video recording of high speed events occurring at repetition rates up to 60 hertz.

  11. Terminal Performance of Lead Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video

    DTIC Science & Technology

    2016-04-04

    Terminal Performance of Lead-Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video. ELIJAH COURTNEY, AMY COURTNEY, LUBOV ANDRUSIV, AND MICHAEL COURTNEY (Michael_Courtney@alum.mit.edu). Abstract: Due to concerns about the environmental and industrial hazards of lead, a number of military, law enforcement, and wildlife management agencies are giving careful consideration to lead-free ammunition. The goal of

  12. The effectiveness of detection of splashed particles using a system of three integrated high-speed cameras

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Beczek, Michał; Mazur, Rafał; Sochan, Agata; Bieganowski, Andrzej

    2017-04-01

    The phenomenon of splash, one of the factors causing erosion of the soil surface, is the subject of research by various scientific teams. One of the most efficient methods for observing and analyzing this phenomenon is the use of high-speed cameras that record particles at 2000 frames per second or more. Analysis of splash with high-speed cameras and specialized software can reveal, among other things, the number of ejected particles, their speeds, their trajectories, and the distances over which they are transported. The paper presents an attempt to evaluate the efficiency of detection of splashed particles using a set of three cameras (Vision Research MIRO 310) and the Dantec Dynamics Studio software with its 3D module (Volumetric PTV). In order to assess the effectiveness of estimating the number of particles, the experiment was performed on glass beads with a diameter of 0.5 mm (corresponding to the sand fraction). Water droplets with a diameter of 4.2 mm fell onto the sample from a height of 1.5 m. Two types of splashed particles were observed: particles with a low range (up to 18 mm) splashed at larger angles, and particles with a high range (up to 118 mm) splashed at smaller angles. The detection efficiency for the number of splashed particles estimated by the software was 45-65% for particles with a large range. The effectiveness of particle detection by the software was calculated by comparison with the number of beads that fell on an adhesive surface around the sample. This work was partly financed by the National Science Centre, Poland; project no. 2014/14/E/ST10/00851.

  13. The High-Speed and Wide-Field TORTORA Camera: description & results .

    NASA Astrophysics Data System (ADS)

    Greco, G.; Beskin, G.; Karpov, S.; Guarnieri, A.; Bartolini, C.; Bondar, S.; Piccioni, A.; Molinari, E.

    We present a description and the most significant results of the wide-field, ultra-fast TORTORA camera, devoted to the investigation of rapid changes in light intensity in phenomena occurring within extremely short periods of time and randomly distributed over the sky. In particular, ground-based TORTORA observations synchronized with the gamma-ray BAT telescope on board the Swift satellite have made it possible to trace the optical burst time structure of the Naked-Eye GRB 080319B with an unprecedented level of accuracy.

  14. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity than CCD/CMOS rangefinders, inherently better time resolution, higher accuracy, and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which would otherwise hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40° × 20° field-of-view. The whole system is very rugged and compact, a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
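
    In iTOF ranging, distance follows from the phase shift of the modulated illumination. A sketch of that standard conversion, using the 25 MHz maximum modulation frequency mentioned above:

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def itof_distance_m(phase_rad: float, f_mod_hz: float) -> float:
        """Indirect time-of-flight: d = c * phi / (4 * pi * f_mod).

        The factor 4*pi (rather than 2*pi) accounts for the round trip."""
        return C * phase_rad / (4.0 * math.pi * f_mod_hz)

    def unambiguous_range_m(f_mod_hz: float) -> float:
        """Distances wrap every half modulation wavelength: c / (2 * f_mod)."""
        return C / (2.0 * f_mod_hz)
    ```

    At 25 MHz, a measured phase of pi corresponds to about 3 m, and the unambiguous range is about 6 m; lower modulation frequencies trade range resolution for a longer unambiguous span.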

  15. Light sources and cameras for standard in vitro membrane potential and high-speed ion imaging.

    PubMed

    Davies, R; Graham, J; Canepari, M

    2013-07-01

    Membrane potential and fast ion imaging are now standard optical techniques routinely used to record dynamic physiological signals in several preparations in vitro. Although detailed resolution of optical signals can be improved by confocal or two-photon microscopy, high spatial and temporal resolution can be obtained using conventional microscopy and affordable light sources and cameras. Thus, standard wide-field imaging methods are still the most common in research laboratories and can often produce measurements with a signal-to-noise ratio that is superior to other optical approaches. This paper seeks to review the most important instrumentation used in these experiments, with particular reference to recent technological advances. We analyse in detail the optical constraints dictating the type of signals that are obtained with voltage and ion imaging and we discuss how to use this information to choose the optimal apparatus. Then, we discuss the available light sources with specific attention to light emitting diodes and solid state lasers. We then address the current state-of-the-art of available charge coupled device, electron multiplying charge coupled device and complementary metal oxide semiconductor cameras and we analyse the characteristics that need to be taken into account for the choice of optimal detector. Finally, we conclude by discussing prospective future developments that are likely to further improve the quality of the signals expanding the capability of the techniques and opening the gate to novel applications.

  16. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure the brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia; the purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors, and the present method is a refined version of the calibration method developed to solve that problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
    To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to
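
    The core of any such photometric calibration is tying instrument counts to catalog magnitudes through a zero point. A simplified sketch of that step (it deliberately ignores the nonlinearity and saturation handling that are the actual contribution of this method; the star values are hypothetical):

    ```python
    import math

    def zero_point(counts: list, catalog_mags: list) -> float:
        """Estimate the photometric zero point ZP from reference stars, using
        m = -2.5 * log10(counts) + ZP and averaging over the stars."""
        zps = [m + 2.5 * math.log10(c) for c, m in zip(counts, catalog_mags)]
        return sum(zps) / len(zps)

    def calibrated_mag(counts: float, zp: float) -> float:
        """Apply the zero point to an unsaturated measurement."""
        return -2.5 * math.log10(counts) + zp
    ```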

  17. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, involving an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 0 to 255 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation, increasing the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes lie below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
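
    The "simple model" for predicting non-physical modes is frequency folding: a vibration component above the Nyquist limit of the frame rate appears at an aliased frequency. A sketch of that prediction (the folding formula below is the standard one, not necessarily the authors' exact expression):

    ```python
    def aliased_frequency(true_hz: float, frame_rate_hz: float) -> float:
        """Apparent frequency of a tone sampled at `frame_rate_hz`: the signal
        folds to |f - k * fs| for the nearest integer multiple k of fs."""
        k = round(true_hz / frame_rate_hz)
        return abs(true_hz - k * frame_rate_hz)
    ```

    For a camera running at 60 fps, a 70 Hz vibration shows up as a spurious 10 Hz mode; frequencies already below the Nyquist limit are returned unchanged, so candidate peaks can be checked and excluded.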


  19. High speed wide field CMOS camera for Transneptunian Automatic Occultation Survey

    NASA Astrophysics Data System (ADS)

    Wang, Shiang-Yu; Geary, John C.; Amato, Stephen M.; Hu, Yen-Sang; Ling, Hung-Hsu; Huang, Pin-Jie; Furesz, Gabor; Chen, Hsin-Yo; Chang, Yin-Chang; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy

    2014-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a three-robotic-telescope project to detect the stellar occultation events generated by Trans-Neptunian Objects (TNOs). The TAOS II project aims to monitor about 10,000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view (FoV) of the 1.3 m telescope with a mosaic of ten 4.5k × 2k CMOS sensors. The new CMOS sensor has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. The sensor provides two parallel and eight serial decoders, so regions of interest can be addressed and read out separately through different output channels efficiently. The pixel scale is about 0.6"/pix with the 16 μm pixels. The sensors, mounted on a single Invar plate, are cooled to an operating temperature of about 200 K by a cryogenic cooler. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The deformation of the cold plate is less than 10 μm, ensuring the sensor surface always stays within ±40 μm of the focus range. The control electronics consist of an analog part and a Xilinx FPGA-based digital circuit. For each field star, an 8 × 8 pixel box will be read out. The pixel rate for each channel is about 1 Mpix/s, and the total pixel rate for each camera is about 80 Mpix/s. The FPGA module calculates the total flux and the centroid coordinates for every field star in each exposure.
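
    The per-star computation performed in the FPGA, total flux and centroid over a small readout box, can be sketched in software as follows (an illustrative intensity-weighted centroid, not the FPGA's actual fixed-point implementation):

    ```python
    def roi_flux_centroid(roi: list) -> tuple:
        """Total flux and intensity-weighted centroid (cx, cy) of a small
        pixel box, such as the 8x8 region read out around each field star."""
        total = sum(sum(row) for row in roi)
        cy = sum(r * sum(row) for r, row in enumerate(roi)) / total
        cx = sum(c * val for row in roi for c, val in enumerate(row)) / total
        return total, cx, cy
    ```

    Tracking the centroid at 20 Hz lets the survey distinguish a genuine occultation dip (flux drops, centroid steady) from guiding or seeing excursions.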

  20. Development Of Model P-302 Beryllium Rotating Mirror Component In High Speed Streak Camera

    NASA Astrophysics Data System (ADS)

    Sang, Yongsheng

    1989-06-01

    This paper describes the development and testing of the Model P-302 Beryllium Rotating Mirror Component used in the Model WPG-30 and Model SJZ-15 streak cameras. The mirror body is made of Hot Isostatic Pressing (HIP) beryllium. The mirror reflective surface is made by the replica-film method and consists of a 0.1-wavelength-flatness beryllium substrate with an aluminum overcoating; its reflectance is 83%. The cavity of the rotating mirror is not evacuated. The rotor is carefully adjusted by the dynamic balance method. The turbine is driven by compressed air, and the maximum rotating speed in testing was 5,833 rps. The size of the mirror body is 22.5 mm × 25 mm × 8 mm (rotating diameter 22.5 mm). Under dynamic performance testing, its writing rate is 15 km/s, the time resolution is 1.4 ns (0.01 mm slit width), the dynamic resolution in the scanning direction is 28 line pairs/mm, and the effective aperture at the film is 1/10.6. Results from detonation experiments indicated that at a rotating speed of 5,000 rps the image density was suitable for measurement when using 36 DIN film. The results also showed that the precision of measurement has been greatly improved compared with the steel rotating mirror used before.

  1. Low-cost and high-speed optical mark reader based on an intelligent line camera

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin

    2003-08-01

    Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient, provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of features built into the OMR system, and that data integrity is checked and the data validated before processing. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing was carried out before the platform and the multiple-choice answer sheet were designed. Position recognition and position verification methods have been developed and implemented in an intelligent line-scan camera. The position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a microcontroller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper, the proposed OMR system is compared with commercially available systems on the market.

  2. Characterization of calculus migration during Ho:YAG laser lithotripsy by high speed camera using suspended pendulum method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian James; Rajabhandharaks, Danop; Xuan, Jason Rongwei; Chia, Ray W. J.; Hasenberg, Tom

    2014-03-01

    Calculus migration is a common problem during ureteroscopic laser lithotripsy procedures to treat urolithiasis. Conventional experimental methods to characterize calculus migration utilize a hosting container (e.g., a "V" groove or a test tube). These methods, however, demonstrate large variation and poor detectability, possibly attributable to friction between the calculus and the container on which the calculus rests. In this study, calculus migration was investigated using a pendulum model suspended under water to eliminate the aforementioned friction. A high-speed camera was used to study the movement of the calculus, covering zeroth-order (displacement), first-order (speed), and second-order (acceleration) quantities. A commercial pulsed Ho:YAG laser at 2.1 μm, a 365 μm core fiber, and a calculus phantom (plaster of Paris, 10×10×10 mm cube) were utilized to mimic the laser lithotripsy procedure. The phantom was hung on a stainless steel bar and irradiated by the laser at 0.5, 1.0, and 1.5 J energy per pulse at 10 Hz for 1 second (i.e., 5, 10, and 15 W). Movement of the phantom was recorded by a high-speed camera at a frame rate of 10,000 FPS. Maximum displacement was 1.25±0.10, 3.01±0.52, and 4.37±0.58 mm for 0.5, 1, and 1.5 J energy per pulse, respectively. Using the same laser power, the conventional method showed <0.5 mm total displacement. When the phantom size was reduced to 5×5×5 mm (1/8 the volume), the displacement was very inconsistent. The results suggest that using the pendulum model to eliminate friction improved the sensitivity and repeatability of the experiment. A detailed investigation of calculus movement and other causes of experimental variation will be conducted in a future study.
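
    From per-frame positions recorded at 10,000 FPS, the zeroth-, first-, and second-order quantities above follow from finite differences. A minimal sketch, assuming position has already been extracted from each frame (the sample data are invented for illustration):

```python
import numpy as np

def kinematics(positions_mm, fps=10_000):
    """Velocity (mm/s) and acceleration (mm/s^2) from per-frame
    positions, using central finite differences."""
    dt = 1.0 / fps
    x = np.asarray(positions_mm, dtype=float)
    v = np.gradient(x, dt)   # first-order: speed
    a = np.gradient(v, dt)   # second-order: acceleration
    return v, a

# Uniform motion: 0.001 mm per frame at 10 kfps gives 10 mm/s, zero acceleration.
v, a = kinematics([0.000, 0.001, 0.002, 0.003])
```

Double differentiation amplifies pixel-level noise, so in practice the position trace would usually be smoothed before computing acceleration.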

  3. Eyelid contour detection and tracking for startle research related eye-blink measurements from high-speed video records.

    PubMed

    Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut

    2013-10-01

    Using the positions of the eyelids is an effective and contact-free way to measure startle-induced eye-blinks, which play an important role in human psychophysiological research. To the best of our knowledge, no methods exist for efficient detection and tracking of exact eyelid contours in image sequences captured at high speed that are conveniently usable by psychophysiological researchers. In this publication, a semi-automatic, model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. As a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images recorded without any special parametrisation of the eye tracker. The method entails pupil detection and specular reflection removal, and makes use of dynamic model adaption. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eye-blinks from high-speed video recordings.

  4. Optical engineering application of modeled photosynthetically active radiation (PAR) for high-speed digital camera dynamic range optimization

    NASA Astrophysics Data System (ADS)

    Alves, James; Gueymard, Christian A.

    2009-08-01

    As efforts to create accurate yet computationally efficient estimation models for clear-sky photosynthetically active solar radiation (PAR) have succeeded, the range of practical engineering applications where these models can be successfully applied has increased. This paper describes a novel application of the REST2 radiative model (developed by the second author) in optical engineering. The PAR predictions in this application are used to predict the possible range of instantaneous irradiances that could impinge on the image plane of a stationary video camera designed to image license plates on moving vehicles. The overall spectral response of the camera (including lens and optical filters) is similar to the 400-700 nm PAR range, thereby making PAR irradiance (rather than luminance) predictions most suitable for this application. The accuracy of the REST2 irradiance predictions for horizontal surfaces, coupled with another radiative model to obtain irradiances on vertical surfaces and with standard optical image formation models, enables setting the dynamic range controls of the camera to ensure that the license plate images are legible (unsaturated, with adequate contrast) regardless of the time of day, sky condition, or vehicle speed. A brief description of how these radiative models are utilized as part of the camera control algorithm is provided. Several comparisons of the irradiance predictions derived from the radiative model versus actual PAR measurements under varying sky conditions with three Licor sensors (one horizontal and two vertical) have been made and showed good agreement. Various camera-to-plate geometries and compass headings have been considered in these comparisons. Time-lapse sequences of license plate images taken with the camera under various sky conditions over a 30-day period are also analyzed. They demonstrate the success of the approach at creating legible plate images under highly variable lighting, which is the main goal of this work.

  5. Determining the TNT equivalence of gram-sized explosive charges using shock-wave shadowgraphy and high-speed video recording

    NASA Astrophysics Data System (ADS)

    Hargather, Michael

    2005-11-01

    Explosive materials are routinely characterized by their TNT equivalence. This can be determined by chemical composition calculations, measurements of shock wave overpressure, or measurements of the shock wave position vs. time. However, TNT equivalence is an imperfect criterion because it is only valid at a given radius from the explosion center (H. Kleine et al., Shock Waves 13(2):123-138, 2003). Here we use a large retroreflective shadowgraph system and a high-speed digital video camera to image the shock wave and record its location vs. time. Optical data obtained from different explosions can be combined to determine a characteristic shock wave x-t diagram, from which the overpressure and the TNT equivalent are determined at any radius. This method is applied to gram-sized triacetone triperoxide (TATP) charges. Such small charges can be used inexpensively and safely for explosives research.
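
    Once the x-t diagram yields the local shock speed, the overpressure follows from the normal-shock (Rankine-Hugoniot) relation for ideal air, Δp/p0 = 2γ(M²−1)/(γ+1). A sketch with an assumed ambient sound speed (the paper's actual fitting procedure and constants are not given in the abstract):

```python
def overpressure_ratio(shock_speed, a0=340.0, gamma=1.4):
    """Peak overpressure Δp/p0 behind a normal shock moving at
    shock_speed (m/s) into still air with ambient sound speed a0 (m/s)."""
    M = shock_speed / a0
    return 2.0 * gamma * (M ** 2 - 1.0) / (gamma + 1.0)

# A Mach 2 shock (680 m/s here) gives Δp/p0 = 7*(4-1)/6 = 3.5.
ratio = overpressure_ratio(680.0)
```

Evaluating this at a fixed reference radius on each charge's fitted x-t curve is what allows the TNT-equivalent comparison described above.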

  6. Video inpainting under constrained camera motion.

    PubMed

    Patwardhan, Kedar A; Sapiro, Guillermo; Bertalmío, Marcelo

    2007-02-01

    A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground, and it may occlude one object and be occluded by another. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy where possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement, is fast, does not require statistical models of the background or foreground, and works well in the presence of rich and cluttered backgrounds; the results show no visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown supporting these findings.

  7. Digital high-speed camera system for combustion research using UV-laser diagnostic under microgravity at Bremen drop tower

    NASA Astrophysics Data System (ADS)

    Renken, Hartmut; Bolik, T.; Eigenbrod, Ch.; Koenig, Jens; Rath, Hans J.

    1997-04-01

    A digital high-speed camera and recording system for 2D UV laser spectroscopy was recently completed at the Bremen drop tower. At present, the primary users are microgravity combustion researchers. The current project studies the reaction zones during the process of combustion. In particular, OH radicals are detected in 2D using the method of laser-induced predissociation fluorescence (LIPF). A pulsed high-energy excimer laser system combined with a two-stage intensified CCD camera allows a repetition rate of 250 images per second, matching the maximum laser pulse repetition rate. The laser system is integrated at the top of the 110 m high evacuatable drop tube. Motorized mirrors are necessary to maintain a stable beam position within the area of interest during the drop of the experiment capsule. The duration of one drop is 4.7 seconds. About 1500 images are captured and stored in the drop capsule's onboard 96 MB RAM image storage system. After recovery of the capsule and data, PC-based image processing software visualizes the movies and extracts physical information from the images. Now, after two and a half years of development, the system is operational and capable of high-temporal-resolution 2D LIPF measurements of OH, H2O, O2, and CO concentrations and of the 2D temperature distribution of these species.

  8. Measurement of liquid film flow on nuclear rod bundle in micro-scale by using very high speed camera system

    NASA Astrophysics Data System (ADS)

    Pham, Son; Kawara, Zensaku; Yokomine, Takehiko; Kunugi, Tomoaki

    2012-11-01

    Playing an important role in mass and heat transfer as well as in the safety of boiling water reactors, the liquid film flow on nuclear fuel rods has been studied by different measurement techniques such as ultrasonic transmission and conductivity probes. The experimental data obtained for this annular two-phase flow, however, are still insufficient to construct a physical model for critical heat flux analysis, especially at the micro-scale. The remaining problems are mainly caused by the complicated geometry of fuel rod bundles and the high velocity and very unstable interface behavior of the liquid and gas flows. To overcome these difficulties, a new approach using a very high speed digital camera system is introduced in this work. The test section, simulating a 3×3 rectangular rod bundle, was made of acrylic to allow full optical access for the camera. Image data were taken through a Cassegrain optical system to maintain a spatiotemporal resolution of up to 7 μm and 20 μs. The results include not only real-time visual information on flow patterns, but also quantitative data such as liquid film thickness, droplet size and speed distributions, and the tilt angle of wavy surfaces. These databases could contribute to the development of a new model for annular two-phase flow. Partly supported by the Global Center of Excellence (G-COE) program (J-051) of MEXT, Japan.

  9. Sports video categorizing method using camera motion parameters

    NASA Astrophysics Data System (ADS)

    Takagi, Shinichi; Hattori, Shinobu; Yokoyama, Kazumasa; Kodate, Akihisa; Tominaga, Hideyoshi

    2003-06-01

    In this paper, we propose a content-based video categorization method for broadcast sports videos using camera motion parameters. We define and introduce two new features: the "camera motion extraction ratio" and the "camera motion transition". Camera motion parameters in a video sequence carry very significant information for categorizing broadcast sports video, because in most sports videos camera motions are closely related to the actions taken in the sport, which largely follow rules depending on the type of sport. Based on these characteristics, we design a sports video categorization algorithm for identifying six major sports types. In our algorithm, the features automatically extracted from videos are analyzed statistically. The experimental results show a clear tendency and demonstrate the applicability of the proposed method for sports genre identification.

  10. Video-Based Point Cloud Generation Using Multiple Action Cameras

    NASA Astrophysics Data System (ADS)

    Teo, T.

    2015-05-01

    Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting lens distortion before image matching. Once the cameras had been calibrated, the authors used them to take video in an indoor environment. The videos were then converted into multiple frame images based on the frame rates. To overcome time synchronization issues between videos from different viewpoints, an additional timer app was used to determine the time-shift factor between cameras for time alignment. A structure-from-motion (SfM) technique was utilized to obtain the image orientations, and the semi-global matching (SGM) algorithm was adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.

  11. A single pixel camera video ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.

    2017-02-01

    There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are sensitive to a variety of ocular conditions, such as defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real time using a single-pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device. At the same time, the inner-product intensity is measured for each pattern with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 × 128 px, with a real-time video frame rate of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared with a conventional multi-pixel-array-based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, allowing lower illumination of the eye and hence greater patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
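
    The measure-then-reconstruct procedure can be illustrated with orthogonal Hadamard patterns, where the image is recovered by the inverse transform of the per-pattern detector intensities. This is a toy simulation under assumptions: the paper does not specify its pattern set or solver, and a real DMD projects 0/1 patterns (with differential measurements emulating the ±1 entries used here):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Toy single-pixel imaging: project 16 patterns onto a flattened 4x4 "retina",
# record one detector intensity per pattern, then invert the transform.
N = 16
scene = np.arange(N, dtype=float)          # the unknown image, flattened
H = hadamard(N)                            # each row is one projected pattern
measurements = H @ scene                   # one inner product per pattern
reconstruction = (H.T @ measurements) / N  # orthogonality: H.T @ H = N*I
print(np.allclose(reconstruction, scene))  # → True
```

With noisy or undersampled measurements, compressive-sensing solvers replace the direct inverse, which is part of what makes the approach robust to the optical restrictions mentioned above.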

  12. Distribution of biomolecules in porous nitrocellulose membrane pads using confocal laser scanning microscopy and high-speed cameras.

    PubMed

    Mujawar, Liyakat Hamid; Maan, Abid Aslam; Khan, Muhammad Kashif Iqbal; Norde, Willem; van Amerongen, Aart

    2013-04-02

    The main focus of our research was to study the distribution of inkjet-printed biomolecules in porous nitrocellulose membrane pads of different brands. We produced microarrays of fluorophore-labeled IgG and bovine serum albumin (BSA) on FAST, Unisart, and Oncyte-Avid slides and compared the spot morphology of the inkjet-printed biomolecules. The distribution of these biomolecules within spots embedded in the nitrocellulose membrane was analyzed by confocal laser scanning microscopy in "Z"-stack mode. By applying a "concentric ring" format, the distribution profile of the fluorescence intensity in each horizontal slice was measured and represented in a graphical, color-coded way. Furthermore, a one-step diagnostic antibody assay was performed with a primary antibody, double-labeled amplicons, and fluorophore-labeled streptavidin in order to study the functionality and distribution of the immune complex in the nitrocellulose membrane slides. Under the conditions applied, the spot morphology and distribution of the primary labeled biomolecules were nonhomogeneous and doughnut-like on the FAST and Unisart nitrocellulose slides, whereas a better spot morphology with more homogeneously distributed biomolecules was observed on the Oncyte-Avid slide. Similar morphologies and distribution patterns were observed when the diagnostic one-step nucleic acid microarray immunoassay was performed on these nitrocellulose slides. We also investigated possible reasons for the differences in the observed spot morphology by monitoring the dynamic behavior of a liquid droplet on and in these nitrocellulose slides. Using high-speed cameras, we analyzed the wettability and fluid flow dynamics of a droplet on the various nitrocellulose substrates. The spreading of the liquid droplet was comparable for the FAST and Unisart slides but slower for the Oncyte-Avid slide. These spreading and penetration results help explain the differences in the observed spot morphology.

  13. Non-invasive seedingless measurements of the flame transfer function using high-speed camera-based laser vibrometry

    NASA Astrophysics Data System (ADS)

    Gürtler, Johannes; Greiffenhagen, Felix; Woisetschläger, Jakob; Haufe, Daniel; Czarske, Jürgen

    2017-06-01

    The characterization of modern jet engines and stationary gas turbines running with lean combustion by means of swirl-stabilized flames necessitates seedingless optical field measurements of the flame transfer function, i.e., the ratio of the fluctuating heat release rate inside the flame volume to the unsteady flow velocity at the combustor outlet, each normalized by its time average. For this reason, a high-speed camera-based laser interferometric vibrometer is proposed for spatio-temporally resolved measurements of the flame transfer function inside a swirl-stabilized, technically premixed flame. Each pixel provides line-of-sight measurements of the heat release rate due to its linear coupling to fluctuations of the refractive index along the laser beam, which arise from density fluctuations inside the flame volume. Additionally, field measurements of the unsteady flow velocity are possible by correlating simultaneously measured pixel signals given the known distance between the measurement positions. Thus, the new system enables, for the first time, spatially resolved detection of the flame transfer function and of unsteady flow behavior within a single measurement. The presented setup offers single-pixel resolution with measurement rates up to 40 kHz at a maximum image resolution of 256 × 128 px. Based on a comparison with reference measurements using a standard pointwise laser interferometric vibrometer, the new system is validated and a discussion of the measurement uncertainty is presented. Finally, the measurement of refractive index fluctuations inside a flame volume is demonstrated.
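
    The flame transfer function itself is the ratio of normalized heat-release and velocity fluctuations at the forcing frequency. A sketch of how it could be evaluated from two simultaneously sampled signals — the signal names, sampling values, and single-bin evaluation are illustrative assumptions, not the paper's processing chain:

```python
import numpy as np

def flame_transfer_function(q, u, fs, f0):
    """FTF(f0) = (Q'(f0)/Q_mean) / (u'(f0)/u_mean) for simultaneously
    sampled heat-release-rate signal q and velocity signal u at rate fs."""
    q, u = np.asarray(q, float), np.asarray(u, float)
    f = np.fft.rfftfreq(len(q), 1.0 / fs)
    k = np.argmin(np.abs(f - f0))       # spectral bin at the forcing frequency
    Qk = np.fft.rfft(q - q.mean())[k]
    Uk = np.fft.rfft(u - u.mean())[k]
    return (Qk / q.mean()) / (Uk / u.mean())

t = np.arange(1000) / 1000.0            # 1 s of data at 1 kHz (illustrative)
u = 10.0 + np.cos(2 * np.pi * 50 * t)   # 10% velocity fluctuation at 50 Hz
q = 5.0 + np.cos(2 * np.pi * 50 * t)    # 20% heat-release fluctuation
gain = abs(flame_transfer_function(q, u, 1000.0, 50.0))  # gain of 2.0
```

The complex value also carries the phase lag between velocity and heat-release fluctuations, which matters for thermoacoustic stability analysis.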

  14. A compact single-camera system for high-speed, simultaneous 3-D velocity and temperature measurements.

    SciTech Connect

    Lu, Louise; Sick, Volker; Frank, Jonathan H.

    2013-09-01

    The University of Michigan and Sandia National Laboratories collaborated on the initial development of a compact single-camera approach for simultaneously measuring 3-D gas-phase velocity and temperature fields at high frame rates. A compact diagnostic tool is desired to enable investigations of flows with limited optical access, such as near-wall flows in an internal combustion engine. These in-cylinder flows play a crucial role in improving engine performance. Thermographic phosphors were proposed as flow and temperature tracers to extend the capabilities of a novel, compact 3D velocimetry diagnostic to include high-speed thermometry. Ratiometric measurements were performed using two spectral bands of laser-induced phosphorescence emission from BaMg2Al10O17:Eu (BAM) phosphors in a heated air flow to determine the optimal optical configuration for accurate temperature measurements. The originally planned multi-year research project ended prematurely after the first year due to the Sandia-sponsored student leaving the research group at the University of Michigan.

  15. Large Area Divertor Temperature Measurements Using A High-speed Camera With Near-infrared FiIters in NSTX

    SciTech Connect

    Lyons, B C; Zweben, S J; Gray, T K; Hosea, J; Kaita, R; Kugel, H W; Maqueda, R J; McLean, A G; Roquemore, A L; Soukhanovskii, V A

    2011-04-05

    Fast cameras already installed on the National Spherical Torus Experiment (NSTX) have been equipped with near-infrared (NIR) filters in order to measure the surface temperature in the lower divertor region. Such a system provides a unique combination of high speed (> 50 kHz) and wide field of view (> 50% of the divertor). Benchtop calibrations demonstrated the system's ability to measure thermal emission down to 330 °C. There is also, however, significant plasma light background in NSTX. Without improvements in background reduction, the current system is incapable of measuring signals below the background-equivalent temperature (600-700 °C). Thermal signatures have been detected in cases of extreme divertor heating. It is observed that the divertor can reach temperatures around 800 °C when high harmonic fast wave (HHFW) heating is used. These temperature profiles were fit using a simple heat diffusion code, providing a measurement of the heat flux to the divertor. Comparisons to other infrared thermography systems on NSTX are made.

  16. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumps using a low-cost, high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with the HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed perfect agreement (ICC = 1, p < 0.0001) between the two observers' measurements of flight time and jump height, and highly reliable agreement (ICC = 0.997, p < 0.0001) between the measurements of flight time and jump height obtained using the HSC-Kinovea method and those obtained using the IR system, explaining 99.5% (p < 0.0001) of the variance shared with the IR platform. As a result, besides requiring no previous experience with this technology, the HSC-Kinovea method can be considered to provide measurements of flight time and vertical jump height as valid and reliable as those of more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athletes' vertical jumps.
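
    The flight-time-to-height conversion used in such studies is the standard ballistic relation h = g·t²/8 (take-off and landing at the same height). A sketch, noting that the exact constants used in the study are not given in the abstract:

```python
def jump_height(flight_time_s, g=9.81):
    """Vertical jump height (m) from flight time (s), assuming
    symmetric ballistic flight: h = g * t^2 / 8."""
    return g * flight_time_s ** 2 / 8.0

# At 240 fps, flight time resolves in steps of 1/240 s; a 0.5 s flight
# corresponds to a jump of about 0.31 m.
print(round(jump_height(0.5), 3))  # → 0.307
```

The quadratic dependence on t means frame-rate resolution dominates the height error, which is why a 240 fps camera performs so much better than standard 30 fps video here.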

  17. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on `a priori' information. It accesses out-the-window `snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a `clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.

  18. High-definition slit-lamp video camera system.

    PubMed

    Yamamoto, Satoru; Manabe, Noriyoshi; Yamamoto, Kenji

    2010-01-01

    Using a high-definition video camera for slit-lamp examination is now possible with the assistance of an adaptor. The authors describe the easy manipulation, convenience of use, and performance of a high-definition slit-lamp video camera system and provide images of eyes that were obtained using the system.

  19. Initial laboratory evaluation of color video cameras: Phase 2

    SciTech Connect

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  20. Digitizing Standard & High Resolution High Frame Rate Video Camera Signals

    NASA Astrophysics Data System (ADS)

    Simmons, David M.

    1988-03-01

    Digitizing an analog video camera signal requires special techniques to accurately sample the signal. Careful attention must be paid to both amplitude and timing considerations. Specifications exist that define the amplitude and timing parameters of so-called "standard" cameras. Recent advances in CCD technology have led to the development of high-resolution line-scan and area cameras. Unfortunately, these cameras do not conform to any published standard. Hardware designed to digitize these "non-standard" cameras must have a flexible architecture to accommodate each camera's particular interface requirements.

  1. Analyzing the Dynamics and Morphology of Cast-off Pattern at Different Speed Levels Using High-speed Digital Video Imaging.

    PubMed

    Kunz, Sebastian Niko; Adamec, Jiri; Grove, Christina

    2017-03-01

    During a bloodstain pattern analysis, one of the essential tasks is to distinguish between different kinds of applied forces as well as to estimate their level of intensity. In this study, high-speed digital imaging was used to analyze the formation of cast-off patterns generated by a simulated backswing with a blood-bearing object. For this purpose, 0.5 mL of blood was applied evenly over the last 5 cm of a blade simulant. Bloodstains were created through the controlled acceleration of a backswing at different speed levels between 1.1 m/sec and 3.8 m/sec. The flight dynamics of the blood droplets were captured with an Olympus i-Speed 3 high-speed digital camera fitted with a Nikon AF Nikkor 50 mm f/1.8 D lens and analyzed using the Olympus i-Speed 3 Viewer software. The video analysis showed that, during the backswing, blood droplets moved toward the lower end of the knife point and were tangentially thrown off. These droplets impacted the horizontal surface according to the arc of the swing. An increase in velocity led to longer cast-off patterns with distinct morphological characteristics. Under laboratory conditions, bloodstain pattern analysis allows certain conclusions about the intensity of a backswing and provides indications of the position of the offender. However, due to the number of unknown variables at a crime scene, such interpretation of cast-off patterns is extremely limited and should be performed with extreme caution.

  2. The influence of bur blade concentricity on high-speed tooth-cutting interactions: a video-rate confocal microscopic study.

    PubMed

    Watson, T F; Cook, R J

    1995-11-01

    This study aimed to determine the degree of eccentricity between different tungsten carbide bur manufacturing techniques and to study the effect of bur inaccuracy on dental enamel. Error in bur concentricity may arise from malalignment of the steel shaft and carbide head in a two-piece construction bur. Cutting blades rotate at multiple radii from the shaft axis, potentially producing vibrations and damage to the cut substrate. Techniques now allow for the manufacture of one-piece tungsten carbide burs with strength adequate to withstand lateral loading. A comparison of tungsten carbide dental cutting tools revealed the true extent of concentricity errors. Variation in alignment of the cutting head and shaft in the two-part constructions incurred between 20 and 50 microns of additional axial error. High-speed cutting interactions between carbide burs and dental enamel were studied by means of a video-rate confocal microscope. A cutting stage fitted to a Tandem Scanning Microscope (TSM) allowed for real-time dynamic image acquisition. Images were captured and retrieved by means of a low-light-level camera recording directly to S-VHS videotape. Video footage showing the interactions of high-speed rotary cutting instruments (at 120,000 rpm) was taken under simulated normal wet-cutting environments, and the consequent damage to the tooth tissue was observed as it occurred. Concentrically engineered bur types produced a superior quality cut surface at the entry, exit, and advancing front aspects of a cavity, as well as less subsurface cracking. Imaging of the coolant water film local to recent cutting operations showed regular spherical cutting debris of 6 to 18 microns diameter from the concentric tools, whereas the less-well-engineered burs produced ragged, irregular chips with 25 to 40 microns diameter debris, indicating far more aggressive cutting actions. This study has shown that there is reduced substrate damage with high-concentricity carbide burs.

  3. High speed imaging technology: yesterday, today, and tomorrow

    NASA Astrophysics Data System (ADS)

    Pendley, Gil J.

    2003-07-01

    The purpose of this discussion is to provide readers with an overview of high-speed imaging technology as a means of analyzing objects in motion that move too fast for the eye to see or for conventional photography or video to capture. This information is intended to provide a brief historical narrative from the inception of high-speed imaging in the USA to the acceptance of digital video technology to augment or replace high-speed motion picture cameras. It is not intended as a definitive work on the subject. For those interested in greater detail, such as application techniques, formulae, and very-high-speed and ultra-high-speed technology, I recommend the latest text on the subject: High Speed Photography and Photonics, first published in 1997 by Focal Press in the UK and copyrighted by the Association for High Speed Photography in the United Kingdom.

  4. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    PubMed Central

    Leitritz, Martin Alexander; Ziemssen, Focke; Bartz-Schmidt, Karl Ulrich; Voykov, Bogomil

    2014-01-01

    Purpose To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements, to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods A high-speed camera taking 300 frames per second observed the movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were −0.32 mm (range −0.69 to 0.024) and 0.175 mm (range −0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurement of iris-claw and angle-supported intraocular lens movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility under kinetic stress during eye movements. Long-term assessment by high-speed analysis with higher case numbers has to clarify the relationship between progressing motility and endothelial cell damage. PMID:25071365

  5. Not So Fast: Swimming Behavior of Sailfish during Predator-Prey Interactions using High-Speed Video and Accelerometry.

    PubMed

    Marras, Stefano; Noda, Takuji; Steffensen, John F; Svendsen, Morten B S; Krause, Jens; Wilson, Alexander D M; Kurvers, Ralf H J M; Herbert-Read, James; Boswell, Kevin M; Domenici, Paolo

    2015-10-01

    Billfishes are considered among the fastest swimmers in the oceans. Despite early estimates of extremely high speeds, more recent work showed that these predators (e.g., blue marlin) spend most of their time swimming slowly, rarely exceeding 2 m s(-1). Predator-prey interactions provide a context within which one may expect maximal speeds both by predators and prey. Beyond speed, however, an important component determining the outcome of predator-prey encounters is unsteady swimming (i.e., turning and accelerating). Although large predators are faster than their small prey, the latter show higher performance in unsteady swimming. To counter the evading behaviors of their highly maneuverable prey, sailfish and other large aquatic predators possess morphological adaptations, such as elongated bills, which can be moved more rapidly than the whole body itself, facilitating capture of the prey. Therefore, it is an open question whether such supposedly very fast swimmers do use high-speed bursts when feeding on evasive prey, in addition to using their bill for slashing prey. Here, we measured the swimming behavior of sailfish by using high-frequency accelerometry and high-speed video observations during predator-prey interactions. These measurements allowed analyses of tail beat frequencies to estimate swimming speeds. Our results suggest that sailfish burst at speeds of about 7 m s(-1) and do not exceed swimming speeds of 10 m s(-1) during predator-prey interactions. These speeds are much lower than previous estimates. In addition, the oscillations of the bill during swimming with, and without, extension of the dorsal fin (i.e., the sail) were measured. We suggest that extension of the dorsal fin may allow sailfish to improve the control of the bill and minimize its yaw, hence preventing disturbance of the prey. Therefore, sailfish, like other large predators, may rely mainly on accuracy of movement and the use of the extensions of their bodies, rather than resorting
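Estimating speed from tail-beat frequency, as done above, commonly assumes the fish advances a fixed fraction of its body length per beat. The stride-length ratio, body length, and frequency below are placeholders, not values from the study:

```python
def swim_speed(tailbeat_hz, body_length_m, stride_length_ratio=0.7):
    """Estimate swimming speed from tail-beat frequency.

    Assumes each tail-beat cycle advances the fish a fixed fraction of
    its body length (the stride-length ratio; 0.7 is an illustrative
    placeholder, not a value measured in the study).
    """
    return tailbeat_hz * body_length_m * stride_length_ratio

# A hypothetical 2 m sailfish beating its tail at 5 Hz:
speed = swim_speed(5.0, 2.0)  # 7.0 m/s, within the burst range quoted above
```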

  6. Fused Six-Camera Video of STS-134 Launch

    NASA Image and Video Library

    Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

  7. Station Cameras Capture New Videos of Hurricane Katia

    NASA Image and Video Library

    Aboard the International Space Station, external cameras captured new video of Hurricane Katia as it moved northwest across the western Atlantic north of Puerto Rico at 10:35 a.m. EDT on September ...

  8. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  9. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  10. Time-synchronized high-speed video images, electric fields, and currents of rocket-and-wire triggered lightning

    NASA Astrophysics Data System (ADS)

    Biagi, C. J.; Hill, J. D.; Jordan, D. M.; Uman, M. A.; Rakov, V. A.

    2009-12-01

    We present novel observations of 20 classically-triggered lightning flashes from the 2009 summer season at the International Center for Lightning Research and Testing (ICLRT) in north-central Florida. We focus on: (1) upward positive leaders (UPL), (2) current decreases and current reflections associated with the destruction of the triggering wire, and (3) dart-stepped leader propagation involving space stems or space leaders ahead of the leader tip. High-speed video data were acquired 440 m from the triggered lightning using a Phantom v7.1 operating at frame rates of up to 10 kfps (90 µs frame time) with a field of view from ground to an altitude of 325 m and a Photron SA1.1 operating at frame rates of up to 300 kfps (3.3 µs frame time) that viewed from ground to an altitude of 120 m. These data were acquired along with time-synchronized measurements of electric field (dc to 3 MHz) and channel-base current (dc to 8 MHz). The sustained UPLs developed when the rockets were between altitudes of 100 m and 200 m, and accelerated from about 10^4 to 10^5 m s^-1 from the top of the triggering wire to an altitude of 325 m. In each successive 10 kfps high-speed video image, the newly formed UPL channels were brighter than the previously established channel and the new channel segments were longer. The UPLs in two flashes were imaged at a frame rate of 300 kfps from the top of the wire to about 10 m above the wire (110 m to 120 m above ground). In these images the UPL developed in a stepped manner with luminosity waves traveling from the channel tip back toward the wire during a time of 2 to 3 frames (6.6 µs to 9.9 µs). The new channel segments were on average 1 m in length and the average interstep interval was 23 µs. During 13 of the 20 initial continuous currents, an abrupt current decrease and the beginning of the wire illumination (due to its melting) occurred simultaneously to within 1 high-speed video frame (between 3.3 µs and 10 µs). For two of the triggered
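The step length and interstep interval reported above imply an average leader propagation speed of roughly 4 x 10^4 m s^-1, consistent with the quoted range. A one-line check:

```python
def leader_speed(step_length_m, interstep_interval_s):
    # Average propagation speed of a stepped leader: one step per interval.
    return step_length_m / interstep_interval_s

# 1 m steps every 23 microseconds, as reported above:
v = leader_speed(1.0, 23e-6)  # ~4.3e4 m/s, within the 1e4-1e5 m/s range
```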

  11. Analysis of unstructured video based on camera motion

    NASA Astrophysics Data System (ADS)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done in the management of "structured" video such as movies, sports, and television programs that have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, in particular video shot by a typical unprofessional user (i.e., home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then relate the camera behavior in each segment to the subjective importance of its content and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames (keyframes) for the scenes are determined and aggregated to summarize the video sequence.
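Temporal segmentation based on motion vectors can be sketched very simply: split the sequence wherever the mean displacement magnitude crosses a threshold. This is a simplified stand-in for the camera-viewing-direction segmentation described above, not the authors' algorithm; all values are illustrative.

```python
def segment_by_motion(mean_motion_magnitudes, threshold):
    """Split a frame sequence into segments wherever the average motion
    displacement magnitude crosses a threshold.

    Returns a list of (start_frame, end_frame) index pairs.
    """
    segments, start = [], 0
    moving = mean_motion_magnitudes[0] > threshold
    for i, mag in enumerate(mean_motion_magnitudes[1:], 1):
        if (mag > threshold) != moving:
            segments.append((start, i - 1))
            start, moving = i, mag > threshold
    segments.append((start, len(mean_motion_magnitudes) - 1))
    return segments

# Still camera for 3 frames, a pan for 3 frames, still again for 2:
segs = segment_by_motion([0.1, 0.2, 0.1, 5.0, 6.0, 5.5, 0.2, 0.1], 1.0)
# segs == [(0, 2), (3, 5), (6, 7)]
```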

  12. A comparison of DIC and grid measurements for processing spalling tests with the VFM and an 80-kpixel ultra-high speed camera

    NASA Astrophysics Data System (ADS)

    Saletti, D.; Forquin, P.

    2016-05-01

    During the last decades, the spalling technique has been used more and more to characterize the tensile strength of geomaterials at high strain rates. In 2012, a new processing technique was proposed by Pierron and Forquin [1] to measure the stress level and apparent Young's modulus in a concrete sample by means of an ultra-high speed camera, a grid bonded onto the sample and the Virtual Fields Method. However, the possible benefit of using the DIC (Digital Image Correlation) technique instead of the grid method has not been investigated. In the present work, spalling experiments were performed on two aluminum alloy samples with an HPV1 (Shimadzu) ultra-high speed camera providing a 1 Mfps maximum recording frequency and about 80 kpixel spatial resolution. A grid with a 1 mm pitch was bonded onto the first sample, whereas a speckle pattern covered the second sample for DIC measurements. Both methods were evaluated in terms of displacement and acceleration measurements by comparing the experimental data to laser interferometer measurements. In addition, the stress and strain levels in a given cross-section were compared to the experimental data provided by a strain gage glued on each sample. The measurements allow us to discuss the benefit of each technique (grid and DIC) for obtaining the stress-strain relationship when using an 80-kpixel ultra-high speed camera.

  13. Three-dimensional digital image correlation technique using single high-speed camera for measuring large out-of-plane displacements at high framing rates.

    PubMed

    Pankow, Mark; Justusson, Brian; Waas, Anthony M

    2010-06-10

    We are concerned with the development of a three-dimensional (3D) full-field high-speed digital image correlation (DIC) measurement system using a single camera, specifically aimed at measuring large out-of-plane displacements. A system has been devised to record images at ultrahigh speeds using a single camera and a series of mirrors. These mirrors effectively converted a single camera into two virtual cameras that view a specimen surface from different angles and capture two images simultaneously. This pair of images enables one to perform DIC measurements to obtain 3D displacement fields at high framing rates. Bench testing along with results obtained using a shock wave blast test facility are used to show the validity of the method.
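The mirror arrangement described above yields two virtual viewpoints, so depth can be recovered with the standard stereo relation Z = f·B/d. This is the textbook relation, not the paper's specific calibration; the focal length, baseline, and disparity below are illustrative values only.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Textbook stereo relation Z = f * B / d.

    The mirrors effectively provide the baseline B between the two
    virtual cameras; a matched feature's disparity d (in pixels)
    then yields its out-of-plane distance Z.
    """
    return focal_px * baseline_m / disparity_px

# Made-up numbers: 1200 px focal length, 0.15 m virtual baseline,
# 30 px disparity for a matched speckle:
z = depth_from_disparity(1200.0, 0.15, 30.0)  # 6.0 m
```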

  14. Implicit Memory in Monkeys: Development of a Delay Eyeblink Conditioning System with Parallel Electromyographic and High-Speed Video Measurements.

    PubMed

    Kishimoto, Yasushi; Yamamoto, Shigeyuki; Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka

    2015-01-01

    Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) incidence of more than 60% of trials during the 5-day acquisition phase and an extinguished CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system incorporating two different measuring principles enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates.

  15. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  17. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  19. High-speed video gait analysis reveals early and characteristic locomotor phenotypes in mouse models of neurodegenerative movement disorders.

    PubMed

    Preisig, Daniel F; Kulic, Luka; Krüger, Maik; Wirth, Fabian; McAfoose, Jordan; Späni, Claudia; Gantenbein, Pascal; Derungs, Rebecca; Nitsch, Roger M; Welt, Tobias

    2016-09-15

    Neurodegenerative diseases of the central nervous system frequently affect the locomotor system, resulting in impaired movement and gait. In this study we performed whole-body high-speed video gait analysis in three different mouse lines modeling neurodegenerative movement disorders to investigate the motor phenotype. Based on precise computerized motion tracking of all relevant joints and the tail, a custom-developed algorithm generated individual and comprehensive locomotor profiles consisting of 164 spatial and temporal parameters. Gait changes observed in the three models corresponded closely to the classical clinical symptoms described in these disorders: muscle atrophy due to motor neuron loss in SOD1 G93A transgenic mice led to a gait characterized by changes in hind-limb movement and positioning. In contrast, locomotion in huntingtin N171-82Q mice modeling Huntington's disease with basal ganglia damage was defined by hyperkinetic limb movements and rigidity of the trunk. Harlequin mutant mice modeling cerebellar degeneration showed gait instability and extensive changes in limb positioning. Moreover, model-specific gait parameters were identified and shown to be more sensitive than conventional motor tests. Altogether, this technique provides new opportunities to decipher underlying disease mechanisms and test novel therapeutic approaches.

  20. Synchronised electrical monitoring and high speed video of bubble growth associated with individual discharges during plasma electrolytic oxidation

    NASA Astrophysics Data System (ADS)

    Troughton, S. C.; Nominé, A.; Nominé, A. V.; Henrion, G.; Clyne, T. W.

    2015-12-01

    Synchronised electrical current and high-speed video information are presented from individual discharges on Al substrates during PEO processing. The exposure time was 8 μs and the linear spatial resolution 9 μm. Image sequences were captured for periods of 2 s, during which the sample surface was illuminated with short-duration flashes (revealing bubbles formed where the discharge reached the surface of the coating). Correlations were thus established between discharge current, light emission from the discharge channel and the (externally-illuminated) dimensions of the bubble as it expanded and contracted. Bubbles reached radii of 500 μm within periods of 100 μs, with peak growth velocities of about 10 m/s. It is deduced that bubble growth occurs as a consequence of the progressive volatilisation of water (electrolyte), without substantial increases in either pressure or temperature within the bubble. Current continues to flow through the discharge as the bubble expands, and this growth (and the related increase in electrical resistance) is thought to be responsible for the current being cut off (soon after the point of maximum radius). A semi-quantitative audit is presented of the transformations between different forms of energy that take place during the lifetime of a discharge.
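The reported figures imply an average radial growth velocity of about 5 m/s (500 μm in 100 μs), consistent with the peak value of about 10 m/s quoted above. A one-line check:

```python
def mean_growth_velocity(final_radius_m, growth_time_s):
    # Average radial expansion velocity of the discharge bubble.
    return final_radius_m / growth_time_s

v_avg = mean_growth_velocity(500e-6, 100e-6)  # 5.0 m/s
```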

  1. High-speed x-ray video demonstrates significant skin movement errors with standard optical kinematics during rat locomotion

    PubMed Central

    Bauman, Jay M.; Chang, Young-Hui

    2009-01-01

    The sophistication of current rodent injury and disease models outpaces that of the most commonly used behavioral assays. The first objective of this study was to measure rat locomotion using high-speed x-ray video to establish an accurate baseline for rat hindlimb kinematics. The second objective was to quantify the kinematics errors due to skin movement artefacts by simultaneously recording and comparing hindlimb kinematics derived from skin markers and from direct visualization of skeletal landmarks. Joint angle calculations from skin-derived kinematics yielded errors as high as 39° in the knee and 31° in the hip around paw contact with respect to the x-ray data. Triangulation of knee position from the ankle and hip skin markers provided closer, albeit still inaccurate, approximations of bone-derived, x-ray kinematics. We found that soft tissue movement errors are the result of multiple factors, the most impressive of which is overall limb posture. Treadmill speed had surprisingly little effect on kinematics errors. These findings illustrate the significance and context of skin movement error in rodent kinematics. PMID:19900476

  2. High-speed X-ray video demonstrates significant skin movement errors with standard optical kinematics during rat locomotion.

    PubMed

    Bauman, Jay M; Chang, Young-Hui

    2010-01-30

    The sophistication of current rodent injury and disease models outpaces that of the most commonly used behavioral assays. The first objective of this study was to measure rat locomotion using high-speed X-ray video to establish an accurate baseline for rat hindlimb kinematics. The second objective was to quantify the kinematics errors due to skin movement artefacts by simultaneously recording and comparing hindlimb kinematics derived from skin markers and from direct visualization of skeletal landmarks. Joint angle calculations from skin-derived kinematics yielded errors as high as 39 degrees in the knee and 31 degrees in the hip around paw contact with respect to the X-ray data. Triangulation of knee position from the ankle and hip skin markers provided closer, albeit still inaccurate, approximations of bone-derived, X-ray kinematics. We found that soft tissue movement errors are the result of multiple factors, the most impressive of which is overall limb posture. Treadmill speed had surprisingly little effect on kinematics errors. These findings illustrate the significance and context of skin movement error in rodent kinematics. (c) 2009 Elsevier B.V. All rights reserved.
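Joint angles such as those compared above are typically computed from three marker positions (e.g., hip, knee, and ankle for the knee angle). A minimal 2D sketch using the dot-product formula; the coordinates are chosen only as a sanity check, not taken from the study:

```python
import math

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) formed by markers a-b-c in 2D,
    e.g. the knee angle from hip, knee, and ankle coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / (n1 * n2)))

# A right-angled marker configuration as a sanity check:
angle = joint_angle_deg((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))  # 90.0
```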

  3. BEHAVIORAL INTERACTIONS OF THE BLACK IMPORTED FIRE ANT (SOLENOPSIS RICHTERI FOREL) AND ITS PARASITOID FLY (PSEUDACTEON CURVATUS BORGMEIER) AS REVEALED BY HIGH-SPEED VIDEO.

    USDA-ARS?s Scientific Manuscript database

    High-speed video recordings were used to study the interactions between the phorid fly (Pseudacteon curvatus), and the black imported fire ant (Solenopsis richteri) in the field. Phorid flies are extremely fast agile fliers that can hover and fly in all directions. Wingbeat frequency recorded with...

  4. Controlled Impact Demonstration (CID) tail camera video

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Controlled Impact Demonstration (CID) was a joint research project by NASA and the FAA to test a survivable aircraft impact using a remotely piloted Boeing 720 aircraft. The tail camera movie is one shot running 27 seconds. It shows the impact from the perspective of a camera mounted high on the vertical stabilizer, looking forward over the fuselage and wings.

  5. Burbank uses video camera during installation and routing of HRCS Video Cables

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  6. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ~0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ~0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.

  7. Improving photometric calibration of meteor video camera systems

    NASA Astrophysics Data System (ADS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag , and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
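The calibration described above ultimately ties camera counts to reference-star magnitudes through a photometric zero-point, via the standard relation m = -2.5 log10(counts) + ZP. A minimal sketch of that textbook relation with made-up counts and magnitudes (not values from the study):

```python
import math

def zero_point(ref_magnitude, ref_counts):
    # ZP chosen so that m = -2.5*log10(counts) + ZP reproduces the
    # known magnitude of the reference star.
    return ref_magnitude + 2.5 * math.log10(ref_counts)

def calibrated_magnitude(counts, zp):
    # Instrumental magnitude corrected by the zero-point.
    return -2.5 * math.log10(counts) + zp

# Hypothetical reference star of magnitude 5.0 producing 10,000 counts:
zp = zero_point(5.0, 10_000.0)
m = calibrated_magnitude(2_500.0, zp)  # a 4x fainter source: ~6.51 mag
```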

  8. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically at up to about 1 million frames s^-1 at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and contains no moving parts, unlike, for instance, rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the operating conditions that minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique. From these tests, it has been possible to characterize the camera behaviour and use this information to improve actual measurements.

  9. X-ray imaging with ePix100a: a high-speed, high-resolution, low-noise camera

    NASA Astrophysics Data System (ADS)

    Blaj, G.; Caragiulo, P.; Dragone, A.; Haller, G.; Hasi, J.; Kenney, C. J.; Kwiatkowski, M.; Markovic, B.; Segal, J.; Tomada, A.

    2016-09-01

    The ePix100A camera is a 0.5 megapixel (704 x 768 pixels) camera for low-noise x-ray detection applications requiring high spatial and spectral resolution. The camera is built around a hybrid pixel detector consisting of four ePix100a ASICs flip-chip bonded to one sensor. The pixels are 50 μm x 50 μm (active sensor size 35.4 mm x 38.6 mm), with a noise of 180 eV rms, a range of 100 photons of 8 keV, and a current frame rate of 240 Hz (with an upgrade path towards 10 kHz). This performance yields a camera combining high dynamic range, high signal-to-noise ratio, high speed, and excellent linearity and spectroscopic performance. While the ePix100A ASIC has been developed for pulsed-source applications (e.g., free-electron lasers), it performs well with more common sources (e.g., x-ray tubes, synchrotron radiation). Several cameras have been produced and characterized, and the results are reported here, along with x-ray imaging applications demonstrating the camera performance.
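
    The dynamic range implied by the quoted figures can be checked with a short calculation using only numbers stated in the abstract (a full-scale signal of 100 photons at 8 keV against 180 eV rms noise):

```python
import math

photon_energy_ev = 8000.0      # 8 keV photons
full_range_photons = 100       # stated per-pixel range
noise_ev_rms = 180.0           # stated rms noise

full_scale_ev = full_range_photons * photon_energy_ev   # 800 keV full scale
dynamic_range = full_scale_ev / noise_ev_rms            # roughly 4400:1
snr_single_photon = photon_energy_ev / noise_ev_rms     # ~44 per 8 keV photon
dynamic_range_db = 20 * math.log10(dynamic_range)       # ~73 dB
```

    A single-photon signal-to-noise ratio near 44 is what makes the camera spectroscopic: individual 8 keV photons stand well clear of the noise floor.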

  10. Remote control video cameras on a suborbital rocket

    SciTech Connect

    Wessling, Francis C.

    1997-01-10

    Three video cameras aboard a sub-orbital rocket were controlled in real time from the ground during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the selection of the analog video signal sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors, and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space, as were the accelerometers.

  12. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    NASA Astrophysics Data System (ADS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produce a huge amount of data as a result of the number of frames per second, and the data must be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC; at the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to a personal computer.
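
    The Camera Link bandwidth figure bounds the achievable frame rate. A rough budget can be sketched as follows, assuming (hypothetically) a 256 x 256 pixel array read out at 16 bits per pixel; the abstract does not state these particulars:

```python
LINK_BYTES_PER_S = 800e6        # Camera Link peak, per the abstract (800 MB/s)
pixels = 256 * 256              # assumed layout of a "65k pixel" sensor
bytes_per_pixel = 2             # assumed 16-bit photon-counter depth

frame_bytes = pixels * bytes_per_pixel      # 131072 bytes per frame
max_fps = LINK_BYTES_PER_S / frame_bytes    # ~6100 frames/s link ceiling
```

    Under these assumptions the link, not the sensor, caps sustained readout at roughly six thousand frames per second, which is why the FPGA must stream data continuously rather than buffer and burst.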

  13. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    PubMed

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique for calibrating camera motions in basketball videos. Our method transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video and then warp the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is that it makes queries of basketball videos explicit, so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.
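
    The rectification step, mapping tracked image positions onto standard court coordinates, amounts to applying a 3 x 3 planar homography to each trajectory point. A minimal sketch follows; the matrix here is an invented example, not one estimated by the paper's method:

```python
def apply_homography(H, x, y):
    """Map an image point (x, y) to court coordinates via a 3x3 homography."""
    # Homogeneous multiply followed by the perspective divide.
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Toy homography: scale by 0.1, translate, and add a mild perspective term.
H = [[0.1, 0.0,   2.0],
     [0.0, 0.1,   1.0],
     [0.0, 0.001, 1.0]]
court_xy = apply_homography(H, 320.0, 240.0)
trajectory = [apply_homography(H, px, py) for px, py in [(100, 50), (120, 60)]]
```

    In the paper's pipeline the homography is estimated once for the panoramic court, so every frame's player positions can be rectified with the same cheap per-point operation shown here.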

  14. Study of fiber-tip damage mechanism during Ho:YAG laser lithotripsy by high-speed camera and the Schlieren method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Getzan, Grant; Xuan, Jason R.; Yu, Honggang

    2015-02-01

    Fiber-tip degradation, damage, or burn back is a common problem during the ureteroscopic laser lithotripsy procedure to treat urolithiasis. Fiber-tip burn back results in reduced transmission of laser energy, which greatly reduces the efficiency of stone comminution. In some cases, the fiber-tip degradation is so severe that the damaged fiber-tip absorbs most of the laser energy, which can overheat the tip portion and melt the cladding or jacket layers of the fiber. Though it is known that the higher the energy density (the ratio of the laser pulse energy to the cross-sectional area of the fiber core), the faster the fiber-tip degradation, the damage mechanism of the fiber-tip is still unclear. In this study, fiber-tip degradation was investigated by visualization of shockwaves, cavitation/bubble dynamics, and calculus debris ejection with a high-speed camera and the Schlieren method. A commercial pulsed Ho:YAG laser at 2.12 μm, 273/365/550 μm core fibers, and calculus phantoms (plaster of Paris, 10 x 10 x 10 mm cubes) were used to mimic the laser lithotripsy procedure. Laser-induced shockwaves, cavitation/bubble dynamics, and stone debris ejection were recorded by a high-speed camera at frame rates of 10,000 to 930,000 fps. The results suggest that visualizing the shockwave with a high-speed camera and the Schlieren method provides valuable information about time-dependent acoustic energy propagation and its interaction with cavitation and calculus. A detailed investigation of acoustic energy beam shaping by fiber-tip modification, and of the interaction between shockwaves, cavitation/bubble dynamics, and calculus debris ejection, will be conducted as a future study.

  15. Cranz-Schardin camera with a large working distance for the observation of small scale high-speed flows

    NASA Astrophysics Data System (ADS)

    Skupsch, C.; Chaves, H.; Brücker, C.

    2011-08-01

    The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single-CCD cameras. The laser provides light pulses with an energy in the range of 25 mJ and a duration of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescence dye in a cuvette; the laser beam's coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination, and different fiber lengths delay the illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 x 10^6 frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed; the initial phase of the formation of the jet structure is recorded.
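
    The fiber-length trick works because light in a fiber of length L and group index n arrives after a time n*L/c, so each successive fiber must be longer by dt*c/n. A sketch of that sizing, assuming a typical silica group index of about 1.46 (the paper does not state the fiber type), with an arbitrary 1 m base length:

```python
C = 299_792_458.0          # speed of light in vacuum, m/s
n_group = 1.46             # assumed group index of a silica fiber

def extra_length_for_delay(delay_s, n=n_group):
    """Additional fiber length needed to delay a pulse by delay_s seconds."""
    return delay_s * C / n

dt = 25e-9                                       # 25 ns interframe time
step = extra_length_for_delay(dt)                # ~5.1 m of extra fiber per frame
lengths = [1.0 + k * step for k in range(4)]     # four fibers, 1 m base length
frame_rate = 1.0 / dt                            # 40e6 frames per second
```

    So the four consecutive frames are produced passively, with no fast electronics: each camera simply sees its pulse about 5 m of fiber (25 ns) later than the previous one.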

  16. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needle-like shape in a single high-current electrical discharge. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high-repetition-rate pulse laser was constructed. A nanosecond laser with a wavelength of 532 nm was used as the illumination source for a newly developed high-speed video camera, the HPV-1. The time resolution of our system, determined by the laser pulse width, was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths; therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of the plasma during the discharge and of thermal radiation from the electrode after the discharge were observed. These results demonstrate that combining a high-repetition-rate, short-pulse laser with a high-speed video camera provides a unique and powerful method for high-speed imaging.

  17. Voice Controlled Stereographic Video Camera System

    NASA Astrophysics Data System (ADS)

    Goode, Georgianna D.; Philips, Michael L.

    1989-09-01

    For several years various companies have been developing voice recognition software. Yet, there are few applications of voice control in the robotics field and virtually no examples of voice controlled three dimensional (3-D) systems. In late 1987 ARD developed a highly specialized, voice controlled 3-D vision system for use in remotely controlled, non-tethered robotic applications. The system was designed as an operator's aid and incorporates features thought to be necessary or helpful in remotely maneuvering a vehicle. Foremost is the three dimensionality of the operator's console display. An image that provides normal depth perception cues over a range of depths greatly increases the ease with which an operator can drive a vehicle and investigate its environment. The availability of both vocal and manual control of all system functions allows the operator to guide the system according to his personal preferences. The camera platform can be panned +/-178 degrees and tilted +/-30 degrees for a full range of view of the vehicle's environment. The cameras can be zoomed and focused for close inspection of distant objects, while retaining substantial stereo effect by increasing the separation between the cameras. There is a ranging and measurement function, implemented through a graphical cursor, which allows the operator to mark objects in a scene to determine their relative positions. This feature will be helpful in plotting a driving path. The image seen on the screen is overlaid with icons and digital readouts which provide information about the position of the camera platform, the range to the graphical cursor and the measurement results. The cursor's "range" is actually the distance from the cameras to the object on which the cursor is resting. Other such features are included in the system and described in subsequent sections of this paper.

  18. Lock-in spectroscopy employing a high-speed camera and a micro-scanner for volumetric investigations of unsteady flows.

    PubMed

    Fischer, Andreas; Schlüßler, Raimund; Haufe, Daniel; Czarske, Jürgen

    2014-09-01

    Spectroscopic methods are established tools for nonintrusive measurements of flow velocity. However, those methods are restricted either to pointwise measurements or to low measurement rates of several hertz. To investigate fast unsteady phenomena, e.g., in sprays, volumetric (3D) measurement techniques with kHz rates are required. For this purpose, a spectroscopic technique is realized with a power-amplified, frequency-modulated laser and an Mfps high-speed camera. This allows fast, continuous, planar measurements of the velocity. Volumetric data are finally obtained by slewing the laser light sheet in depth with an oscillating microelectromechanical systems (MEMS) scanner. As a result, volumetric velocity measurements are obtained for 256 x 128 x 25 voxels over 14.4 mm x 7.2 mm x 6.5 mm with a repetition rate of 1 kHz, which allows the investigation of unsteady phenomena in sprays such as transients and local velocity oscillations. The respective measurement capabilities are demonstrated by experiments. Hence, significant progress regarding the data rate was achieved in spectroscopy by using the Mfps high-speed camera, which enables new application fields such as the analysis of fast unsteady phenomena.

  19. Development of high-speed InGaAs linear array and camera for OCT and machine vision

    NASA Astrophysics Data System (ADS)

    Malchow, Douglas S.; Brubaker, Robert M.; Nguyen, Hai; Flynn, Kevin

    2008-02-01

    Spectral Domain Optical Coherence Tomography (SD-OCT) is a rapidly growing imaging technique for high-resolution visualization of structures within strongly scattering media. It is being used to create 2-D and 3-D images in biological tissues to measure structures in the eye, image abnormal growths in organ tissue, and assess the health of arterial walls. The ability to image to depths of several millimeters with resolutions better than 5 microns has driven the need to maximize the image depth while also increasing the imaging speed. Researchers are using short-wave infrared light at wavelengths from 1 to 1.6 microns to penetrate deeper into dense tissue than visible or NIR wavelengths allow. This, in turn, has created the need to increase the line rates of InGaAs linear array cameras by a factor of ten, while also increasing gain and reducing dead time. This paper describes the development and characterization of a 1024-pixel linear array with 25 micron pitch and a readout rate of over 45,000 lines per second, and of the resulting camera. The camera will also have applications in machine vision inspection of hot glass gobs, sorting of fast-moving agricultural materials, and quality control of pharmaceutical products.

  20. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…
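
    The analysis this activity describes, reading a falling ball's position off successive video frames and extracting the acceleration, can be sketched as follows. The 30 frames-per-second rate and the synthetic drop positions are assumptions for illustration, not data from the article:

```python
fps = 30.0                     # assumed video frame rate
dt = 1.0 / fps
g_true = 9.81                  # used only to synthesize sample positions

# Synthetic drop distances (m) a student would read off ten video frames.
y = [0.5 * g_true * (i * dt) ** 2 for i in range(10)]

# Central second differences give the acceleration at each interior frame.
accels = [(y[i + 1] - 2 * y[i] + y[i - 1]) / dt ** 2 for i in range(1, len(y) - 1)]
g_est = sum(accels) / len(accels)   # recovers ~9.81 m/s^2
```

    With real frame-by-frame measurements the second differences scatter around g, and averaging them (or fitting a parabola to y versus t) is the standard classroom treatment.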

  1. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  4. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  5. Can video cameras replace visual estrus detection in dairy cows?

    PubMed

    Bruyère, P; Hétreau, T; Ponsart, C; Gatien, J; Buff, S; Disenhaus, C; Giroud, O; Guérin, P

    2012-02-01

    A 6-mo experiment was conducted in a dairy herd to evaluate a video system for estrus detection. From October 2007 to April 2008, 35 dairy cows of three breeds that ranged in age from 2 to 6 yr were included in the study. Four daylight cameras were set up in two free stalls with straw litter and connected to a computer equipped with specific software to detect movement. This system allowed the continuous observation of the cows as well as video storage. An observation method based on the functionality of the video management software (the "Camera-Icons" method) was used to detect the standing mount position and was compared with direct visual observation (the direct visual method). Both methods were based on the visualization of the standing mount position. A group of profile photos consisting of the full face, left side, right side, and back of each cow was used to identify animals in the videos. Milk progesterone profiles allowed the determination of ovulatory periods (the reference method), and a total of 84 ovulatory periods were used. Data obtained by direct visual estrus detection were used as a control. Excluding the first postpartum ovulatory periods, the "Camera-Icons" method allowed the detection of 80% of the ovulatory periods versus 68.6% with the direct visual method (control) (P = 0.07); the "Camera-Icons" method therefore gave results at least similar to those of the direct visual method. When the two methods were combined, the detection rate was 88.6%, significantly higher than that of the direct visual method alone (P < 0.0005). Eight to 32 min (mean 20 min) were needed daily to analyze the stored images. Compared with the 40 min (four periods of 10 min) dedicated to the direct visual method, the video survey system saved time; we conclude that it can also replace direct visual estrus detection.

  6. Relationship between structures of sprite streamers and inhomogeneity of preceding halos captured by high-speed camera during a combined aircraft and ground-based campaign

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sato, M.; Kudo, T.; Shima, Y.; Kobayashi, N.; Inoue, T.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Yair, Y.; Lyons, W. A.; Cummer, S. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The relationship between diffuse glows, such as elves and sprite halos, and the subsequent discrete structure of sprite streamers is considered one of the keys to explaining the large variation of sprite structures. However, it is not easy to image both the diffuse and discrete structures simultaneously at a high frame rate, since doing so requires high sensitivity, high spatial resolution, and a high signal-to-noise ratio. To capture the true spatial structure of TLEs without the influence of atmospheric absorption, spacecraft would be the best platform. However, since imaging observations from space are mostly made for TLEs appearing near the horizon, the range from the spacecraft to the TLEs becomes large, up to a few thousand km, resulting in low spatial resolution. Aircraft, in contrast, can approach a thunderstorm to within a few hundred km or less and can carry heavy high-speed cameras with large data memories. In the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of the NHK Cosmic Shore project, was carried out with two jet airplanes in collaboration between NHK (Japan Broadcasting Corporation) and universities. On 8 of 16 standby nights, the jets took off from an airport near Denver, Colorado, and an airborne high-speed camera captured over 40 TLE events at a frame rate of 8300 frames/s. Here we present the time development of sprite streamers and both the large-scale and fine structures of the preceding halos, whose inhomogeneity suggests a mechanism for the large variation of sprite types, such as crown-like sprites.

  7. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images in software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to move the subtraction into hardware to speed up the process and allow more complex algorithms to be performed, both in hardware and in software.
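
    The software baseline being replaced is easy to state: every pixel is touched once per frame pair, which is why it dominated the frame interval at 256 x 256. A pure-Python sketch of consecutive-frame subtraction with a noise threshold (the threshold value is an illustrative assumption):

```python
def subtract_frames(prev, curr, threshold=10):
    """Absolute difference of two grayscale frames, zeroing changes
    below `threshold` to suppress sensor noise."""
    h, w = len(prev), len(prev[0])
    diff = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            d = abs(curr[r][c] - prev[r][c])
            diff[r][c] = d if d >= threshold else 0
    return diff

# Tiny 3x3 example: only one pixel changes enough to survive the threshold.
a = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
b = [[10, 10, 10], [10, 90, 10], [10, 12, 10]]
d = subtract_frames(a, b)   # the center pixel's change (80) remains nonzero
```

    In hardware the same per-pixel operation is pipelined at pixel-clock rate, freeing the processor for the target-identification logic.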

  8. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An ET Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40% field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank on the side opposite the orbiter, there were 2 blade S-Band antennas, each about 2 1/2 inches long, that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  10. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    NASA Astrophysics Data System (ADS)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also an efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction; this allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase of the proposed CNN+SVM machine-learning approach and the capability of real-time high
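
    The division of labor described above, a fixed CNN producing feature vectors and a fast linear SVM trained on them, can be illustrated with a toy hinge-loss (Pegasos-style) trainer on made-up 2-D "features"; the real system's frequency-domain GPU training and CNN encoder are of course far beyond this sketch:

```python
def train_linear_svm(samples, labels, lam=0.01, epochs=200):
    """Subgradient descent on the regularized hinge loss (Pegasos-style).

    samples : list of feature vectors; labels : +1/-1 per sample.
    Returns a weight vector w with an appended bias term.
    """
    dim = len(samples[0]) + 1
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            t += 1
            eta = 1.0 / (lam * t)
            xb = list(x) + [1.0]                     # bias feature
            margin = y * sum(wi * xi for wi, xi in zip(w, xb))
            w = [(1 - eta * lam) * wi for wi in w]   # regularization shrink
            if margin < 1:                           # hinge subgradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, xb)]
    return w

def predict(w, x):
    xb = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) >= 0 else -1

# Toy "CNN features": two well-separated clusters standing in for
# target / non-target image chips.
feats = [(2.0, 1.5), (2.5, 2.0), (3.0, 1.0), (-2.0, -1.5), (-2.5, -2.0), (-3.0, -1.0)]
labs = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(feats, labs)
preds = [predict(w, x) for x in feats]
```

    Because the expensive part (feature extraction) is trained once and frozen, retraining for a new target class reduces to rerunning a cheap linear solve like this one over the new feature vectors.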

  11. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
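
    The line-extraction step relies on the standard (rho, theta) Hough transform: each candidate court-line pixel votes for every parameterized line passing through it, and real lines appear as peaks in the accumulator. A minimal sketch on a synthetic edge map (not the paper's implementation, which first filters pixels by color and texture):

```python
import math

def hough_lines(points, rho_max, theta_steps=180):
    """Accumulate (rho, theta) votes for a set of (x, y) edge points.

    rho is quantized to integers in [0, rho_max]; theta to whole degrees
    in [0, 180). Returns the accumulator as {(rho, theta_deg): votes}.
    """
    acc = {}
    for x, y in points:
        for t in range(theta_steps):
            th = math.radians(t)
            rho = round(x * math.cos(th) + y * math.sin(th))
            if 0 <= rho <= rho_max:
                key = (rho, t)
                acc[key] = acc.get(key, 0) + 1
    return acc

# Synthetic vertical court line at x = 5 (ten collinear edge pixels).
pts = [(5, y) for y in range(10)]
acc = hough_lines(pts, rho_max=20)
best = max(acc, key=acc.get)   # peak at rho = 5 (a vertical line)
```

    With this coarse quantization several near-vertical theta bins can tie at the peak vote count, which is why practical implementations follow the accumulator search with local peak refinement, as the combinatorial model-matching stage in the paper effectively does.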

  12. Application of a digital high-speed camera system for combustion research by using UV laser diagnostics under microgravity at Bremen drop tower

    NASA Astrophysics Data System (ADS)

    Renken, Hartmut; Bolik, T.; Eigenbrod, Ch.; Koenig, Jens; Rath, Hans J.

    1997-05-01

    This paper describes a digital high-speed camera and recording system that will be used primarily for combustion research under microgravity (μg) at the Bremen drop tower. To study the reaction zones during combustion, OH radicals in particular are detected in 2D using the method of laser-induced predissociation fluorescence (LIPF). A pulsed high-energy excimer laser system combined with a two-stage intensified CCD camera allows a repetition rate of 250 images (256 x 256 pixels) per second, matching the maximum laser pulse repetition rate. The laser system is integrated at the top of the 110 m high evacuable drop tube. Motorized mirrors are necessary to maintain a stable beam position within the area of interest during the drop of the experiment capsule. The duration of one drop is 4.7 seconds of microgravity conditions. About 1500 images are captured and stored in the drop capsule's onboard 96 Mbyte RAM image storage system. After recovery of the capsule and the data, PC-based image processing software visualizes the image sequences and extracts physical information from the images. Now, after two and a half years of development, the system is operational and capable of high-temporal-resolution 2D LIPF measurements of OH, H2O, O2, and CO concentrations and of the 2D temperature distribution of these species.
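
    The capture numbers can be sanity-checked against the 96 Mbyte onboard store: 250 frames/s of 256 x 256 images over a 4.7 s drop, assuming (hypothetically) 8 bits per pixel, which the abstract does not state:

```python
fps = 250                 # images per second (laser repetition rate)
drop_s = 4.7              # microgravity duration per drop
width = height = 256
bytes_per_pixel = 1       # assumed 8-bit pixels

frames_in_drop = round(fps * drop_s)          # 1175 frames during the drop itself
frame_bytes = width * height * bytes_per_pixel
capacity_frames = 96 * 2**20 // frame_bytes   # frames that fit in 96 MiB
```

    Under the 8-bit assumption the store holds about 1536 frames, consistent with the abstract's "about 1500 images": enough for the 1175 in-drop frames plus some margin before release and after capture stops.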

  13. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
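
The camera/object-motion statistics can be computed cheaply from a P-frame's macroblock motion-vector field; a hedged NumPy sketch (the array layout and feature names are illustrative, not taken from the paper):

```python
import numpy as np

def motion_stats(mv):
    """Summary features over a (rows, cols, 2) grid of macroblock motion
    vectors, usable as cheap compressed-domain classification features."""
    mag = np.hypot(mv[..., 0], mv[..., 1])
    mean_v = mv.reshape(-1, 2).mean(axis=0)
    return {
        "mean_mag": float(mag.mean()),   # overall amount of motion
        "var_mag": float(mag.var()),     # uniform pan vs. scattered object motion
        "pan_x": float(mean_v[0]),       # dominant horizontal camera motion
        "pan_y": float(mean_v[1]),       # dominant vertical camera motion
    }
```

A decision tree over features like these (plus replay and text scores) is the kind of classifier the abstract describes.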

  14. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BVRI filters. In particular, we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.
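
A typical ingredient of such a calibration is a robust photometric zero point fitted against catalog stars; a simplified sketch (sigma-clipped median of per-star offsets; not necessarily the MEO's algorithm):

```python
import numpy as np

def zero_point(instr_mag, catalog_mag, clip_sigma=3.0, max_iter=5):
    """Photometric zero point as a sigma-clipped median of per-star offsets."""
    d = np.asarray(catalog_mag, dtype=float) - np.asarray(instr_mag, dtype=float)
    for _ in range(max_iter):
        med, std = np.median(d), d.std()
        keep = np.abs(d - med) < clip_sigma * std
        if std == 0 or keep.all():
            break
        d = d[keep]                 # drop outliers (bad matches, clouds, etc.)
    return float(np.median(d))
```

The calibrated magnitude of a meteor segment is then its instrumental magnitude plus this zero point.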

  15. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    PubMed

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the magnified vision of surgeon, so as to make the viewer aware of the difference with the naked eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupes magnification and microsurgical technique: ten were recorded with GoPro(®) 4 Session action cam (commercially available) and ten with our new prototype of head mounted video camera. Settings were selected before surgery for both cameras. The recording time is about from 1 to 2 h for GoPro(®) and from 3 to 5 h for our prototype. The average time of preparation to fit the camera on the surgeon's head and set the functionality is about 5 min for GoPro(®) and 7-8 min for the prototype, mostly due to HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with GoPro(®) to highlight the surgical details. the present study showed that our prototype of video camera, compared with GoPro(®) 4 Session, guarantees best results in terms of surgical video recording quality, provides to the viewer the exact perspective of the microsurgeon and shows accurately his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  16. Measuring the accuracy of particle position and force in optical tweezers using high-speed video microscopy.

    PubMed

    Gibson, Graham M; Leach, Jonathan; Keen, Stephen; Wright, Amanda J; Padgett, Miles J

    2008-09-15

    We assess the performance of a CMOS camera for the measurement of particle position within optical tweezers and the associated autocorrelation function and power spectrum. Measurement of the displacement of the particle from the trap center can also be related to the applied force. By considering the Allan variance of these measurements, we show that such cameras are capable of reaching the thermal limits of nanometer and femtonewton accuracies, and hence are suitable for many of the applications that traditionally use quadrant photodiodes. As an example of a multi-particle measurement we show the hydrodynamic coupling between two particles.
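
The Allan variance used here to characterize measurement stability can be estimated from a position trace with the standard non-overlapping estimator (a minimal sketch):

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of trace x for averaging length m samples."""
    n = len(x) // m
    block_means = x[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)
```

Plotting this against the averaging time m/fps reveals where drift starts to dominate over thermal noise, which is how one identifies the thermally limited regime the abstract refers to.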

  17. Comparison of high speed imaging technique to laser vibrometry for detection of vibration information from objects

    NASA Astrophysics Data System (ADS)

    Paunescu, Gabriela; Lutzmann, Peter; Göhler, Benjamin; Wegner, Daniel

    2015-10-01

    The development of camera technology in recent years has made high speed imaging a reliable method in vibration and dynamic measurements. The passive recovery of vibration information from high speed video recordings was reported in several recent papers. A highly developed technique, involving decomposition of the input video into spatial subframes to compute local motion signals, allowed an accurate sound reconstruction. A simpler technique based on image matching for vibration measurement was also reported as efficient in extracting audio information from a silent high speed video. In this paper we investigate and discuss the sensitivity and the limitations of the high speed imaging technique for vibration detection in comparison to the well-established Doppler vibrometry technique. Experiments on the extension of the high speed imaging method to longer range applications are presented.
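
The image-matching variant mentioned above reduces, in one dimension, to finding the lag that maximizes cross-correlation between each frame's intensity profile and a reference frame (a toy sketch; practical systems add sub-pixel interpolation):

```python
import numpy as np

def frame_shift(ref, frame):
    """Integer displacement of `frame` relative to `ref` (1-D intensity
    profiles), found as the lag maximizing their cross-correlation."""
    c = np.correlate(frame - frame.mean(), ref - ref.mean(), mode="full")
    return int(np.argmax(c)) - (len(ref) - 1)
```

Evaluating this per frame yields a displacement time series whose spectrum carries the vibration (or audio) signal.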

  18. Observations and analysis of FTU plasmas by video cameras

    NASA Astrophysics Data System (ADS)

    De Angelis, R.; Di Matteo, L.

    2010-11-01

The interaction of the FTU plasma with the vessel walls and with the limiters is responsible for the release of hydrogen and impurities through various physical mechanisms (physical and chemical sputtering, desorption, etc.). In the cold plasma periphery, these particles are weakly ionised and emit mainly in the visible spectral range. A good description of the plasma periphery can then be obtained with video cameras. In FTU, small-size video cameras placed close to the plasma edge give wide-angle images of the plasma at a standard rate of 25 frames/s. Images are stored digitally, allowing their retrieval and analysis. This paper reports some of the most interesting features of the discharges evidenced by the images. As a first example, the accumulation of cold neutral gas in the plasma periphery above a density threshold (a phenomenon known as Marfe) can be seen on the video images as a toroidally symmetric band oscillating poloidally; on the multi-chord spectroscopy or bolometer channels, this appears only as a sudden rise of the signals whose overall behaviour could not be clearly interpreted. A second example is the identification of runaway discharges by the signature of the fast electrons emitting synchrotron radiation in their motion direction; this appears as a bean-shaped bright spot on one toroidal side, which reverses according to the plasma current direction. A relevant and potentially dangerous side effect of plasma discharges is the formation of dust as a consequence of strong plasma-wall interaction events; video images allow monitoring and possibly numerical estimation of the amount of dust produced in these events. Specialised software can automatically search the experimental database to identify relevant events, partly overcoming the difficulties associated with the very large amount of data produced by video techniques.

  19. A multiscale product approach for an automatic classification of voice disorders from endoscopic high-speed videos.

    PubMed

    Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Joerg

    2013-01-01

Direct observation of vocal fold vibration is indispensable for a clinical diagnosis of voice disorders. Among current imaging techniques, high-speed videoendoscopy constitutes a state-of-the-art method capturing several thousand frames per second of the vocal folds during phonation. Recently, a method for extracting descriptive features from phonovibrograms, two-dimensional images containing the spatio-temporal pattern of vocal fold dynamics, was presented. The derived features are closely related to a clinically established protocol for functional assessment of pathologic voices. The discriminative power of these features for different pathologic findings and configurations has not been assessed yet. In the current study, a collective of 220 subjects is considered for two- and multi-class problems of healthy and pathologic findings. The performance of the proposed feature set is compared to conventional feature reduction routines and was found to clearly outperform these. As such, the proposed procedure shows great potential for the diagnosis of vocal fold disorders.

  20. Refocusing images and videos with a conventional compact camera

    NASA Astrophysics Data System (ADS)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography such as portraits and creative photography. Since most existing digital refocusing methods rely on four-dimensional light field captured by special precisely manufactured devices or a sequence of images captured by a single camera, existing systems are either expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 megapixels) images and videos based on a single shot using an easy-to-build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.

  1. A new paradigm for video cameras: optical sensors

    NASA Astrophysics Data System (ADS)

    Grottle, Kevin; Nathan, Anoo; Smith, Catherine

    2007-04-01

This paper presents a new paradigm for the utilization of video surveillance cameras as optical sensors to augment and significantly improve the reliability and responsiveness of chemical monitoring systems. Incorporated into a hierarchical tiered sensing architecture, cameras serve as 'Tier 1' or 'trigger' sensors monitoring for visible indications after a release of warfare or industrial toxic chemical agents. No single sensor today detects the full range of these agents, but exposure is harmful and yields visible 'duress' behaviors. Duress behaviors range from simple to complex types of observable signatures. By incorporating optical sensors in a tiered sensing architecture, the resulting alarm signals based on these behavioral signatures increase the range of detectable toxic chemical agent releases and allow timely confirmation of an agent release. Given the rapid onset of duress-type symptoms, an optical sensor can detect the presence of a release almost immediately. This provides cues for a monitoring system to send air samples to a higher-tiered chemical sensor, quickly launch protective mitigation steps, and notify an operator to inspect the area using the camera's video signal well before the chemical agent can disperse widely throughout a building.

  2. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    PubMed

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.
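
As a crude stand-in for the constrained L1 optimization described above, the short-time-scale stabilization goal alone can be illustrated by low-pass filtering one camera-path parameter (illustrative only; the paper jointly optimizes several goals):

```python
import numpy as np

def smooth_path(path, window=9):
    """Moving-average smoothing of one camera-path parameter (e.g., pan angle).
    Edge padding keeps the output the same length as the input."""
    pad = window // 2                       # window is assumed odd
    padded = np.pad(np.asarray(path, dtype=float), pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")
```

Warping each frame by the difference between the original and smoothed paths is the classic stabilization step that the paper's richer optimization replaces.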

  3. A low-bandwidth graphical user interface for high-speed triage of potential items of interest in video imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Martin, Kevin; Chen, Yang

    2013-06-01

    In this paper, we introduce a user interface called the "Threat Chip Display" (TCD) for rapid human-in-the-loop analysis and detection of "threats" in high-bandwidth imagery and video from a list of "Items of Interest" (IOI), which includes objects, targets and events that the human is interested in detecting and identifying. Typically some front-end algorithm (e.g., computer vision, cognitive algorithm, EEG RSVP based detection, radar detection) has been applied to the video and has pre-processed and identified a potential list of IOI. The goal of the TCD is to facilitate rapid analysis and triaging of this list of IOI to detect and confirm actual threats. The layout of the TCD is designed for ease of use, fast triage of IOI, and a low bandwidth requirement. Additionally, a very low mental demand allows the system to be run for extended periods of time.

  4. Release and velocity of micronized dexamethasone implants with an intravitreal drug delivery system: kinematic analysis with a high-speed camera.

    PubMed

    Meyer, Carsten H; Klein, Adrian; Alten, Florian; Liu, Zengping; Stanzel, Boris V; Helb, Hans M; Brinkmann, Christian K

    2012-01-01

Ozurdex, a novel dexamethasone (DEX) implant, is released by a drug delivery system into the vitreous cavity. We analyzed the mechanical release aperture of the novel applicator, obtained real-time recordings using a high-speed camera system and performed kinematic analysis of the DEX application. Experimental study. The application of intravitreal DEX implants (6 mm length, 0.46 mm diameter; 700 μg DEX mass, 0.0012 g total implant mass) was recorded by a high-speed camera (500 frames per second) in water (Group A: n = 7) or vitreous (Group B: n = 7) filled tanks. Kinematic analysis calculated the initial muzzle velocity of the injected drug delivery system implant as well as its impact on the retinal surface at approximately 15 mm in both groups. A series of drug delivery system implant positions was obtained and graphically plotted over time. High-speed real-time recordings revealed that the entire movement of the DEX implant lasted between 28 milliseconds and 55 milliseconds in Group A and 1 millisecond and 7 milliseconds in Group B. The implants moved with a mean muzzle velocity of 820 ± 350 mm/s (±SD, range, 326-1,349 mm/s) in Group A and 817 ± 307 mm/s (±SD, range, 373-1,185 mm/s) in Group B. In both groups, the implant gradually decelerated because of drag force. With greater distances, the velocity of the DEX implant decreased exponentially to a complete stop at 13.9 mm to 24.7 mm in Group A and at 6.4 mm to 8.0 mm in Group B. Five DEX implants in Group A reached a total distance of more than 15 mm, and their calculated mean velocity at a retinal impact of 15 mm was 408 ± 145 mm/s (±SD, range, 322-667 mm/s), and the consecutive normalized energy was 0.55 ± 0.44 J/m (±SD). In Group B, none of the DEX implants reached a total distance of 6 mm or more. An accidental application at an angle of 30 degrees and consecutively reduced distance of approximately 6 mm may result in a mean velocity of 844 mm/s and mean normalized energy of 0.15 J/m (SD ± 0.47) in a
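
The kinematic quantities reported here come from frame-to-frame differencing of the tracked implant position; a schematic sketch with invented positions (at 500 frames per second each interval is 2 ms):

```python
def velocities(positions_mm, fps=500):
    """Frame-to-frame speeds (mm/s) from positions tracked in successive frames."""
    return [(b - a) * fps for a, b in zip(positions_mm, positions_mm[1:])]
```

Fitting or differencing such a speed series against distance is how the exponential deceleration and the velocity at a given retinal distance would be estimated.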

  5. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  6. Non-mydriatic, wide field, fundus video camera

    NASA Astrophysics Data System (ADS)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2mm from a circular field with 20° in diameter to a square field with 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  7. Scientists Behind the Camera - Increasing Video Documentation in the Field

    NASA Astrophysics Data System (ADS)

    Thomson, S.; Wolfe, J.

    2013-12-01

Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflict between the time, space, and storage constraints of scientists in the field and the demands of shooting high-quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field, and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14-page guide for scientists shooting in the field based on lessons learned. It contains key tips and best-practice techniques for shooting high-quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  8. Unusual features of negative leaders' development in natural lightning, according to simultaneous records of current, electric field, luminosity, and high-speed video

    NASA Astrophysics Data System (ADS)

    Guimaraes, Miguel; Arcanjo, Marcelo; Murta Vale, Maria Helena; Visacro, Silverio

    2017-02-01

    The development of downward and upward leaders that formed two negative cloud-to-ground return strokes in natural lightning, spaced only about 200 µs apart and terminating on ground only a few hundred meters away, was monitored at Morro do Cachimbo Station, Brazil. The simultaneous records of current, close electric field, relative luminosity, and corresponding high-speed video frames (sampling rate of 20,000 frames per second) reveal that the initiation of the first return stroke interfered in the development of the second negative leader, leading it to an apparent continuous development before the attachment, without stepping, and at a regular two-dimensional speed. Based on the experimental data, the formation processes of the two return strokes are discussed, and plausible interpretations for their development are provided.
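
Two-dimensional leader speeds of the kind quoted here come from differencing the leader-tip position between successive frames; a hedged sketch (the pixel scale and positions are invented; a real analysis would calibrate the image-plane scale from the known camera geometry):

```python
import math

def leader_speed_2d(p0, p1, fps=20000, m_per_pixel=1.0):
    """2-D leader-tip speed (m/s) between successive frames, given image-plane
    positions in pixels and an assumed scale in meters per pixel."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return math.hypot(dx, dy) * m_per_pixel * fps
```

Note this is a projected (two-dimensional) speed, a lower bound on the true three-dimensional propagation speed.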

  9. HiPERCAM: a high-speed quintuple-beam CCD camera for the study of rapid variability in the universe

    NASA Astrophysics Data System (ADS)

    Dhillon, Vikram S.; Marsh, Thomas R.; Bezawada, Naidu; Black, Martin; Dixon, Simon; Gamble, Trevor; Henry, David; Kerry, Paul; Littlefair, Stuart; Lunney, David W.; Morris, Timothy; Osborn, James; Wilson, Richard W.

    2016-08-01

HiPERCAM is a high-speed camera for the study of rapid variability in the Universe. The project is funded by a €3.5M European Research Council Advanced Grant. HiPERCAM builds on the success of our previous instrument, ULTRACAM, with very significant improvements in performance thanks to the use of the latest technologies. HiPERCAM will use 4 dichroic beamsplitters to image simultaneously in 5 optical channels covering the u'g'r'i'z' bands. Frame rates of over 1000 per second will be achievable using an ESO CCD controller (NGC), with every frame GPS timestamped. The detectors are custom-made, frame-transfer CCDs from e2v, with 4 low noise (2.5e-) outputs, mounted in small thermoelectrically-cooled heads operated at 180 K, resulting in virtually no dark current. The two reddest CCDs will be deep-depletion devices with anti-etaloning, providing high quantum efficiencies across the red part of the spectrum with no fringing. The instrument will also incorporate scintillation noise correction via the conjugate-plane photometry technique. The opto-mechanical chassis will make use of additive manufacturing techniques in metal to make a light-weight, rigid and temperature-invariant structure. First light is expected on the 4.2m William Herschel Telescope on La Palma in 2017 (on which the field of view will be 10' with a 0.3"/pixel scale), with subsequent use planned on the 10.4m Gran Telescopio Canarias on La Palma (on which the field of view will be 4' with a 0.11"/pixel scale) and the 3.5m New Technology Telescope in Chile.
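
The quoted fields of view follow directly from pixel count and plate scale (for instance, roughly 2000 pixels at 0.3"/pixel span the stated 10'); a trivial check:

```python
def fov_arcmin(n_pixels, arcsec_per_pixel):
    """Linear field of view in arcminutes for a given pixel count and plate scale."""
    return n_pixels * arcsec_per_pixel / 60.0
```

The ~2000-pixel figure is our inference from the stated numbers, not a specification from the abstract.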

  10. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy on the order of millimeters over distances on the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
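
Converting pixel coordinates to workspace coordinates with a single calibrated camera requires an extra constraint, such as a target at a known depth; a minimal pinhole-model sketch (the intrinsic matrix K and the known-depth assumption are illustrative, not the system's actual calibration):

```python
import numpy as np

def pixel_to_plane(u, v, K, z):
    """Back-project pixel (u, v) to the 3-D point at depth z along its viewing
    ray, using pinhole intrinsics K (no lens distortion modeled)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray * (z / ray[2])       # scale the ray to the target depth
```

In practice the depth constraint comes from the known target geometry (e.g., target size or multiple target points), which is what makes single-camera metrology possible.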

  11. Deep-Sea Video Cameras Without Pressure Housings

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2004-01-01

Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (~0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. If

  12. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  14. High speed photography, videography, and photonics IV; Proceedings of the Meeting, San Diego, CA, Aug. 19, 20, 1986

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor)

    1986-01-01

    Various papers on high-speed photography, videography, and photonics are presented. The general topics addressed include: photooptical and video instrumentation, streak camera data acquisition systems, photooptical instrumentation in wind tunnels, applications of holography and interferometry in wind tunnel research programs, and data analysis for photooptical and video instrumentation.

  15. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    NASA Technical Reports Server (NTRS)

    Gradl, Paul

    2016-01-01

Paired images were collected using a projected pattern instead of the standard painted speckle pattern on the subject's abdomen. High-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh rate of the projector. To ensure that the kick and movement data were real, a background test was conducted with no baby movement (to correct for breathing and body motion).

  16. Initiation and propagation of cloud-to-ground lightning observed with a high-speed video camera.

    PubMed

    Tran, M D; Rakov, V A

    2016-12-21

Complete evolution of a lightning discharge, from its initiation at an altitude of about 4 km to its ground attachment, was optically observed for the first time at the Lightning Observatory in Gainesville, Florida. The discharge developed during the late stage of a cloud flash and was initiated in a decayed branch of the latter. The initial channel section was intermittently illuminated for over 100 ms, until a bidirectionally extending channel (leader) was formed. During the bidirectional leader extension, the negative end exhibited optical and radio-frequency electromagnetic features expected for negative cloud-to-ground strokes developing in virgin air, while the positive end most of the time appeared to be inactive or showed intermittent channel luminosity enhancements. The development of the positive end involved an abrupt creation of a 1-km long, relatively straight branch with a streamer corona burst at its far end. This 1-km jump appeared to occur in virgin air at a remarkably high effective speed of the order of 10^6 m/s. The positive end of the bidirectional leader connected to another bidirectional leader to form a larger bidirectional leader, whose negative end attached to the ground and produced a 36-kA return stroke.

  17. Initiation and propagation of cloud-to-ground lightning observed with a high-speed video camera

    PubMed Central

    Tran, M. D.; Rakov, V. A.

    2016-01-01

    Complete evolution of a lightning discharge, from its initiation at an altitude of about 4 km to its ground attachment, was optically observed for the first time at the Lightning Observatory in Gainesville, Florida. The discharge developed during the late stage of a cloud flash and was initiated in a decayed branch of the latter. The initial channel section was intermittently illuminated for over 100 ms, until a bidirectionally extending channel (leader) was formed. During the bidirectional leader extension, the negative end exhibited optical and radio-frequency electromagnetic features expected for negative cloud-to-ground strokes developing in virgin air, while the positive end most of the time appeared to be inactive or showed intermittent channel luminosity enhancements. The development of the positive end involved an abrupt creation of a 1-km long, relatively straight branch with a streamer corona burst at its far end. This 1-km jump appeared to occur in virgin air at a remarkably high effective speed of the order of 10^6 m/s. The positive end of the bidirectional leader connected to another bidirectional leader to form a larger bidirectional leader, whose negative end attached to the ground and produced a 36-kA return stroke. PMID:28000746

  18. Initiation and propagation of cloud-to-ground lightning observed with a high-speed video camera

    NASA Astrophysics Data System (ADS)

    Tran, M. D.; Rakov, V. A.

    2016-12-01

    Complete evolution of a lightning discharge, from its initiation at an altitude of about 4 km to its ground attachment, was optically observed for the first time at the Lightning Observatory in Gainesville, Florida. The discharge developed during the late stage of a cloud flash and was initiated in a decayed branch of the latter. The initial channel section was intermittently illuminated for over 100 ms, until a bidirectionally extending channel (leader) was formed. During the bidirectional leader extension, the negative end exhibited optical and radio-frequency electromagnetic features expected for negative cloud-to-ground strokes developing in virgin air, while the positive end most of the time appeared to be inactive or showed intermittent channel luminosity enhancements. The development of the positive end involved an abrupt creation of a 1-km long, relatively straight branch with a streamer corona burst at its far end. This 1-km jump appeared to occur in virgin air at a remarkably high effective speed of the order of 10^6 m/s. The positive end of the bidirectional leader connected to another bidirectional leader to form a larger bidirectional leader, whose negative end attached to the ground and produced a 36-kA return stroke.

  19. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  1. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Astrophysics Data System (ADS)

    Ponseggi, B. G.; Johnson, H. C.

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  2. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  3. [Research Award providing funds for a tracking video camera

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3 m square arena as they navigate singly indoors, guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the retinal height that they learned during training. Thus, their distance from the base of a tall wall is greater than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. They raise many questions for future research that we are now pursuing.
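    The distance scaling described above follows from simple viewing geometry: a wall of height h subtends a fixed retinal elevation angle θ only at distance d = h/tan θ, so a taller wall must be viewed from further away to appear at the learned height. A minimal illustrative sketch of that relation (not the authors' model; the function name and angle values are hypothetical):

```python
import math

def viewing_distance(wall_height_m, retinal_angle_deg):
    """Distance at which a wall of the given height subtends a fixed
    retinal elevation angle: d = h / tan(theta). Illustrative only."""
    return wall_height_m / math.tan(math.radians(retinal_angle_deg))

# A wall twice as tall must be viewed from twice the distance to keep
# the same retinal elevation angle.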

  5. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems. PMID:28208622

  6. Robust Video Stabilization Using Particle Keypoint Update and l₁-Optimized Camera Path.

    PubMed

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-02-10

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems.
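    The middle step of the three-step pipeline described above, camera path estimation and smoothing, can be sketched as follows. This is a generic illustration, not the authors' method: a simple moving-average filter stands in for the paper's l1-optimized, total-variation-minimizing path, and per-frame motion estimates (e.g. dx, dy per frame from feature matching) are assumed to be given.

```python
import numpy as np

def smooth_path(raw_path, radius=15):
    """Moving-average smoothing of a cumulative camera path, one column
    per motion parameter. A stand-in for an l1-optimized (TV-minimizing)
    path; edge padding avoids shrinking the sequence at the ends."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(raw_path, ((radius, radius), (0, 0)), mode="edge")
    cols = [np.convolve(padded[:, i], kernel, mode="valid")
            for i in range(raw_path.shape[1])]
    return np.column_stack(cols)

def stabilizing_corrections(per_frame_motion):
    """Integrate per-frame motion into a trajectory, smooth it, and
    return the per-frame correction: (smoothed path) - (actual path)."""
    path = np.cumsum(per_frame_motion, axis=0)
    return smooth_path(path) - path
```

With perfectly steady motion the interior corrections are zero, since a symmetric average of a linear trajectory reproduces it; only genuinely jittery components are removed.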

  7. High speed imaging television system

    DOEpatents

    Wilkinson, William O.; Rabenhorst, David W.

    1984-01-01

    A television system for observing an event, which provides a composite video output comprising the serially interlaced images from a plurality of individual cameras, such that the effective time resolution of the system is greater than the time resolution of any of the individual cameras.

  8. A new, accurate and easy to implement camera and video projector model.

    PubMed

    Hoppe, Harald; Däuber, Sascha; Kübler, Carsten; Raczkowsky, Jörg; Wörn, Heinz

    2002-01-01

    In 2000, the Institute for Process Control and Robotics at the Universität Karlsruhe (TH) developed a prototype system for projector-based augmented reality consisting of a state-of-the-art PC, two CCD cameras, and a video projector, which is used for registration and projection of surgical planning data. Tracking, registration, and projection all require an accurate calibration process for the cameras and the video projector. We have developed a new, flexible, simple, and easy-to-implement model that can be used to calibrate both cameras and video projectors.

  9. High speed data compactor

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1988-06-04

    A method and apparatus for suppressing, from transmission, non-informational data words from a source of data words such as a video camera. Data words with values greater than a predetermined threshold are transmitted, whereas data words with values below the threshold are not transmitted; instead, their occurrences are counted. Before transmission, the counts of invalid data words and the valid data words are appended with flag digits, which a receiving system decodes. The original data stream is fully reconstructable from the stream of valid data words and the counts of invalid data words.
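    The scheme this abstract describes is essentially threshold-gated run-length encoding. A minimal sketch under assumed framing (the one-bit flag and in-memory tuples are illustrative; the patent appends flag digits to transmitted words, and suppressed sub-threshold values are reconstructed here as a fill value since only their count is kept):

```python
# Assumed 1-bit flag prepended to each output word.
VALID, COUNT = 0x1, 0x0

def compact(words, threshold):
    """Encode a stream, replacing each run of sub-threshold (invalid)
    words by a single (COUNT, run_length) entry."""
    out, run = [], 0
    for w in words:
        if w >= threshold:
            if run:                      # flush the pending run count
                out.append((COUNT, run))
                run = 0
            out.append((VALID, w))       # valid word passes through
        else:
            run += 1                     # invalid word: just count it
    if run:
        out.append((COUNT, run))
    return out

def expand(encoded, fill=0):
    """Reconstruct the stream; suppressed words become `fill`."""
    out = []
    for flag, value in encoded:
        out.extend([fill] * value if flag == COUNT else [value])
    return out
```

For video, where sub-threshold words carry no information, replacing them with a constant fill value recovers the useful content while the encoded stream stays short whenever the frame is mostly dark.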

  10. Optical buffer films for high-speed interferometric imaging

    NASA Astrophysics Data System (ADS)

    Lysogorski, Charles D.

    1995-02-01

    To understand wind tunnel flow-field turbulence, it is necessary to understand how high-speed (kHz) transient events develop in time. The framing rates necessary to record such imagery are too high for conventional video camera systems. While high-speed, film-based cameras (e.g. Cordin drum film recorders) have sufficient spatial resolution and framing rates, analyzing the data acquired with these cameras is time consuming, possibly taking days to process and digitize the film images. These limitations in existing digital imaging technologies and pulsed flow-field illumination systems have prevented digital movies of phenomena in turbulent and unstable flow-field regions from being made with sufficient spatial and temporal resolution. To address this need, I present two techniques that can record data onto an intermediate optical buffer with the desired temporal and spatial resolution. These optical buffers incorporate real-time erasable recording films, consisting of a phosphor or bacteriorhodopsin (BR), that temporarily store the images recorded at kHz rates. These images are then reconstructed and digitized at standard video rates and stored on an optical disk. The primary advantage of this technique is the ability to record images at extremely fast rates (60 kHz or faster) and then digitize them at standard video recording rates.

  11. A New Methodology for Studying Dynamics of Aerosol Particles in Sneeze and Cough Using a Digital High-Vision, High-Speed Video System and Vector Analyses

    PubMed Central

    Nishimura, Hidekazu; Sakata, Soichiro; Kaga, Akikazu

    2013-01-01

    Microbial pathogens of respiratory infectious diseases are often transmitted through particles in sneeze and cough. Therefore, understanding particle movement is important for infection control. Images of a sneeze induced by nasal cavity stimulation in healthy adult volunteers were taken by a digital high-vision, high-speed video system equipped with a computer system and treated as a research model. The obtained images were enhanced electronically, converted to digital images every 1/300 s, and subjected to vector analysis of the bioparticles contained in the whole sneeze cloud using automatic image processing software. The initial velocity of the particles or their clusters in the sneeze was greater than 6 m/s, but decreased as the particles moved forward; the momentum of the particles seemed to be lost by 0.15–0.20 s, after which they started a diffusion movement. An approximate equation for their velocity as a function of elapsed time was obtained from the vector analysis to represent the dynamics of the front-line particles. This methodology was also applied to a cough. Microclouds contained in smoke exhaled with a voluntary cough by a volunteer after smoking one breath of cigarette were traced as visible, aerodynamic surrogates for the invisible bioparticles of cough. The smoke cough microclouds had an initial velocity greater than 5 m/s. The fastest microclouds were located at the forefront of the cloud mass moving forward; however, their velocity clearly decreased after 0.05 s and they began to diffuse in the environmental airflow. The maximum direct reaches of the particles and microclouds driven by sneezing and coughing, unaffected by environmental airflows, were estimated by calculations using the obtained equations to be about 84 cm and 30 cm from the mouth, respectively, both achieved in about 0.2 s, suggesting that data relating to the dynamics of sneeze and cough became available by calculation. PMID:24312206

  12. A Raman Spectroscopy and High-Speed Video Experimental Study: The Effect of Pressure on the Solid-Liquid Transformation Kinetics of N-octane

    NASA Astrophysics Data System (ADS)

    Liu, C.; Wang, D.

    2015-12-01

    Phase transitions of minerals and rocks in the interior of the Earth, especially at elevated pressures and temperatures, can markedly change crystal structures and state parameters, so they are very important for the physical and chemical properties of these materials. The transformation between solid and liquid is relatively common in nature, such as the melting of ice and the crystallization of minerals or water. The kinetics of these transformations can provide valuable information on the reaction rate and the reaction mechanism involving nucleation and growth. An in-situ transformation kinetic study of n-octane, which served as an example for this type of phase transition, was carried out using a hydrothermal diamond anvil cell (HDAC) and a high-speed video technique; the overall purpose of this study is to develop a comprehensive understanding of the reaction mechanism and the influence of pressure on the different transformation rates. At ambient temperature, the liquid-solid transformation of n-octane first took place with increasing pressure, and then the solid phase gradually transformed into the liquid phase when the sample was heated to a certain temperature. Upon cooling of the system, the liquid-solid transformation occurred again. The established quantitative assessments of the transformation rates, pressure, and temperature showed a negative pressure dependence of the solid-liquid transformation rate; however, the elevation of pressure accelerated the liquid-solid transformation rate. Based on the calculated activation energy values, an interfacial reaction and diffusion dominated the solid-liquid transformation, but the liquid-solid transformation was mainly controlled by diffusion. This experimental technique is a powerful and effective tool for the transformation kinetics study of n-octane, and the obtained results are of great significance to the kinetics study

  13. A new methodology for studying dynamics of aerosol particles in sneeze and cough using a digital high-vision, high-speed video system and vector analyses.

    PubMed

    Nishimura, Hidekazu; Sakata, Soichiro; Kaga, Akikazu

    2013-01-01

    Microbial pathogens of respiratory infectious diseases are often transmitted through particles in sneeze and cough. Therefore, understanding particle movement is important for infection control. Images of a sneeze induced by nasal cavity stimulation in healthy adult volunteers were taken by a digital high-vision, high-speed video system equipped with a computer system and treated as a research model. The obtained images were enhanced electronically, converted to digital images every 1/300 s, and subjected to vector analysis of the bioparticles contained in the whole sneeze cloud using automatic image processing software. The initial velocity of the particles or their clusters in the sneeze was greater than 6 m/s, but decreased as the particles moved forward; the momentum of the particles seemed to be lost by 0.15-0.20 s, after which they started a diffusion movement. An approximate equation for their velocity as a function of elapsed time was obtained from the vector analysis to represent the dynamics of the front-line particles. This methodology was also applied to a cough. Microclouds contained in smoke exhaled with a voluntary cough by a volunteer after smoking one breath of cigarette were traced as visible, aerodynamic surrogates for the invisible bioparticles of cough. The smoke cough microclouds had an initial velocity greater than 5 m/s. The fastest microclouds were located at the forefront of the cloud mass moving forward; however, their velocity clearly decreased after 0.05 s and they began to diffuse in the environmental airflow. The maximum direct reaches of the particles and microclouds driven by sneezing and coughing, unaffected by environmental airflows, were estimated by calculations using the obtained equations to be about 84 cm and 30 cm from the mouth, respectively, both achieved in about 0.2 s, suggesting that data relating to the dynamics of sneeze and cough became available by calculation.

  14. Lori Losey - The Woman Behind the Video Camera

    NASA Image and Video Library

    The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

  15. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…

  17. Ultra-high-speed bionanoscope for cell and microbe imaging

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Vo Le, Cuong; Kawano, Hiroyuki; Ishikawa, Ikuko; Miyawaki, Atshushi; Dao, Vu T. S.; Nguyen, Hoang Dung; Yokoi, Sayoko; Yoshida, Shigeru; Nakano, Hitoshi; Takehara, Kohsei; Saito, Yoshiharu

    2008-11-01

    We are developing an ultra-high-sensitivity, ultra-high-speed imaging system for bioscience, mainly for imaging of microbes with visible light and of cells with fluorescence emission. Scarcity of photons is the most serious problem in scientific applications of high-speed imaging. To overcome this problem, the system integrates new technologies consisting of (1) an ultra-high-speed video camera with sub-ten-photon sensitivity and a frame rate of more than 1 mega-frames per second, (2) a microscope with highly efficient use of light, applicable to various unstained and fluorescence cell observations, and (3) very powerful long-pulse-strobe Xenon lights and lasers for microscopes. Various auxiliary technologies to support utilization of the system are also being developed. One example is an efficient video trigger system, which detects the weak signal of a sudden change in a frame during ultra-high-speed imaging by canceling the high-frequency fluctuation of the illumination light. This paper outlines the system and presents preliminary evaluation results.

  18. High Speed Photometry for BUSCA

    NASA Astrophysics Data System (ADS)

    Cordes, O.; Reif, K.

    The camera BUSCA (Bonn University Simultaneous CAmera) has been a standard instrument at the 2.2 m telescope of the Calar Alto Observatory (Spain) since 2001. At the moment, some modifications of BUSCA are planned and partially realised. One major goal is the replacement of the old thick CCDs in the blue, yellow-green, and near-infrared channels; the newer CCDs have better cosmetics and higher sensitivity. The other goal is to replace the old "Heidelberg"-style controller with a newly designed controller focused on high-speed readout and an advanced windowing mechanism. We present a theoretical analysis of the new controller design and its advantages for high-speed photometry of rapidly pulsating stars. As an example, PG1605+072 was chosen, which had previously been observed with BUSCA in 2001 and 2002.

  19. Correction of spatially varying image and video motion blur using a hybrid camera.

    PubMed

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera coupled with an auxiliary low-resolution camera that shares the same optical path but captures at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.
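    The deconvolution component of such hybrid approaches can be illustrated with the standard Richardson-Lucy algorithm, shown here in 1-D for brevity. This is a generic textbook step, not the authors' combined deconvolution/super-resolution method, and the blur kernel is assumed known (in a hybrid camera it would be estimated from the sharp, high-frame-rate auxiliary stream):

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=200):
    """Generic Richardson-Lucy deconvolution in 1-D.
    blurred: observed signal; psf: odd-length, normalized blur kernel."""
    est = np.full_like(blurred, blurred.mean())  # flat, positive start
    psf_flipped = psf[::-1]                      # adjoint of convolution
    for _ in range(iterations):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)  # data mismatch
        est = est * np.convolve(ratio, psf_flipped, mode="same")
    return est
```

The multiplicative update keeps the estimate non-negative and, on noise-free data, progressively re-concentrates energy that the blur spread out, so the result is closer to the sharp signal than the blurred observation.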

  20. Prompting Spontaneity by Means of the Video Camera in the Beginning Foreign Language Class.

    ERIC Educational Resources Information Center

    Pelletier, Raymond J.

    1990-01-01

    Describes four techniques for using a video camera to generate higher levels of student interest, involvement, and productivity in beginning foreign language courses. The techniques include spontaneous discussion of video images, enhancement of students' use of interrogative pronouns and phrases, grammar instruction, and student-produced skits.…

  1. Prompting Spontaneity by Means of the Video Camera in the Beginning Foreign Language Class.

    ERIC Educational Resources Information Center

    Pelletier, Raymond J.

    1990-01-01

    Describes four techniques for using a video camera to generate higher levels of student interest, involvement, and productivity in beginning foreign language courses. The techniques include spontaneous discussion of video images, enhancement of students' use of interrogative pronouns and phrases, grammar instruction, and student-produced skits.…

  2. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    ERIC Educational Resources Information Center

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  3. Flexible Fiber-Optic High-Speed Imaging of Vocal Fold Vibration: A Preliminary Report.

    PubMed

    Woo, Peak; Baxter, Peter

    2017-03-01

    High-speed video (HSV) imaging of vocal fold vibration has been possible only through the rigid endoscope. This study reports that a fiberscope-based high-speed imaging system may allow HSV imaging of naturalistic voicing. Twenty-two subjects were recorded using a commercially available black-and-white high-speed camera (Photron Motion Tools, 256 × 120 pixels, 2000 frames per second, 8-second acquisition time). The camera gain was set to +6 dB. The camera was coupled to a standard fiber-optic laryngoscope (Olympus ENF P-4) with a 300-W Xenon light. Image acquisition was done by asking the subject to perform repeated modal phonation. Video images were processed using commercial video editing and video noise-reduction software (After Effects, Magix, and Neat Video 4.1). After video processing, the video images were analyzed using digital kymography (DKG). The black-and-white HSV acquired by the camera is gray and lacks contrast. By adjusting image contrast, brightness, and gamma, and by using noise-reduction software, the flexible laryngoscopy images can be converted to video image files suitable for DKG and waveform analysis. The increased noise still makes edge tracking for objective analysis difficult, but subjective analysis of the DKG plot is possible. This is the first report of HSV acquisition in an unsedated patient using a fiberscope. Image enhancement and noise reduction can enhance the HSV to allow extraction of the digital kymogram. Further image enhancement may allow for objective analysis of the vibratory waveform. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  4. Digital video technology and production 101: lights, camera, action.

    PubMed

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion.

  5. High-speed imaging of explosive eruptions: applications and perspectives

    NASA Astrophysics Data System (ADS)

    Taddeucci, Jacopo; Scarlato, Piergiorgio; Gaudin, Damien; Capponi, Antonio; Alatorre-Ibarguengoitia, Miguel-Angel; Moroni, Monica

    2013-04-01

    Explosive eruptions, being by definition highly dynamic over short time scales, necessarily call for observational systems capable of relatively high sampling rates. "Traditional" tools, such as seismic and acoustic networks, have recently been joined by Doppler radar and electric sensors. Recent developments in high-speed camera systems now allow direct visual information on eruptions to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Here we summarize the methods employed to gather and process high-speed videos of explosive eruptions, and provide an overview of the several applications of this new type of data in understanding different aspects of explosive volcanism. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS: FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480-120x640 pixel resolution; and 3) two acoustic to infrasonic sensors. All instruments are time-synchronized via a data logging system, a hand- or software-operated trigger, and GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded in high speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafjallajökull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) pyroclast ejection. High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian

  6. High-Speed Scanning for the Quantitative Evaluation of Glycogen Concentration in Bioethanol Feedstock Synechocystis sp. PCC6803 Using a Near-Infrared Hyperspectral Imaging System with a New Near-Infrared Spectral Camera.

    PubMed

    Ishigaki, Mika; Nakanishi, Akihito; Hasunuma, Tomohisa; Kondo, Akihiko; Morishima, Tetsu; Okuno, Toshiaki; Ozaki, Yukihiro

    2017-03-01

    In the present study, high-speed quantitative evaluation of the glycogen concentration accumulated in the bioethanol feedstock Synechocystis sp. PCC6803 was performed using a near-infrared (NIR) imaging system with a hyperspectral NIR spectral camera named Compovision. The NIR imaging system features high-speed, wide-area monitoring, with a two-dimensional scanning speed almost 100 times faster than that of general NIR imaging systems for the same pixel size. For the quantitative analysis of glycogen concentration, partial least squares regression (PLSR) and moving-window PLSR (MWPLSR) were performed using glycogen concentrations measured by high-performance liquid chromatography (HPLC), and calibration curves for the concentration within the Synechocystis sp. PCC6803 cells were constructed. The results showed high accuracy for the quantitative estimation of glycogen concentration, with the best squared correlation coefficient R² greater than 0.99 and a root mean square error (RMSE) less than 2.9%. These results demonstrate not only the applicability of NIR spectroscopy to the high-speed quantitative evaluation of glycogen concentration in bioethanol feedstock but also the extensibility of the NIR imaging instrument to in-line or on-line product evaluation on a factory bioethanol production line in the future.

  7. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  8. Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing

    ERIC Educational Resources Information Center

    Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson

    2008-01-01

    Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…

  9. Lights! Camera! Action! Handling Your First Video Assignment.

    ERIC Educational Resources Information Center

    Thomas, Marjorie Bekaert

    1989-01-01

    The author discusses points to consider when hiring and working with a video production company to develop a video for human resources purposes. Questions to ask the consultants are included, as is information on the role of the company liaison and on how to avoid expensive, time-wasting pitfalls. (CH)

  10. High speed handpieces

    PubMed Central

    Bhandary, Nayan; Desai, Asavari; Shetty, Y Bharath

    2014-01-01

    High-speed instruments are versatile instruments used by clinicians in all specialties of dentistry. It is important for clinicians to understand the types of high-speed handpieces available and their working mechanisms. The Centers for Disease Control and Prevention have issued guidelines time and again for the disinfection and sterilization of high-speed handpieces. This article presents recent developments in the design of high-speed handpieces. With a view to preventing hospital-associated infections, significant importance has been given to the disinfection, sterilization, and maintenance of high-speed handpieces. How to cite the article: Bhandary N, Desai A, Shetty YB. High speed handpieces. J Int Oral Health 2014;6(1):130-2. PMID:24653618

  11. Turn up the lights: Deep-sea in situ application of a high-speed, high-resolution sCMOS camera to observe marine bioluminescence

    NASA Astrophysics Data System (ADS)

    Phillips, B. T.; Gruber, D. F.; Sparks, J. S.; Vasan, G.; Roman, C.; Pieribone, V. A.

    2016-02-01

    Observing and measuring marine bioluminescence presents unique challenges in situ. Technology is the greatest limiting factor in this endeavor, with sensitivity, speed, and resolution constraining the imaging tools available to researchers. State-of-the-art microscopy cameras offer to bridge this gap. An ultra-low-light, scientific complementary-metal-oxide-semiconductor (sCMOS) camera was outfitted for in situ imaging of marine bioluminescence. This system was deployed on multiple deep-sea platforms (manned submersible, remotely operated vehicle, and towed body) in three oceanic regions (Western Tropical Pacific, Eastern Equatorial Pacific, and Northwestern Atlantic) to depths of up to 2500 m. Using light stimulation, bioluminescent responses were recorded at high frame rates and in high resolution, offering unprecedented low-light imagery of deep-sea bioluminescence in situ. The kinematics and physiology of light production in several zooplankton groups are presented, and luminescent responses at different depths are quantified as intensity vs. time.

  12. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages.

    PubMed

    Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan

    2015-04-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.

  13. Kinematic Measurements of the Vocal-Fold Displacement Waveform in Typical Children and Adult Populations: Quantification of High-Speed Endoscopic Videos

    ERIC Educational Resources Information Center

    Patel, Rita; Donohue, Kevin D.; Unnikrishnan, Harikrishnan; Kryscio, Richard J.

    2015-01-01

    Purpose: This article presents a quantitative method for assessing instantaneous and average lateral vocal-fold motion from high-speed digital imaging, with a focus on developmental changes in vocal-fold kinematics during childhood. Method: Vocal-fold vibrations were analyzed for 28 children (aged 5-11 years) and 28 adults (aged 21-45 years)…

  15. Application of high-speed videography in sports analysis

    NASA Astrophysics Data System (ADS)

    Smith, Sarah L.

    1993-01-01

    The goal of sport biomechanists is to provide information to coaches and athletes about sport skill technique that will assist them in obtaining the highest levels of athletic performance. Within this technique evaluation process, two methodological approaches can be taken to study human movement. One method describes the motion being performed; the second approach focuses on understanding the forces causing the motion. It is with the movement description method that video image recordings offer a means for athletes, coaches, and sport biomechanists to analyze sport performance. Staff members of the Technique Evaluation Program provide video recordings of sport performance to athletes and coaches during training sessions held at the Olympic Training Center in Colorado Springs, Colorado. These video records are taken to provide a means for the qualitative evaluation or the quantitative analysis of sport skills as performed by elite athletes. High-speed video equipment (NAC HVRB-200 and NAC HSV-400 Video Systems) is used to capture various sport movement sequences that will permit coaches, athletes, and sport biomechanists to evaluate and/or analyze sport performance. The PEAK Performance Motion Measurement System allows sport biomechanists to measure selected mechanical variables appropriate to the sport being analyzed. Use of two high-speed cameras allows for three-dimensional analysis of the sport skill or the ability to capture images of an athlete's motion from two different perspectives. The simultaneous collection and synchronization of force data provides for a more comprehensive analysis and understanding of a particular sport skill. This process of combining force data with motion sequences has been done extensively with cycling. The decision to use high-speed videography rather than normal speed video is based upon the same criteria that are used in other settings. 
The rapidity of the sport movement sequence and the need to see the location of body parts

  16. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages

    PubMed Central

    Ross, William N.; Miyazaki, Kenichi; Popovic, Marko A.; Zecevic, Dejan

    2014-01-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events. PMID:26157996

  17. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells

    NASA Astrophysics Data System (ADS)

    English, Brian P.; Singer, Robert H.

    2015-08-01

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics.

  18. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells.

    PubMed

    English, Brian P; Singer, Robert H

    2015-08-21

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics.

  19. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells

    PubMed Central

    English, Brian P.; Singer, Robert H.

    2016-01-01

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics. PMID:26819489

  20. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.
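The hand-off mechanism discussed above can be illustrated with a toy utility-based selector; the scoring function and hysteresis margin here are hypothetical, not any of the geometry-, statistics-, or game-theory-based methods the chapter actually compares:

```python
# Hypothetical utility-based camera selection with hysteresis: each camera
# scores each object (e.g., by view quality), and the object is handed off
# only when another camera beats the current one by a margin, preventing
# rapid back-and-forth switching. The margin value is an assumption.

HANDOFF_MARGIN = 0.1

def select_camera(scores, current):
    """scores: {camera_id: view-quality score}; current: assigned camera or None."""
    best = max(scores, key=scores.get)
    if current is None or current not in scores:
        return best
    if scores[best] > scores[current] + HANDOFF_MARGIN:
        return best   # hand off to the better camera
    return current    # hysteresis: keep the current camera

# An object drifting from camera A's field of view into camera B's
assignment, history = None, []
for score_a, score_b in [(0.9, 0.1), (0.7, 0.6), (0.5, 0.8), (0.2, 0.9)]:
    assignment = select_camera({"A": score_a, "B": score_b}, assignment)
    history.append(assignment)
print(history)  # → ['A', 'A', 'B', 'B']
```

Note the second step: B briefly scores close to A, but the margin prevents a premature hand-off, which is the behavior a seamless-tracking system needs.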

  1. Observations of in situ deep-sea marine bioluminescence with a high-speed, high-resolution sCMOS camera

    NASA Astrophysics Data System (ADS)

    Phillips, Brennan T.; Gruber, David F.; Vasan, Ganesh; Roman, Christopher N.; Pieribone, Vincent A.; Sparks, John S.

    2016-05-01

    Observing and measuring marine bioluminescence in situ presents unique challenges, characterized by the difficult task of approaching and imaging weakly illuminated bodies in a three-dimensional environment. To address this problem, a scientific complementary-metal-oxide-semiconductor (sCMOS) microscopy camera was outfitted for deep-sea imaging of marine bioluminescence. This system was deployed on multiple platforms (manned submersible, remotely operated vehicle, and towed body) in three oceanic regions (Western Tropical Pacific, Eastern Equatorial Pacific, and Northwestern Atlantic) to depths of up to 2500 m. Using light stimulation, bioluminescent responses were recorded at high frame rates and in high resolution, offering unprecedented low-light imagery of deep-sea bioluminescence in situ. The kinematics of light production in several zooplankton groups were observed, and luminescent responses at different depths were quantified as intensity vs. time. These initial results signify a clear advancement in the bioluminescent imaging methods available for observation and experimentation in the deep sea.
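In its simplest form, the "intensity vs. time" quantification described here reduces to a background-subtracted mean luminance per frame; the synthetic flash below is illustrative, not real sCMOS data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stack standing in for high-frame-rate sCMOS footage: 100 frames of
# 64x64 pixels, with a decaying bioluminescent flash starting at frame 20.
n_frames, h, w = 100, 64, 64
t = np.arange(n_frames)
flash = np.where(t >= 20, 200.0 * np.exp(-(t - 20) / 15.0), 0.0)
frames = rng.poisson(5.0, (n_frames, h, w)).astype(float)
frames[:, 24:40, 24:40] += flash[:, None, None]

# Intensity vs. time: background-subtracted mean luminance per frame,
# with the background level estimated from pre-stimulus frames.
background = frames[:15].mean()
intensity = frames.reshape(n_frames, -1).mean(axis=1) - background

peak_frame = int(intensity.argmax())
print(peak_frame)  # → 20 (flash onset)
```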

  2. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  3. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  4. High-speed Digital Color Imaging Pyrometry

    DTIC Science & Technology

    2011-08-01

    and environment of the events. To overcome these challenges, we have characterized and calibrated a digital high-speed color camera that may be...correction) to determine their effect on the calculated temperature. Using this technique with a Phantom color camera, we measured the temperature of...constant value of approximately 1980 K.
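The ratio pyrometry underlying such a color-camera calibration can be sketched under the Wien approximation: for a graybody, emissivity cancels in the ratio of two color channels, so temperature follows from the channel intensity ratio alone. The channel wavelengths below are assumed for illustration and are not taken from the report:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp):
    """Graybody spectral intensity under the Wien approximation (arb. units)."""
    return lam ** -5 * np.exp(-C2 / (lam * temp))

def ratio_temperature(i1, i2, lam1, lam2):
    """Two-color ratio pyrometry: invert Wien's law for temperature from the
    intensity ratio of two channels; emissivity cancels for a graybody."""
    return C2 * (1 / lam1 - 1 / lam2) / (5 * np.log(lam2 / lam1) - np.log(i1 / i2))

lam_r, lam_g = 620e-9, 540e-9  # assumed effective red/green channel wavelengths
i_r = wien_intensity(lam_r, 1980.0)  # 1980 K: the temperature quoted above
i_g = wien_intensity(lam_g, 1980.0)
print(round(ratio_temperature(i_r, i_g, lam_r, lam_g)))  # → 1980
```

In practice the channel intensities come from the demosaiced color planes after the spectral-response and color-correction characterization the report describes.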

  5. BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    NASA Technical Reports Server (NTRS)

    Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

  6. Automatic detection of camera translation in eye video recordings using multiple methods.

    PubMed

    Karmali, Faisal; Shelhamer, Mark

    2005-04-01

    A concern with video eye movement tracking is that movement of the camera headset relative to the head creates an artifact of eye movement in pupil-detection software. We describe the development of, and compare the results of, three automatic image processing algorithms to measure camera movement. The best of the algorithms has an average accuracy of 1.3 pixels, equivalent to 0.49 deg with our eye tracking system.
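One plausible image-processing algorithm for measuring such camera movement (a sketch, not the authors' implementation) is FFT-based phase correlation, which recovers the integer pixel translation between two frames:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation between two grayscale frames
    with FFT-based phase correlation: the normalized cross-power spectrum
    inverse-transforms to a delta function at the translation offset."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = ref.shape
    # Wrap shifts larger than half the frame into negative offsets
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic eye image shifted by a known "camera translation"
rng = np.random.default_rng(2)
frame = rng.random((128, 128))
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(estimate_shift(frame, shifted))  # → (3, -5)
```

Subpixel accuracy of the kind reported (about 1.3 pixels on average) would require interpolating around the correlation peak rather than taking the integer argmax.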

  7. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    PubMed

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-01-17

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  8. High Speed Research Program

    NASA Technical Reports Server (NTRS)

    Anderson, Robert E.; Corsiglia, Victor R.; Schmitz, Frederic H. (Technical Monitor)

    1994-01-01

    An overview of the NASA High Speed Research Program will be presented from a NASA Headquarters perspective. The presentation will include the objectives of the program and an outline of major programmatic issues.

  9. High-Speed Photography

    SciTech Connect

    Paisley, D.L.; Schelev, M.Y.

    1998-08-01

    The applications of high-speed photography to a diverse set of subjects including inertial confinement fusion, laser surgical procedures, communications, automotive airbags, lightning, etc., are briefly discussed. (AIP) © 1998 Society of Photo-Optical Instrumentation Engineers.

  10. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  11. Proceedings of the International Congress on High-Speed Photography (9th) held at Denver, Colorado on August 2-7 1970,

    DTIC Science & Technology

    Subject terms: symposia; television equipment; motion picture photography; high-speed photography; image converters; high-speed cameras; lighting equipment; X-ray photography; stereophotography.

  12. Lights! Camera! Action!: video projects in the classroom.

    PubMed

    Epstein, Carol Diane; Hovancsek, Marcella T; Dolan, Pamela L; Durner, Erin; La Rocco, Nicole; Preiszig, Patricia; Winnen, Caitlin

    2003-12-01

    We report on two classroom video projects intended to promote active student involvement in the classroom experience during a year-long medical-surgical nursing course. We implemented two types of projects, Nursing Grand Rounds and FPBTV. The projects are templates that can be applied to any nursing specialty and can even be implemented without the use of video technology. Over the course of several years, both projects have proven effective in helping students recognize the characteristic features of common illnesses, develop teamwork strategies, and practice their presentation skills in a safe environment among their peers. The projects appealed to students because they increased retention of information and immersed students in the experience of becoming experts on an illness or a family of medications. These projects have enabled students to become engaged and invested in their own learning in the classroom.

  13. Duque and Kaleri in Zvezda Service module with video camera

    NASA Image and Video Library

    2003-10-23

    ISS007-E-17842 (23 October 2003) --- European Space Agency (ESA) astronaut Pedro Duque (left) of Spain and cosmonaut Alexander Y. Kaleri, Expedition 8 flight engineer representing Rosaviakosmos, work with a scientific experiment in the Zvezda Service Module on the International Space Station (ISS). Duque and Kaleri performed the European educational VIDEO-2 (VID-01) experiment, which uses the Russian DSR PD-150P digital video camcorder for recording demos of several basic physical phenomena, viz., Isaac Newton's three motion laws, with narration. [The demo made use of a sealed bag containing coffee and a syringe to fill one of two hollow balls with the brown liquid (to provide "mass", as opposed to the other, "mass-less" ball).]

  14. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image, and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
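The "consistent projection transformation" test at the heart of this claim can be sketched as counting correspondences whose reprojection error under a candidate pose is small; the intrinsics and pixel tolerance below are assumed values, not from the patent:

```python
import numpy as np

# Sketch: count 2D-3D correspondences consistent with a candidate camera pose
# under a pinhole projection. K and the 2-pixel tolerance are assumptions.

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points3d, R, t):
    """Pinhole projection of 3D model points under pose (R, t)."""
    cam = points3d @ R.T + t        # world -> camera coordinates
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

def count_inliers(points3d, points2d, R, t, tol=2.0):
    """Number of correspondences consistent with the pose within tol pixels."""
    err = np.linalg.norm(project(points3d, R, t) - points2d, axis=1)
    return int((err < tol).sum())

rng = np.random.default_rng(3)
model = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], (50, 3))  # in front of camera
R_true, t_true = np.eye(3), np.zeros(3)
observed = project(model, R_true, t_true)
observed[:10] += 50.0               # corrupt 10 correspondences
print(count_inliers(model, observed, R_true, t_true))  # → 40
```

Selecting the feature subset and, for ambiguous features, the model point that together maximize this inlier count is the combinatorial search the abstract describes.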

  15. Development of low-noise high-speed analog ASIC for X-ray CCD cameras and wide-band X-ray imaging sensors

    NASA Astrophysics Data System (ADS)

    Nakajima, Hiroshi; Hirose, Shin-nosuke; Imatani, Ritsuko; Nagino, Ryo; Anabuki, Naohisa; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Kitamura, Hisashi; Uchihori, Yukio

    2016-09-01

    We report on the development and performance evaluation of a mixed-signal Application Specific Integrated Circuit (ASIC) developed for the signal processing of onboard X-ray CCD cameras and various types of X-ray imaging sensors in astrophysics. Quick, low-noise readout is essential for pile-up-free imaging spectroscopy with a future X-ray telescope. Our goal is a readout noise of 5 e- r.m.s. at a pixel rate of 1 Mpix/s, which is about 10 times faster than those of currently operating detectors. We successfully developed a low-noise ASIC as the front-end electronics of the Soft X-ray Imager onboard Hitomi, launched on February 17, 2016. However, it has two analog-to-digital converters per chain due to its limited processing speed, and hence the gain difference must be corrected to obtain X-ray spectra. Furthermore, its input-equivalent noise is not satisfactory (> 100 μV) at pixel rates higher than 500 kpix/s. We therefore upgraded the design of the ASIC with fourth-order ΔΣ modulators to enhance its inherent noise-shaping performance. Its performance was measured using pseudo CCD signals at variable processing speeds. Although its input-equivalent noise is comparable with that of the conventional design, the integrated non-linearity (0.1%) improves to about half that of the conventional design. The radiation tolerance was also measured with regard to the total ionizing dose effect and single-event latch-up, using protons and xenon ions, respectively. The former experiment shows that none of the performance parameters change after a dose corresponding to 590 years in a low Earth orbit. We also place an upper limit on the latch-up frequency of once per 48 years.

  16. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Ridge 18-20deg.S, where hydrothermal plume signatures were previously perceived. DIDSON was mounted on top of Shinkai6500 in order to obtain acoustic video images of hydrothermal plumes. In this cruise, seven dives of Shinkai6500 were conducted, and acoustic video images of the hydrothermal plumes were captured in three of them. These are among the very few acoustic video images of hydrothermal plumes obtained to date. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

  17. Cost-effective multi-camera array for high quality video with very high dynamic range

    NASA Astrophysics Data System (ADS)

    Keinert, Joachim; Wetzel, Marcus; Schöberl, Michael; Schäfer, Peter; Zilly, Frederik; Bätz, Michel; Fößel, Siegfried; Kaup, André

    2014-03-01

    Temporal bracketing can create images with a higher dynamic range than the underlying sensor. Unfortunately, moving objects cause disturbing artifacts. Moreover, the combination with high frame rates is almost unachievable, since a single video frame requires multiple sensor readouts. The combination of multiple synchronized side-by-side cameras equipped with different attenuation filters promises a remedy, since all exposures can be performed at the same time with the same duration using the playout video frame rate. However, a disparity correction is needed to compensate for the spatial displacement of the cameras. Unfortunately, the requirements for a high-quality disparity correction contradict the goal of increasing dynamic range. When using two cameras, disparity correction needs objects to be properly exposed in both cameras. In contrast, a dynamic range increase needs the cameras to capture different luminance ranges. As this contradiction has not been addressed in the literature so far, this paper proposes a novel solution based on a three-camera setup. It enables accurate determination of the disparities and an increase of the dynamic range by nearly a factor of two while still limiting costs. Compared to a two-camera solution, the mean opinion score (MOS) improves by 13.47 units on average for the Middlebury images.

  18. High-Speed Observer: Automated Streak Detection for the Aerospike Engine

    NASA Technical Reports Server (NTRS)

    Rieckhoff, T. J.; Covan, M. A.; OFarrell, J. M.

    2001-01-01

    A high-frame-rate digital video camera, installed on test stands at Stennis Space Center (SSC), has been used to capture images of the aerospike engine plume during test. These plume images are processed in real time to detect and differentiate anomalous plume events. Results indicate that the High-Speed Observer (HSO) system can detect anomalous plume streaking events that are indicative of aerospike engine malfunction.

  19. Ground-based remote sensing with long lens video camera for upper-stem diameter and other tree crown measurements

    Treesearch

    Neil A. Clark; Sang-Mook Lee

    2004-01-01

    This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...

  20. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection, and automatic parking lot control. In order to achieve the highest levels of detection accuracy, these cameras must have high-speed electronic shutters, high resolution, a high frame rate, and communication capabilities. A progressive-scan interline-transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization are implemented to control the cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed, and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.

  1. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    ERIC Educational Resources Information Center

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
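
    The spinning-wheel illusion follows directly from aliasing: any rotation frequency above half the frame rate folds back into the Nyquist band and is perceived as a slower, possibly reversed, rotation. A small sketch of the folding rule (the function name is ours):

```python
def apparent_rate(true_hz, sample_hz):
    # Fold the true frequency into the Nyquist band [-fs/2, fs/2):
    # this is the rotation rate the sampled video appears to show.
    return (true_hz + sample_hz / 2) % sample_hz - sample_hz / 2
```

    For example, a wheel spinning at 29 Hz filmed at 30 frames per second appears to rotate backwards at 1 Hz, and at exactly 30 Hz it appears frozen.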

  3. Meteor velocity distribution from CILBO double station video camera data

    NASA Astrophysics Data System (ADS)

    Drolshagen, Esther; Ott, Theresa; Koschny, Detlef; Drolshagen, Gerhard; Poppe, Bjoern

    2014-02-01

    This paper is based on data from the double-station meteor camera setup on the Canary Islands - CILBO. The data have been collected from July 2011 until August 2014. CILBO meteor data from one year (1 June 2013 - 31 May 2014) were used to analyze the velocity distribution of sporadic meteors and to compare it to a reference distribution for near-Earth space. The velocity distribution at 1 AU outside the influence of Earth, derived from the Harvard Radio Meteor Project (HRMP), was used as a reference. This HRMP distribution was converted to an altitude of 100 km by considering the gravitational attraction of Earth. The resulting theoretical velocity distribution for a fixed meteoroid mass ranges from 11 to 71 km/s and peaks at 12.5 km/s. This represents the predicted velocity distribution. The velocity distribution of the meteors detected simultaneously by both cameras of the CILBO system was then examined. The meteors are sorted by their stream association, and the velocity distribution of the sporadics in particular is studied closely. The derived sporadic velocity distribution has a maximum at 64 km/s. This drastic difference from the theoretical curve confirms that fast meteors are usually greatly over-represented in optical and radar measurements of meteors. The majority of the fast sporadics are apparently caused by the Apex contribution in the early morning hours. This paper presents first results of the ongoing analysis of the meteor velocity distribution.
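
    The conversion of the HRMP distribution to 100 km altitude follows from energy conservation: Earth's gravity accelerates a meteoroid from its velocity far from Earth, v_inf, to sqrt(v_inf^2 + v_esc^2) at the top of the atmosphere. A sketch with standard constants (the function name is ours); letting v_inf go to zero reproduces the 11 km/s lower bound of the converted distribution:

```python
import math

GM_EARTH = 3.986004418e14         # Earth's gravitational parameter, m^3/s^2
R_100KM = (6371.0 + 100.0) * 1e3  # geocentric radius at 100 km altitude, m

def v_at_100km(v_inf_kms):
    # Energy conservation: v^2 = v_inf^2 + v_esc^2 at radius r,
    # with v_esc = sqrt(2 GM / r) ~ 11.1 km/s at 100 km altitude.
    v_esc = math.sqrt(2.0 * GM_EARTH / R_100KM)
    return math.sqrt((v_inf_kms * 1e3) ** 2 + v_esc ** 2) / 1e3
```

    A meteoroid arriving with v_inf near the heliocentric maximum of about 70 km/s reaches roughly 71 km/s at 100 km, matching the upper end of the converted distribution.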

  4. Video content analysis on body-worn cameras for retrospective investigation

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  5. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    SciTech Connect

    Castleberry, J.L.

    1994-09-19

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock, which shuts down all the electronics inside the 101-AW vapor space during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak-checking all fittings on the 101-AW Purge Panel with a Snoop solution and resolving any leakage found. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition, the green light (PRESSURIZED), located on the Purge Control Panel, is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, that the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, and associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss-of-purge operation, the illumination of the amber light (PURGE FAILED) will be verified.

  6. Field-based high-speed imaging of explosive eruptions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Scarlato, P.; Freda, C.; Moroni, M.

    2012-12-01

    Explosive eruptions involve, by definition, physical processes that are highly dynamic over short time scales. Capturing and parameterizing such processes is a major task in eruption understanding and forecasting, and a task that necessarily requires observational systems capable of high sampling rates. Seismic and acoustic networks are a prime tool for high-frequency observation of eruptions, recently joined by Doppler radar and electric sensors. Compared with these monitoring systems, imaging techniques provide more complete and direct information on surface processes, but usually at a lower sampling rate. However, recent developments in high-speed imaging systems now allow such information to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS: FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480x640 to 120x640 pixel resolution; and 3) two acoustic-to-infrasonic sensors. All instruments are time-synchronized via a data-logging system, a hand- or software-operated trigger, and GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded at high speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafjallajökull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) pyroclast ejection.

  7. Moving camera moving object segmentation in an MPEG-2 compressed video sequence

    NASA Astrophysics Data System (ADS)

    Wang, Jinsong; Patel, Nilesh; Grosky, William

    2006-01-01

    In this paper, we address the problem of camera and object motion detection in the compressed domain. The estimation of camera motion and the segmentation of moving objects have been widely studied in a variety of video-analysis contexts, because they provide essential clues for interpreting the high-level semantic meaning of video sequences. A novel compressed-domain motion estimation and segmentation scheme is presented and applied in this paper. The proposed algorithm uses MPEG-2 compressed motion vectors, which undergo spatial and temporal interpolation over several adjacent frames. An iterative rejection scheme based upon the affine model is exploited to detect global camera motion. The foreground spatiotemporal objects are then separated from the background by applying a temporal consistency check to the output of the iterative segmentation. This consistency check helps consolidate the resulting foreground blocks and weed out unqualified blocks. Illustrative examples are provided to demonstrate the efficacy of the proposed approach.
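
    The iterative rejection step, fitting an affine global-motion model to the block motion vectors and discarding blocks that disagree with it, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the mean-based rejection threshold is a simplification.

```python
import numpy as np

def global_motion(pos, mv, iters=4):
    # pos: Nx2 block centers, mv: Nx2 motion vectors (e.g. from MPEG-2).
    # Fit mv ~ [x, y, 1] @ theta (a 6-parameter affine model) and
    # iteratively reject blocks that fit worse than average; the
    # rejected blocks are candidate foreground-object motion.
    A = np.hstack([pos, np.ones((len(pos), 1))])
    keep = np.ones(len(pos), bool)
    theta = None
    for _ in range(iters):
        theta, *_ = np.linalg.lstsq(A[keep], mv[keep], rcond=None)
        err = np.linalg.norm(A @ theta - mv, axis=1)
        keep = err <= max(err.mean(), 1e-9)
    return theta, keep  # affine parameters, background-block mask
```

    Blocks rejected after the final iteration would then be passed to the temporal consistency check described in the abstract.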

  8. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al.1 In that work, they analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. We thereby identify approaches that can benefit from fused sensor signals under camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.

  9. High-Speed Videography on HBT-EP

    NASA Astrophysics Data System (ADS)

    Angelini, Sarah M.

    In this thesis, I present measurements from a high-speed video camera diagnostic on the High Beta Tokamak -- Extended Pulse (HBT-EP). This work represents the first use of video data to analyze and understand the behavior of long-wavelength kink perturbations in a wall-stabilized tokamak. A Phantom v7.3 camera was installed to capture the plasma's global behavior using visible light emissions; it operates at frame rates from 63 to 125 kfps. A USB2000 spectrometer was used to identify the dominant wavelength of light emitted in HBT-EP. At 656 nm, it is consistent with the D-alpha light expected from interactions between neutral deuterium and plasma electrons. The fast camera, in combination with an Acktar vacuum-black background, produced images which were inverted using Abel techniques to determine the average radial emissivity profiles. These profiles were found to be hollow, with a radial scale length of approximately 4 cm at the plasma edge. As a result, the behavior measured and analyzed using visible-light videography is limited to the edge region. Using difference subtraction, biorthogonal decomposition, and Fourier analysis, the structures of the observed edge fluctuations are computed. By comparing forward-modelling results to measurements, the plasma is found to have an m/n = 3/1 helical shape that rotates in the electron drift direction with a lab-frame frequency between 5 and 10 kHz. The fast camera was also used to measure the plasma's response to applied helical magnetic perturbations which resonate with the equilibrium magnetic field at the plasma's edge. The static plasma response to non-rotating resonant magnetic perturbations (RMPs) is measured by comparing changes in the recorded image following a fast reversal, or phase flip, of the applied RMP. The programmed toroidal angle of the RMP is directly inferred from the resulting images of the plasma response. The plasma response and the intensity of the RMP are compared under different conditions.

  10. Potential of a newly developed high-speed near-infrared (NIR) camera (Compovision) in polymer industrial analyses: monitoring crystallinity and crystal evolution of polylactic acid (PLA) and concentration of PLA in PLA/Poly-(R)-3-hydroxybutyrate (PHB) blends.

    PubMed

    Ishikawa, Daitaro; Nishii, Takashi; Mizuno, Fumiaki; Sato, Harumi; Kazarian, Sergei G; Ozaki, Yukihiro

    2013-12-01

    This study was carried out to evaluate a new high-speed hyperspectral near-infrared (NIR) camera named Compovision. Quantitative analyses of the crystallinity and crystal evolution of the biodegradable polymer polylactic acid (PLA), and of its concentration in PLA/poly-(R)-3-hydroxybutyrate (PHB) blends, were performed using NIR imaging. This NIR camera can measure two-dimensional NIR spectral data in the 1000-2350 nm region, obtaining images with a wide field of view of 150 × 250 mm(2) (approximately 100,000 pixels) at high speed (in less than 5 s). Samples of PLA with crystallinities between 0 and 50%, PLA/PHB blends in ratios of 80/20, 60/40, 40/60, and 20/80, and pure films of 100% PLA and PHB were prepared. Compovision was used to collect the respective NIR spectra in the 1000-2350 nm region and investigate the crystallinity of PLA and its concentration in the blends. Partial least squares (PLS) regression models for the crystallinity of PLA were developed using absorbance, second-derivative, and standard normal variate (SNV) spectra from the most informative region of the spectra, between 1600 and 2000 nm. The predictions of the PLS models built on the absorbance and second-derivative spectra were fairly good, with a root mean square error (RMSE) of less than 6.1% and a coefficient of determination (R(2)) of more than 0.88 for PLS factor 1. The SNV spectra yielded the best prediction, with the smallest RMSE of 2.93% and the highest R(2) of 0.976. Moreover, PLS models developed for estimating the concentration of PLA in the blend polymers using SNV spectra gave good predictions, with an RMSE of 4.94% and an R(2) of 0.98. The SNV-based models provided the best predictions because SNV reduces the effects of spectral changes induced by the inhomogeneity and thickness of the samples. Wide-area crystal evolution of PLA on a plate where a temperature slope of 70-105 °C had occurred was also
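
    The SNV preprocessing that gave the best models is a simple per-spectrum normalization; a sketch (assuming spectra are stored row-wise, one spectrum per row):

```python
import numpy as np

def snv(spectra):
    # Standard normal variate: center and scale each spectrum by its
    # own mean and standard deviation. Constant baseline offsets and
    # multiplicative path-length (thickness) factors cancel out.
    s = np.asarray(spectra, dtype=float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)
```

    Because each spectrum is scaled by its own statistics, snv(2*x + 5) equals snv(x), which is why SNV suppresses the thickness and inhomogeneity effects mentioned above.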

  11. Introducing a New High-Speed Imaging System for Measuring Raindrop Characteristics

    NASA Astrophysics Data System (ADS)

    Testik, F. Y.; Rahman, K.

    2013-12-01

    Here we present a new high-speed imaging system that we have developed for measuring rainfall microphysical quantities. This optical disdrometer system provides raindrop characteristics including drop diameter, fall velocity and acceleration, shape, and axis ratio. The main components of the system are a high-speed video camera capable of capturing 1000 frames per second, an LED light, a sensor unit to detect raindrops passing through the camera view frame, and a three-dimensional ultrasonic anemometer to measure the wind velocity. The entire imaging system is operated and synchronized using a LabView code developed in-house. In this system, the camera points at the LED light and records the silhouettes of the backlit drops. Because digital storage limitations do not allow high-speed camera systems to record continuously for more than several seconds, we utilize a sensor system that triggers the camera when a raindrop is detected within the camera view frame at the focal plane. On the trigger signal, the camera records a predefined number of frames to the built-in storage space of the camera head. The images are downloaded to a computer for processing and storage once the rain event is over or the built-in storage space is full. The anemometer data are recorded continuously to the computer. The downloaded sharp, sequential raindrop images are digitally processed using a computer code developed in-house, which outputs accurate information on various raindrop characteristics (e.g., drop diameter, shape, axis ratio, fall velocity, and drop size distribution). The new high-speed imaging system has been laboratory tested using high-precision spherical lenses with known diameters and also field tested under real rain events. The results of these tests will also be presented. This new imaging system was developed as part of a National Science Foundation grant (NSF Award # 1144846) to study raindrop characteristics and is expected to be an
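
    Extracting a drop diameter from a backlit silhouette reduces to counting silhouette pixels and converting the area to an equivalent circular diameter. An illustrative sketch, not the authors' in-house code (the function name and pixel scale are invented for the example):

```python
import math
import numpy as np

def equivalent_diameter_mm(silhouette, mm_per_px):
    # silhouette: boolean array, True where the backlit drop blocks
    # the LED light. Convert the pixel area to the diameter of the
    # circle with equal area.
    area_mm2 = float(silhouette.sum()) * mm_per_px ** 2
    return 2.0 * math.sqrt(area_mm2 / math.pi)
```

    Calibration against spheres of known diameter, as in the laboratory tests above, fixes the mm-per-pixel scale.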

  12. Registration of retinal sequences from new video-ophthalmoscopic camera.

    PubMed

    Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan; Liberdova, Ivana

    2016-05-20

    Analysis of fast temporal changes on retinas has become an important part of diagnostic video-ophthalmology. It enables investigation of the hemodynamic processes in retinal tissue, e.g. blood-vessel diameter changes as a result of blood-pressure variation, spontaneous venous pulsation influenced by intracranial-intraocular pressure difference, blood-volume changes as a result of changes in light reflection from retinal tissue, and blood flow using laser speckle contrast imaging. For such applications, image registration of the recorded sequence must be performed. Here we use a new non-mydriatic video-ophthalmoscope for simple and fast acquisition of low SNR retinal sequences. We introduce a novel, two-step approach for fast image registration. The phase correlation in the first stage removes large eye movements. Lucas-Kanade tracking in the second stage removes small eye movements. We propose robust adaptive selection of the tracking points, which is the most important part of tracking-based approaches. We also describe a method for quantitative evaluation of the registration results, based on vascular tree intensity profiles. The achieved registration error evaluated on 23 sequences (5840 frames) is 0.78 ± 0.67 pixels inside the optic disc and 1.39 ± 0.63 pixels outside the optic disc. We compared the results with the commonly used approaches based on Lucas-Kanade tracking and scale-invariant feature transform, which achieved worse results. The proposed method can efficiently correct particular frames of retinal sequences for shift and rotation. The registration results for each frame (shift in X and Y direction and eye rotation) can also be used for eye-movement evaluation during single-spot fixation tasks.
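
    The first registration stage, phase correlation, recovers large eye movements as the peak of the inverse FFT of the normalized cross-power spectrum of two frames. A minimal numpy sketch of that stage (integer shifts only; the paper's full pipeline adds Lucas-Kanade tracking on top):

```python
import numpy as np

def phase_correlation(a, b):
    # Estimate the integer (dy, dx) translation of frame b relative
    # to frame a from the normalized cross-power spectrum.
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    r = np.abs(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # peaks past half the frame correspond to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

    Shifting each frame back by the estimated (dy, dx) coarsely aligns the sequence before the finer tracking stage.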

  13. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  14. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  15. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors.

  16. High speed door assembly

    DOEpatents

    Shapiro, Carolyn

    1993-01-01

    A high speed door assembly, comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly to close over a 3 foot diameter opening within 50 milliseconds of actuation, to contain hazardous materials and vapors within a test fixture.

  17. High speed door assembly

    DOEpatents

    Shapiro, C.

    1993-04-27

    A high speed door assembly is described, comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly to close over a 3 foot diameter opening within 50 milliseconds of actuation, to contain hazardous materials and vapors within a test fixture.

  18. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
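The block match step referred to above can be illustrated with a minimal full-search sketch. This is a simplification, not the paper's multi-frame, CS-domain method; block size, search range, and the SAD cost are generic placeholders:

```python
import numpy as np

def sad(block, cand):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block.astype(np.int32) - cand.astype(np.int32)).sum()

def block_match(ref, cur, block=8, search=4):
    """Full search: for each block of `cur`, find the displacement
    within +/-`search` pixels that best matches `ref`.
    Returns an array of (dy, dx) motion vectors, one per block."""
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cost = sad(target, ref[y:y + block, x:x + block])
                        if best is None or cost < best:
                            best, best_v = cost, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors
```

Restricting candidates to a small window around each block is what delivers the reported runtime savings relative to searching the whole frame.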

  19. Surgical video recording with a modified GoPro Hero 4 camera

    PubMed Central

    Lin, Lily Koo

    2016-01-01

    Background Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. PMID:26834455

  20. Surgical video recording with a modified GoPro Hero 4 camera.

    PubMed

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  1. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed-exposure ones. A real-time hardware implementation of the HDR technique that shows more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work lies in the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
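The multiple-exposure principle behind HDR capture can be sketched in software. This is an illustration only, not the camera's hardware pipeline; it assumes a linear sensor response with pixel values normalized to [0, 1], and the "hat" weighting is a common convention rather than the paper's method:

```python
import numpy as np

def merge_hdr(frames, times):
    """Merge differently exposed frames into one radiance map.
    Each frame's per-pixel radiance estimate z / t is combined with
    a 'hat' weight that trusts mid-range values and discounts
    under- and over-exposed ones."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for z, t in zip(frames, times):
        w = 1.0 - np.abs(2.0 * z - 1.0)   # 1 at mid-grey, 0 at the extremes
        num += w * (z / t)
        den += w
    return num / np.maximum(den, 1e-9)
```

A saturated pixel in the long exposure gets zero weight, so the short exposure supplies its radiance; that is why dark and bright areas both retain detail.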

  2. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    SciTech Connect

    Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

    2006-09-05

    Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed, and successful applications from previous surveys of fish under the ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss the use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.

  3. A digital TV system for the detection of high speed human motion

    NASA Astrophysics Data System (ADS)

    Fang, R. C.

    1981-08-01

    Two array cameras and a force plate were linked to a PDP-11/34 minicomputer for on-line recording of high speed human motion. A microprocessor-based interface system was constructed to allow preprocessing and coordination of the video data before transfer to the minicomputer. Control programs for the interface system are stored on disk and loaded into the program storage areas of the microprocessor before the interface system starts its operation. Software programs for collecting and processing video and force data have been written. Experiments on the detection of human jumping have been carried out. Normal gait and amputee gait have also been recorded and analyzed.

  4. Kinematic measurements of the vocal-fold displacement waveform in typical children and adult populations: quantification of high-speed endoscopic videos.

    PubMed

    Patel, Rita; Donohue, Kevin D; Unnikrishnan, Harikrishnan; Kryscio, Richard J

    2015-04-01

    This article presents a quantitative method for assessing instantaneous and average lateral vocal-fold motion from high-speed digital imaging, with a focus on developmental changes in vocal-fold kinematics during childhood. Vocal-fold vibrations were analyzed for 28 children (aged 5-11 years) and 28 adults (aged 21-45 years) without voice disorders. The following kinematic features were analyzed from the vocal-fold displacement waveforms: relative velocity-based features (normalized average and peak opening and closing velocities), relative acceleration-based features (normalized peak opening and closing accelerations), speed quotient, and normalized peak displacement. Children exhibited significantly larger normalized peak displacements, normalized average and peak opening velocities, normalized average and peak closing velocities, peak opening and closing accelerations, and speed quotient compared to adult women. Values of normalized average closing velocity and speed quotient were higher in children compared to adult men. When compared to adult men, developing children typically have higher estimates of kinematic features related to normalized displacement and its derivatives. In most cases, the kinematic features of children are closer to those of adult men than adult women. Even though boys experience greater changes in glottal length and pitch as they mature, results indicate that girls experience greater changes in kinematic features compared to boys.

  5. A Novel Method to Reduce Time Investment When Processing Videos from Camera Trap Studies

    PubMed Central

    Swinnen, Kristijn R. R.; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species but instead were empty or contained other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we expected that recordings with the target species would contain on average much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step in the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs. PMID:24918777
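The variation-based filter described above can be sketched in a few lines. This is a simplification: the paper's two filter methods and its threshold choices are not specified here, so the score and threshold below are generic placeholders:

```python
import numpy as np

def motion_score(frames):
    """Mean absolute pixel change between consecutive frames -- a
    simple proxy for the amount of movement in a recording."""
    diffs = [np.abs(b.astype(np.int32) - a.astype(np.int32)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

def flag_for_review(frames, threshold):
    """Keep a recording for manual checking only if it shows enough
    frame-to-frame variation; low-variation clips are discarded."""
    return motion_score(frames) >= threshold
```

The threshold trades a small loss of target recordings against a large reduction in clips a human must watch, mirroring the 5-20% versus 53-76% trade-off reported above.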

  6. High speed imager test station

    DOEpatents

    Yates, G.J.; Albright, K.L.; Turko, B.T.

    1995-11-14

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.

  7. High speed imager test station

    DOEpatents

    Yates, George J.; Albright, Kevin L.; Turko, Bojan T.

    1995-01-01

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.

  8. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
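The abstract does not describe the digitizer's location algorithm; an intensity-weighted centroid of thresholded pixels is one common way to obtain sub-pixel hole locations from a digitized image, sketched here purely as an assumption:

```python
import numpy as np

def hole_centroid(image, threshold):
    """Estimate a bullet-hole location as the intensity-weighted
    centroid of pixels darker than `threshold` (the hole images dark
    against a light target). Returns (row, col) with sub-pixel
    resolution, or None if no candidate pixels are found."""
    mask = image < threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    w = threshold - image[mask].astype(np.float64)  # darker -> heavier
    return float(np.average(ys, weights=w)), float(np.average(xs, weights=w))
```

Averaging over many pixels is what allows repeatability far below one pixel, consistent with the micron-level figures quoted above.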

  9. A passive terahertz video camera based on lumped element kinetic inductance detectors.

    PubMed

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)--designed originally for far-infrared astronomy--as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  10. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m³/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers who work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  11. A passive terahertz video camera based on lumped element kinetic inductance detectors

    SciTech Connect

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  12. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Astrophysics Data System (ADS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-08-01

    A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  13. ARINC 818 adds capabilities for high-speed sensors and systems

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Grunwald, Paul

    2014-06-01

    ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A, and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, due to its high bandwidth and high reliability. The ARINC 818 specification, initially released in 2006, has recently undergone a major update that enhances its applicability as a high speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions include: video switching, stereo and 3-D provisions, color sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates to 32 Gbps, synchronization signals, options for high-speed coax interfaces, and optical interface details. The additions are especially appealing for high-bandwidth, multi-sensor systems that have issues with throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber optic high speed physical layers, and allows for time multiplexing multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.

  14. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230

    SciTech Connect

    Pardini, A.F., Westinghouse Hanford

    1996-07-16

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feedthrough for all electrical and pneumatic utilities needed by the end effector to operate.

  15. High-speed photography of microscale blast wave phenomena

    NASA Astrophysics Data System (ADS)

    Dewey, John M.; Kleine, Harald

    2005-03-01

    High-speed photography has been a primary tool for the study of blast wave phenomena, dating from the work of Toepler even before the invention of the camera! High-speed photography was used extensively for the study of blast waves produced by nuclear explosions, for which, because of the large scale, cameras running at a few hundred frames per second were adequate to obtain sharp images of the supersonic shock fronts. For the study of the blast waves produced by smaller explosive sources, ever-increasing framing rates were required. As a rough guide, for every three orders of magnitude decrease in charge size, a ten-fold increase of framing rate was needed. This severely limited the use of photography for the study of blast waves from laboratory-scale charges. There are many techniques for taking single photographs of explosive phenomena, but the strongly time-dependent development of a blast wave requires the ability to record a high-speed sequence of photographs of a single event. At ICHSPP25, Kondo et al. of Shimadzu Corporation demonstrated a 1 Mfps video camera that provides a sequence of up to 100 high-resolution frames. This was subsequently used at the Shock Wave Research Center of Tohoku University to record the blast waves generated by an extensive series of silver azide charges ranging in size from 10 to 0.5 mg. The resulting images were measured to provide radius-time histories of the primary and secondary shocks. These were analyzed with techniques similar to those used for the study of explosions from charges with masses ranging from 500 kg to 5 kt. The analyses showed the cube-root scaling laws to be valid for the very small charges, and provided a detailed record of the peak hydrostatic pressure as a function of radius for a unit charge of silver azide, over a wide range of scaled distances. The pressure-radius variation was compared to that from a unit charge of TNT, and this permitted a detailed determination of the TNT equivalence of silver azide.
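The cube-root scaling invoked above is conventionally the Hopkinson-Cranz form, which can be stated in a few lines (the specific charge masses in the example are illustrative, not data from the study):

```python
def scaled_distance(radius_m, charge_kg):
    """Hopkinson-Cranz (cube-root) scaled distance Z = R / W**(1/3).
    Charges of the same explosive observed at equal Z produce equal
    peak overpressure, which is what lets milligram laboratory
    charges be compared with kilogram- and kiloton-scale data."""
    return radius_m / charge_kg ** (1.0 / 3.0)

# A 1 kg charge observed at 10 m and a 1 mg charge observed at
# 0.1 m sit at the same scaled distance: radii scale with the cube
# root of charge mass.
```

TNT equivalence then amounts to finding the mass ratio that makes two explosives' pressure-versus-Z curves coincide.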

  16. A single-imager, single-lens video camera prototype for 3D imaging

    NASA Astrophysics Data System (ADS)

    Christopher, Lauren A.; Li, Weixu

    2012-03-01

    A new method for capturing 3D video from a single imager and lens is introduced. The benefit of this method is that it does not have the calibration and alignment issues associated with binocular 3D video cameras. It also does not require special ranging transmitters and sensors. Because it is a single lens/imager system, it is also less expensive than either the binocular or ranging cameras. Our system outputs a 2D image and associated depth image using the combination of a microfluidic lens and a Depth from Defocus (DfD) algorithm. The lens is capable of changing the focus to obtain two images at the normal video frame rate. The Depth from Defocus algorithm uses the in-focus and out-of-focus images to infer depth. We performed our experiments on synthetic images and on a real-aperture CMOS imager with a microfluidic lens. On synthetic images, we found an improvement in mean squared error compared to the literature on a limited test set. On camera images, our research showed that DfD combined with edge detection and segmentation provided subjective improvements in the resulting depth images.

  17. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed using Verilog HDL. The prototype chip has been verified on a low-cost programmable device. With the addition of extra components, the real-time camera system achieves 1270 × 792 resolution and demonstrates each DSP function.

  18. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  19. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-06-25

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. Object occlusion is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  20. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    SciTech Connect

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  1. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero-gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
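The error measure described above (distance from known to digitized grid coordinates) can be sketched directly. The abstract does not define how its percentage was normalized, so expressing the worst case relative to the grid's diagonal extent is an assumption:

```python
import numpy as np

def distortion_errors(known, measured):
    """Per-point Euclidean error between known grid coordinates and
    their digitized estimates, plus the worst case expressed as a
    percentage of the grid's diagonal extent."""
    known = np.asarray(known, dtype=np.float64)
    measured = np.asarray(measured, dtype=np.float64)
    errors = np.linalg.norm(measured - known, axis=1)
    extent = np.linalg.norm(known.max(axis=0) - known.min(axis=0))
    return errors, 100.0 * errors.max() / extent
```

Because distortion grows toward the lens periphery, restricting analysis to interior grid points lowers the reported percentage, as the study observed.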

  2. An efficient coding scheme for surveillance videos captured by stationary cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Xianguo; Liang, Luhong; Huang, Qian; Liu, Yazhou; Huang, Tiejun; Gao, Wen

    2010-07-01

    In this paper, a new scheme is presented to improve the coding efficiency of sequences captured by stationary cameras (i.e., static cameras) for video surveillance applications. We introduce two novel kinds of frames (namely, background frame and difference frame) to represent the foreground/background without object detection, tracking, or segmentation. The background frame is built using a background modeling procedure and periodically updated while encoding. The difference frame is calculated from the input frame and the background frame. A sequence structure is proposed to generate high quality background frames and efficiently code difference frames without delay, and then surveillance videos can be easily compressed by encoding the background frames and difference frames in a traditional manner. In practice, the H.264/AVC encoder JM 16.0 is employed as a built-in coding module to encode those frames. Experimental results on eight indoor and outdoor surveillance videos show that the proposed scheme achieves a 0.12 dB to 1.53 dB gain in PSNR over the JM 16.0 anchor specially configured for surveillance videos.
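The background/difference idea can be sketched minimally. This is an illustration of the principle, not the paper's JM 16.0 pipeline; the running-average model and the update rate `alpha` are generic placeholders:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model for a stationary camera;
    `alpha` controls how quickly the model adapts."""
    return (1.0 - alpha) * background + alpha * frame

def difference_frame(frame, background):
    """Residual after subtracting the modeled background; for a
    static camera it is near zero almost everywhere, so it codes
    cheaply. The original frame is recovered exactly as
    background + difference."""
    return frame.astype(np.float64) - background
```

Because the background changes slowly, it can be sent rarely and at high quality, while the sparse difference frames carry the per-frame cost.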

  3. The reactions of patients to a video camera in the consulting room

    PubMed Central

    Martin, Edwin; Martin, P. M. L.

    1984-01-01

    In a general practice survey of reactions to the presence of a video camera in the consulting room 13 per cent of patients refused to be filmed, and 11 per cent of those who did consent disapproved of recording. Patients were more willing to express their reservations about video recording if asked to fill in a questionnaire later at home rather than immediately at the surgery. Patients with anxiety, depression, or problems relating to the breasts or reproductive system were more likely to withhold consent. Patients were less likely to refuse video recording of their consultation if they were asked by the doctor for their verbal permission as they entered the consulting room rather than if they were asked to sign a consent form. Only a small minority of the patients who refused to be filmed felt that this refusal had affected their consultation with the doctor. PMID:6502570

  4. High-speed schlieren videography of vortex-ring impact on a wall

    NASA Astrophysics Data System (ADS)

    Kissner, Benjamin; Hargather, Michael; Settles, Gary

    2011-11-01

    Ring vortices of approximately 20 cm diameter are generated through the use of an Airzooka toy. To make the vortex visible, it is seeded with difluoroethane gas, producing a refractive-index difference with the air. A 1-meter-diameter, single-mirror, double-pass schlieren system is used to visualize the ring-vortex motion, and also to provide the wall with which the vortex collides. High-speed imaging is provided by a Photron SA-1 digital video camera. The Airzooka is fired toward the mirror almost along the optical axis of the schlieren system, so that the view of the vortex-mirror collision is normal to the path of vortex motion. Vortex-wall interactions similar to those first observed by Walker et al. (JFM 181, 1987) are recorded at high speed. The presentation will consist of a screening and discussion of these video results.

  5. High Speed Vortex Flows

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Wilcox, Floyd J., Jr.; Bauer, Steven X. S.; Allen, Jerry M.

    2000-01-01

    A review of the research conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) into high-speed vortex flows during the 1970s, 1980s, and 1990s is presented. The data reviewed cover flat plates, cavities, bodies, missiles, wings, and aircraft, and are presented and discussed relative to the design of future vehicles. Also presented is a brief historical review of the extensive body of high-speed vortex flow research from the 1940s to the present, in order to provide perspective on NASA LaRC's high-speed research results. Data are presented which show the types of vortex structures that occur at supersonic speeds, and the impact of these flow structures on vehicle performance and control is discussed. The data show the presence of both small- and large-scale vortex structures for a variety of vehicles, from missiles to transports. For cavities, the data show that very complex multiple vortex structures exist at all combinations of cavity depth-to-length ratio and Mach number. The data for missiles show the existence of very strong interference effects between body and/or fin vortices and the downstream fins; these vortex flow interference effects can be both positive and negative. Data are shown which highlight the effects that leading-edge sweep, leading-edge bluntness, wing thickness, location of maximum thickness, and camber have on the aerodynamics of and flow over delta wings. The observed flow fields for delta wings (i.e., separation bubble, classical vortex, vortex with shock, etc.) are discussed in the context of aircraft design, and data indicate that aerodynamic performance improvements are available by treating vortex flows as a primary design feature. Finally, a discussion of a design approach for wings that utilize vortex flows for improved aerodynamic performance at supersonic speeds is presented.

  6. High speed flywheel

    DOEpatents

    McGrath, Stephen V.

    1991-01-01

    A flywheel for operation at high speeds utilizes two or more ringlike components arranged in a spaced concentric relationship for rotation about an axis and an expansion device interposed between the components for accommodating radial growth of the components resulting from flywheel operation. The expansion device engages both of the ringlike components, and the structure of the expansion device ensures that it maintains its engagement with the components. In addition to its expansion-accommodating capacity, the expansion device also maintains flywheel stiffness during flywheel operation.

  7. High speed flywheel

    SciTech Connect

    McGrath, S.V.

    1991-05-07

    This patent describes a flywheel for operation at high speed which utilizes two or more ringlike components arranged in a spaced concentric relationship for rotation about an axis and an expansion device interposed between the components for accommodating radial growth of the components resulting from flywheel operation. The expansion device engages both of the ringlike components, and the structure of the expansion device ensures that it maintains its engagement with the components. In addition to its expansion-accommodating capacity, the expansion device also maintains flywheel stiffness during flywheel operation.

  8. Analysis of the technical biases of meteor video cameras used in the CILBO system

    NASA Astrophysics Data System (ADS)

    Albin, Thomas; Koschny, Detlef; Molau, Sirko; Srama, Ralf; Poppe, Björn

    2017-02-01

    In this paper, we analyse the technical biases of two intensified video cameras, ICC7 and ICC9, of the double-station meteor camera system CILBO (Canary Island Long-Baseline Observatory). This is done to thoroughly understand the effects of the camera systems on the scientific data analysis. We expect a number of errors or biases that come from the system: instrumental errors, algorithmic errors and statistical errors. We analyse different observational properties, in particular the detected meteor magnitudes, apparent velocities, estimated goodness-of-fit of the astrometric measurements with respect to a great circle and the distortion of the camera. We find that, due to a loss of sensitivity towards the edges, the cameras detect only about 55% of the meteors they could detect if they had a constant sensitivity. This detection efficiency is a function of the apparent meteor velocity. We analyse the optical distortion of the system and the goodness-of-fit of individual meteor position measurements relative to a fitted great circle. The astrometric error is dominated by uncertainties in the measurement of the meteor attributed to blooming, distortion of the meteor image and the development of a wake for some meteors. The distortion of the video images can be neglected. We compare the results of the two identical camera systems and find systematic differences. For example, the peak magnitude distribution for ICC9 is shifted by about 0.2-0.4 mag towards fainter magnitudes. This can be explained by the different pointing directions of the cameras. Since both cameras monitor the same volume in the atmosphere roughly between the two islands of Tenerife and La Palma, one camera (ICC7) points towards the west, the other one (ICC9) to the east. In particular, in the morning hours the apex source is close to the field-of-view of ICC9. Thus, these meteors appear slower, increasing the dwell time on a pixel. This is favourable for the detection of a meteor of a given

  9. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    Emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyper-spectral cameras opens broad possibilities of its utilization for military and industry purposes. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation that provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator that can offer the best display of the content depending on the performed task, e.g., providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or they can utilize each other's results. The spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to the combined multi-frame and multiband processing.
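
    As a toy illustration of the mosaic-array sampling discussed above (not the paper's demosaicking algorithm), a 3 × 3 pixel-level filter array can be separated into nine low-resolution band images; demosaicking or multi-band super-resolution would then re-estimate the missing samples of each band. The function name is hypothetical.

```python
import numpy as np

def split_mosaic(raw, n=3):
    """Separate an n x n pixel-level filter mosaic into n*n band images.

    Each band is sampled at 1/n of the sensor resolution along each axis,
    which is exactly the spatial-vs-spectral trade-off noted above.
    """
    h, w = raw.shape
    assert h % n == 0 and w % n == 0, "sensor must tile evenly"
    # Band (i, j) occupies every n-th pixel starting at row i, column j
    return [raw[i::n, j::n] for i in range(n) for j in range(n)]
```

    For a 9-band camera (n = 3), each band image has one ninth of the sensor's pixels, so recovering full-resolution band imagery requires interpolation across bands.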

  10. High speed transient sampler

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A high speed sampler comprises a meandered sample transmission line for transmitting an input signal, a straight strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates along the transmission lines. The sampling gates comprise a four terminal diode bridge having a first strobe resistor connected from a first terminal of the bridge to the positive strobe line, a second strobe resistor coupled from the third terminal of the bridge to the negative strobe line, a tap connected to the second terminal of the bridge and to the sample transmission line, and a sample holding capacitor connected to the fourth terminal of the bridge. The resistance of the first and second strobe resistors is much higher than the signal transmission line impedance in the preferred system. This results in a sampling gate which applies a very small load on the sample transmission line and on the strobe generator. The sample holding capacitor is implemented using a smaller capacitor and a larger capacitor isolated from the smaller capacitor by resistance. The high speed sampler of the present invention is also characterized by other optimizations, including transmission line tap compensation, stepped impedance strobe line, a multi-layer physical layout, and unique strobe generator design. A plurality of banks of such samplers are controlled for concatenated or interleaved sample intervals to achieve long sample lengths or short sample spacing.

  11. High speed transient sampler

    DOEpatents

    McEwan, T.E.

    1995-11-28

    A high speed sampler comprises a meandered sample transmission line for transmitting an input signal, a straight strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates along the transmission lines. The sampling gates comprise a four terminal diode bridge having a first strobe resistor connected from a first terminal of the bridge to the positive strobe line, a second strobe resistor coupled from the third terminal of the bridge to the negative strobe line, a tap connected to the second terminal of the bridge and to the sample transmission line, and a sample holding capacitor connected to the fourth terminal of the bridge. The resistance of the first and second strobe resistors is much higher than the signal transmission line impedance in the preferred system. This results in a sampling gate which applies a very small load on the sample transmission line and on the strobe generator. The sample holding capacitor is implemented using a smaller capacitor and a larger capacitor isolated from the smaller capacitor by resistance. The high speed sampler of the present invention is also characterized by other optimizations, including transmission line tap compensation, stepped impedance strobe line, a multi-layer physical layout, and unique strobe generator design. A plurality of banks of such samplers are controlled for concatenated or interleaved sample intervals to achieve long sample lengths or short sample spacing. 17 figs.

  12. High speed multiphoton imaging

    NASA Astrophysics Data System (ADS)

    Li, Yongxiao; Brustle, Anne; Gautam, Vini; Cockburn, Ian; Gillespie, Cathy; Gaus, Katharina; Lee, Woei Ming

    2016-12-01

    Intravital multiphoton microscopy has emerged as a powerful technique to visualize cellular processes in vivo. Real-time processes revealed through live imaging provide many opportunities to capture cellular activities in living animals. The typical parameters that determine the performance of multiphoton microscopy are speed, field of view, 3D imaging, and imaging depth; many of these are important for acquiring data in vivo. Here, we provide a full exposition of a flexible polygon-mirror-based high-speed laser scanning multiphoton imaging system, using a PCI-6110 card (National Instruments) and a high-speed analog frame grabber card (Matrox Solios eA/XA), which allows rapid adjustment of frame rates, i.e., 5 Hz to 50 Hz at 512 × 512 pixels. Furthermore, a motion correction algorithm is used to mitigate motion artifacts, and customized control software called Pscan 1.0 was developed for the system. This is followed by calibration of the imaging performance of the system and a series of quantitative in-vitro and in-vivo imaging experiments in neuronal tissue and mice.

  13. High speed flywheel

    SciTech Connect

    McGrath, S.V.

    1990-01-01

    This invention relates generally to flywheels and relates more particularly to the construction of a high speed, low-mass flywheel. Flywheels with which this invention is to be compared include those constructed of circumferentially wound filaments or fibers held together by a matrix or bonding material. Flywheels of such construction are known to possess a relatively high hoop strength but a relatively low radial strength. Hoop-wound flywheels are, therefore, particularly susceptible to circumferential cracks, and the radial stress limitations of such a flywheel substantially limit its speed capabilities. It is an object of the present invention to provide a new and improved flywheel which experiences reduced radial stress at high operating speeds. Another object of the present invention is to provide a flywheel whose construction allows for radial growth as flywheel speed increases while providing the necessary stiffness for transferring and maintaining kinetic energy within the flywheel. Still another object of the present invention is to provide a flywheel having concentrically-disposed component parts wherein rotation-induced radial stresses at the interfaces of such component parts approach zero. Yet another object of the present invention is to provide a flywheel which is particularly well-suited for high speed applications. 5 figs.

  14. An explanation for camera perspective bias in voluntariness judgment for video-recorded confession: Suggestion of cognitive frame.

    PubMed

    Park, Kwangbai; Pyo, Jimin

    2012-06-01

    Three experiments were conducted to test the hypothesis that difference in voluntariness judgment for a custodial confession filmed in different camera focuses ("camera perspective bias") could occur because a particular camera focus conveys a suggestion of a particular cognitive frame. In Experiment 1, 146 juror eligible adults in Korea showed a camera perspective bias in voluntariness judgment with a simulated confession filmed with two cameras of different focuses, one on the suspect and the other on the detective. In Experiment 2, the same bias in voluntariness judgment emerged without cameras when the participants were cognitively framed, prior to listening to the audio track of the videos used in Experiment 1, by instructions to make either a voluntariness judgment for a confession or a coerciveness judgment for an interrogation. In Experiment 3, the camera perspective bias in voluntariness judgment disappeared when the participants viewing the video focused on the suspect were initially framed to make coerciveness judgment for the interrogation and the participants viewing the video focused on the detective were initially framed to make voluntariness judgment for the confession. The results in combination indicated that a particular camera focus may convey a suggestion of a particular cognitive frame in which a video-recorded confession/interrogation is initially represented. Some forensic and policy implications were discussed.

  15. First results from newly developed automatic video system MAIA and comparison with older analogue cameras

    NASA Astrophysics Data System (ADS)

    Koten, P.; Páta, P.; Fliegel, K.; Vítek, S.

    2013-09-01

    A new automatic video system for meteor observations, MAIA, was developed in recent years [1]. The goal is to replace the older analogue cameras and provide a platform for continuous year-round observations from two different stations. Here we present the first results obtained during the testing phase as well as the first double-station observations; a comparison with the older analogue cameras is provided too. MAIA (Meteor Automatic Imager and Analyzer) is based on the digital monochrome camera JAI CM-040 and the well-proven image intensifier XX1332 (Figure 1). The camera provides a spatial resolution of 776 x 582 pixels, and the maximum frame rate is 61.15 frames per second. A fast Pentax SMS FA 1.4/50mm lens is used as the input element of the optical system; the resulting field of view is about 50° in diameter. The new system was first used in a semiautomatic regime for observation of the Draconid outburst on 8 October 2011, when both cameras recorded more than 160 meteors. Additional hardware and software were developed in 2012 to enable automatic observation and basic processing of the data. The system usually records video sequences for the whole night; during the daytime it searches the records for moving objects, saves them into short sequences, and clears the hard drives to allow additional observations. Initial laboratory measurements [2] and simultaneous observations with the older system show significant improvement of the obtained data. Table 1 shows a comparison of the basic parameters of both systems. In this paper we present a comparison of the double-station data obtained using both systems.

  16. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera

    PubMed Central

    2016-01-01

    Summary: Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon’s perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon’s perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504

  17. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
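
    The sequencing point above (confusable faults must be checked in a deliberate order) can be sketched with two simple no-reference checks: a severe exposure fault would also score as "blurry", so exposure is tested before focus. The thresholds, function names, and Laplacian-variance sharpness score are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def check_exposure(img, lo=10, hi=245):
    """Flag frames whose mean intensity suggests under/over-exposure."""
    mean = img.mean()
    return "under" if mean < lo else "over" if mean > hi else "ok"

def check_focus(img, threshold=25.0):
    """No-reference sharpness check via variance of a discrete Laplacian."""
    f = img.astype(float)
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    return "blur" if lap.var() < threshold else "ok"

def diagnose(img):
    # Order matters: a badly exposed frame has almost no gradient energy
    # and would be misclassified as out-of-focus if focus were checked first.
    exp = check_exposure(img)
    if exp != "ok":
        return "exposure:" + exp
    if check_focus(img) == "blur":
        return "focus:blur"
    return "ok"
```

    A fleet-monitoring script could run `diagnose` on a sampled frame from each device and escalate anything not reporting "ok".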

  18. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle with side-view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points from the vehicle is selected, and these points must be accurately tracked on at least two successive video frames. In the second step, using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. Computed velocity vectors are defined in the video image coordinate system, and displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object space information in a mathematical sense, which is achieved by means of the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems of using side-view camera images mentioned here. PMID:22399909
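
    The two steps above can be sketched numerically: pixel displacements of tracked points divided by the frame interval give image-space velocities, which a scale factor maps to object space. A single metres-per-pixel constant is a simplifying assumption here; as the abstract notes, a real system derives the image-to-object transformation from calibration and orientation parameters.

```python
import numpy as np

def estimate_speed(pts_t0, pts_t1, dt, meters_per_pixel):
    """Average speed of tracked points between two successive frames.

    pts_t0, pts_t1: (N, 2) pixel coordinates of the same points in two
    frames; dt: frame interval in seconds; meters_per_pixel: assumed
    constant image-to-object scale (a stand-in for full calibration).
    """
    # Pixel displacement of each tracked point between the frames
    disp = np.linalg.norm(np.asarray(pts_t1, float) - np.asarray(pts_t0, float),
                          axis=1)
    speeds = disp * meters_per_pixel / dt   # object-space speed per point, m/s
    return speeds.mean()

# 25 fps video, points moving 20 px/frame, 0.02 m per pixel -> 10 m/s
v = estimate_speed([[100, 50], [140, 52]], [[120, 50], [160, 52]],
                   dt=1 / 25, meters_per_pixel=0.02)
```

    Averaging over several tracked points, as in this sketch, damps the effect of individual tracking errors on the final speed estimate.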

  19. High speed civil transport

    NASA Technical Reports Server (NTRS)

    Bogardus, Scott; Loper, Brent; Nauman, Chris; Page, Jeff; Parris, Rusty; Steinbach, Greg

    1990-01-01

    The design process of the High Speed Civil Transport (HSCT) combines existing technology with the expectation of future technology to create a Mach 3.0 transport. The HSCT was designed to have a range in excess of 6000 nautical miles and carry up to 300 passengers. This range will allow the HSCT to service the economically expanding Pacific Basin region. Effort was made in the design to enable the aircraft to use conventional airports with standard 12,000 foot runways. With a takeoff thrust of 250,000 pounds, the four supersonic through-flow engines will accelerate the HSCT to a cruise speed of Mach 3.0. The 679,000 pound (at takeoff) HSCT is designed to cruise at an altitude of 70,000 feet, flying above most atmospheric disturbances.

  20. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  1. Research on simulation and verification system of satellite remote sensing camera video processor based on dual-FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Liu, Qi; Cui, Xuenan

    2014-09-01

    To satisfy the needs for testing video processors of satellite remote sensing cameras, a design is provided to achieve a simulation and verification system for satellite remote sensing camera video processors based on dual FPGAs. The correctness of the video processor FPGA logic can be verified even without CCD signals or an analog-to-digital converter. Two Xilinx Virtex FPGAs are adopted to form the central unit, and the logic for A/D digital data generation and data processing is developed in VHDL. The RS-232 interface is used to receive commands from the host computer, and different types of data are generated and outputted depending on the commands. Experimental results show that the simulation and verification system is flexible and works well. The system meets the requirements of testing video processors for several different types of satellite remote sensing cameras.

  2. Acute gastroenteritis and video camera surveillance: a cruise ship case report.

    PubMed

    Diskin, Arthur L; Caro, Gina M; Dahl, Eilif

    2014-01-01

    A 'faecal accident' was discovered in front of a passenger cabin of a cruise ship. After proper cleaning of the area the passenger was approached, but denied having any gastrointestinal symptoms. However, when confronted with surveillance camera evidence, she admitted having the accident and even bringing the towel stained with diarrhoea back to the pool towel bin. She was isolated until the next port, where she was disembarked. Acute gastroenteritis (AGE) caused by Norovirus is very contagious and easily transmitted from person to person on cruise ships. The main purpose of isolation is to avoid public vomiting and faecal accidents. Quickly identifying and isolating contagious passengers and crew, and ensuring their compliance, are key elements in outbreak prevention and control, but this is difficult if ill persons deny symptoms. All passenger ships visiting US ports now have surveillance video cameras, which under certain circumstances can assist in finding potential index cases for AGE outbreaks.

  3. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    PubMed

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  4. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    PubMed Central

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network. PMID:23112607

  5. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    NASA Astrophysics Data System (ADS)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  6. People counting and re-identification using fusion of video camera and laser scanner

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Olivera, Santiago; Wagley, Raj

    2016-05-01

    We present a system for people counting and re-identification. It can be used by transit and homeland security agencies. Under the FTA SBIR program, we have developed a preliminary system for transit passenger counting and re-identification using a laser scanner and video camera. The laser scanner is used to identify the locations of a passenger's head and shoulders in an image, a challenging task in crowded environments. It can also estimate the passenger height without prior calibration. Various color models have been applied to form color signatures. Finally, using a statistical fusion and classification scheme, passengers are counted and re-identified.

  7. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescence lifetime imaging is an optical technique that facilitates imaging of molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment,1, 2 measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited in acquisition time (on the order of seconds to minutes) and cannot support video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition-rate limitations. Frame rates of up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera.3 These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, compressed ultrafast photography allows an entire fluorescence lifetime image to be acquired with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we demonstrate the ability of this technique to perform single-shot fluorescence lifetime imaging of cells and microspheres.
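
As context for what a lifetime measurement computes, the sketch below recovers the lifetime tau from a single-exponential decay I(t) = A * exp(-t / tau) by a log-linear least-squares fit. This is a minimal illustration of the quantity being imaged, not the CUP reconstruction itself, which uses compressed sensing:

```python
import math

def fit_lifetime(times, intensities):
    """Recover tau from a noiseless monoexponential decay trace
    I(t) = A * exp(-t / tau) via a log-linear least-squares fit."""
    logs = [math.log(y) for y in intensities]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    # Ordinary least squares: slope = cov(t, log I) / var(t) = -1 / tau
    cov = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
    var = sum((t - t_mean) ** 2 for t in times)
    return -var / cov

# Synthetic decay with tau = 2.5 ns sampled over 10 ns
times = [i * 10e-9 / 199 for i in range(200)]
trace = [3.0 * math.exp(-t / 2.5e-9) for t in times]
print(round(fit_lifetime(times, trace) * 1e9, 2))  # -> 2.5 (nanoseconds)
```

Real data would require a noise-robust fit (e.g. weighted or maximum-likelihood), but the recovered quantity is the same tau.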

  8. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We present a Time-of-Flight (TOF) video camera that captures real-time depth images (a.k.a. depth maps), generated from fast-modulated IR images using a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the depth-image frame rate is high (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture that generates a single depth image from synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor as 2x2 sub-IR images. The depth image is then generated from these four simultaneously captured independent sub-IR images, eliminating the motion blur problem. The resulting performance is very useful for applying 3D cameras to human-machine interaction devices, such as user interfaces for TVs, monitors, or handheld devices, and to motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
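
The way four phase-shifted IR samples yield a depth value can be sketched with the standard four-phase TOF estimate (a textbook formula, not the authors' specific pipeline; the 20 MHz modulation and 2 m target below are assumed example values):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_depth(a0, a1, a2, a3, f_mod):
    """Standard four-phase ToF depth estimate from IR samples taken at
    0/90/180/270 degree shifts of a modulation at f_mod (Hz)."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Synthetic check: a 2.0 m target observed with 20 MHz modulation,
# samples a_k = B + A * cos(phi - k*pi/2)
f = 20e6
true_phase = 4 * math.pi * f * 2.0 / C
samples = [50 + 30 * math.cos(true_phase - k * math.pi / 2) for k in range(4)]
print(round(tof_depth(*samples, f), 3))  # -> 2.0
```

At 20 MHz the unambiguous range is C / (2 * f) = 7.5 m, which is why modulation frequency trades off range against depth resolution.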

  9. Development and calibration of acoustic video camera system for moving vehicles

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Lian, Xiaomin

    2011-05-01

    In this paper, a new acoustic video camera system is developed and its calibration method is established. The system is based on binocular vision and acoustical holography. With the binocular vision method, the spatial distance between the microphone array and the moving vehicle is obtained, and the sound reconstruction plane can be placed close to the moving vehicle surface automatically. The sound video is then reconstructed close to the moving vehicle by the acoustic holography method. With this system, moving and stationary sound sources are treated differently and automatically, which makes sound visualization of moving vehicles quicker, more intuitive, and more accurate. To verify the system, experiments with a stationary speaker and a non-stationary speaker were carried out, and further verification experiments with an outdoor moving vehicle were also conducted. The successful visualization results not only confirm the validity of the system but also suggest that it can be a useful tool in vehicle noise identification, because it allows users to locate noise sources easily from the videos. We believe the newly developed system has great potential in the identification and control of moving-vehicle noise.

  10. High speed packet switching

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This document constitutes the final report prepared by Proteon, Inc. of Westborough, Massachusetts under contract NAS 5-30629 entitled High-Speed Packet Switching (SBIR 87-1, Phase 2) prepared for NASA-Greenbelt, Maryland. The primary goal of this research project is to use the results of the SBIR Phase 1 effort to develop a sound, expandable hardware and software router architecture capable of forwarding 25,000 packets per second through the router and passing 300 megabits per second on the router's internal busses. The work being delivered under this contract received its funding from three different sources: the SNIPE/RIG contract (Contract Number F30602-89-C-0014, CDRL Sequence Number A002), the SBIR contract, and Proteon. The SNIPE/RIG and SBIR contracts had many overlapping requirements, which allowed the research done under SNIPE/RIG to be applied to SBIR. Proteon funded all of the work to develop new router interfaces other than FDDI, in addition to funding the productization of the router itself. The router being delivered under SBIR will be a fully product-quality machine. The work done during this contract produced many significant findings and results, summarized here and explained in detail in later sections of this report. The SNIPE/RIG contract was completed. That contract had many overlapping requirements with the SBIR contract, and resulted in the successful demonstration and delivery of a high-speed router. The development that took place during the SNIPE/RIG contract produced findings that included the choice of processor and an understanding of the issues surrounding interprocessor communication in a multiprocessor environment. Many significant speed enhancements to the router software were made during that time. Under the SBIR contract (and with help from Proteon-funded work), it was found that a single-processor router achieved a throughput significantly higher than originally anticipated. For this reason, a single processor router was

  11. High speed civil transport

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This report discusses the design and marketability of a next-generation supersonic transport. Apogee Aeronautics Corporation has designated its High Speed Civil Transport (HSCT) the Supercruiser HS-8. Since the beginning of the Concorde era, the general consensus has been that the proper time for the introduction of a next-generation Supersonic Transport (SST) would depend upon the technical advances made in the areas of propulsion (reduction in emissions) and composite materials (stronger, lighter materials). It is believed by many in the aerospace industry that these aforementioned technical advances lie on the horizon. With this being the case, this is the proper time to begin the design phase for the next-generation HSCT. The design objective for a HSCT was to develop an aircraft capable of transporting at least 250 passengers with baggage over a range of 5500 nmi. The supersonic Mach number is currently unspecified. In addition, the design had to be marketable, cost effective, and certifiable. To achieve this goal, technical advances over current SSTs must be made, especially in the areas of aerodynamics and propulsion. As a result of these required aerodynamic advances, several different supersonic design concepts were reviewed.

  12. High Speed Ice Friction

    NASA Astrophysics Data System (ADS)

    Seymour-Pierce, Alexandra; Sammonds, Peter; Lishman, Ben

    2014-05-01

    Many different tribological experiments have been run to determine the frictional behaviour of ice at high speeds, ostensibly with the intention of applying the results to everyday fields such as winter tyres and sports. However, experiments have only been conducted up to linear speeds of several metres per second, with few additional subject-specific studies reaching speeds comparable to these applications. Experiments were conducted in the cold rooms of the Rock and Ice Physics Laboratory, UCL, on a custom-built rotational tribometer based on previous literature designs. Preliminary results from experiments run at 2 m/s for ice temperatures of 271 and 263 K indicate that colder ice has a higher coefficient of friction, in accordance with the literature. These results will be presented, along with data from further experiments conducted at temperatures between 259 and 273 K (in order to cover a wide range of the temperature-dependent behaviour of ice) and speeds of 2-15 m/s, to produce a temperature-velocity-friction map for ice. The effect of temperature, speed, and slider geometry on the deformation of ice will also be investigated. These speeds approach those exhibited in sports such as the luge (where athletes slide downhill on an icy track), placing the tribological work in context.

  13. High Speed Civil Transport-737 Landings at Wallops Island

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA pilot Michael Wusk makes a 'windowless landing' aboard a NASA 737 research aircraft in flight tests aimed at developing technology for a future supersonic airliner. Cameras in the nose of the airplane relayed images to a computer screen in the aircraft's otherwise 'blind' research cockpit. Computer graphics were overlaid on the image to give cues to the pilot during approaches and landings. Researchers hope that by enhancing the pilot's vision with high-resolution video displays, aircraft designers of the future can do away with the expensive, mechanically drooping nose of early supersonic transports. The tests were conducted in flights at NASA's Wallops Flight Facility, Wallops, Va., from November 1995 through January 1996. The flight deck systems research is part of the joint NASA-US industry High-Speed Research (HSR) Program, aimed at developing technologies for an economically viable, environmentally friendly high-speed civil transport around the turn of the century. The work is directed by the HSR Program Office, located at NASA Langley Research Center, Hampton, Va.

  14. High speed transition prediction

    NASA Technical Reports Server (NTRS)

    Gasperas, Gediminis

    1992-01-01

    The main objective of this work period was to develop, acquire and apply state-of-the-art tools for the prediction of transition at high speeds at NASA Ames. Although various stability codes as well as basic state codes were acquired, the development of a new Parabolized Stability Equation (PSE) code was minimal. The time that was initially allocated for development was used on other tasks, in particular for the Leading Edge Suction problem, in acquiring proficiency in various graphics tools, and in applying these tools to evaluate various Navier-Stokes and Euler solutions. The second objective of this work period was to attend the Transition and Turbulence Workshop at NASA Langley in July and August, 1991. A report on the Workshop follows. From July 8, 1991 to August 2, 1991, the author participated in the Transition and Turbulence Workshop at NASA Langley. For purposes of interest here, analysis can be said to consist of solving simplified governing equations by various analytical methods, such as asymptotic methods, or by use of very meager computer resources. From the composition of the various groups at the Workshop, it can be seen that analytical methods are generally more popular in Great Britain than they are in the U.S., possibly due to historical factors and the lack of computer resources. Experimenters at the Workshop were mostly concerned with subsonic flows, and a number of demonstrations were provided, among which were a hot-wire experiment to probe the boundary layer on a rotating disc, a hot-wire rake to map a free shear layer behind a cylinder, and the use of heating strips on a flat plate to control instability waves and consequent transition. A high point of the demonstrations was the opportunity to observe the rather noisy 'quiet' supersonic pilot tunnel in operation.

  15. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and the ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway, providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  19. 11. INTERIOR VIEW OF 8-FOOT HIGH SPEED WIND TUNNEL. SAME ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. INTERIOR VIEW OF 8-FOOT HIGH SPEED WIND TUNNEL. SAME CAMERA POSITION AS VA-118-B-10 LOOKING IN THE OPPOSITE DIRECTION. - NASA Langley Research Center, 8-Foot High Speed Wind Tunnel, 641 Thornell Avenue, Hampton, Hampton, VA

  20. High-speed optical 3D sensing and its applications

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro

    2016-12-01

    This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The sensing speeds covered range from 100 to 1000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, and real-time systems are introduced. Three example applications of this type of sensing technology are also introduced: surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.

  1. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collisions, plasmas, spark discharges, an air bag in a car accident, or a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame-rate imaging is achieved by an in-situ storage image sensor, in which each pixel is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after the image-capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom in wiring on the front side 3). The BSI structure has the further advantage that additional layers, such as scintillators, can be attached to the backside with little difficulty. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in the integration.

  2. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate dense materials, such as leather, wool, wood, and gyprock, and to propagate over long distances thanks to low atmospheric absorption makes them ideal for numerous applications, such as body scanning, building inspection, and seeing in degraded visual environments. The current drawbacks of millimeter-wave imaging systems are that single-detector or linear arrays require scanning, while two-dimensional arrays are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact lightweight camera, based on a 384 x 288 microbolometer-pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of that work focused on transmission imaging as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera for reflection imaging at millimeter wavelengths. Snapshots of objects taken at video rates show the excellent quality of the images. A description of the imaging system, which includes the terahertz camera and different millimeter-wave sources, is also provided.

  3. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    USGS Publications Warehouse

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) gradual variations in radiance reveal steady flow-field extension and tube development, and (b) discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera, and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these correlate with video-recorded short-burst effusive events. The remaining, more ambiguous events are interpreted, assessed, and related to specific volcanic events through simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatic alerting of major effusive events; (b) event confirmation and assessment; and (c) establishment of effusive event chronology.
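
The spike criterion described above (samples elevated more than two standard deviations above the mean of the series) can be sketched directly; this is an illustration of the statistical test, not the operational GOES processing chain:

```python
import statistics

def radiance_spikes(series, k=2.0):
    """Return indices of samples more than k standard deviations
    above the series mean -- the spike criterion described above."""
    mu = statistics.fmean(series)
    sigma = statistics.stdev(series)
    return [i for i, r in enumerate(series) if r > mu + k * sigma]

# A flat background radiance with one burst-like excursion at index 20
series = [1.0] * 20 + [8.0] + [1.0] * 20
print(radiance_spikes(series))  # -> [20]
```

Note that a large spike inflates the global mean and standard deviation, so operational detectors often use a robust or windowed baseline instead.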

  4. The application of the high-speed photography in the experiments of boiling liquid expanding vapor explosions

    NASA Astrophysics Data System (ADS)

    Chen, Sining; Sun, Jinhua; Chen, Dongliang

    2007-01-01

    A liquefied petroleum gas (LPG) tank in certain failure situations may release its contents, after which a series of hazards of varying severity may occur. The most dangerous is the boiling liquid expanding vapor explosion (BLEVE). In this paper, a small-scale experiment was established to investigate the possible processes that could lead to a BLEVE. As using LPG in the experiments carries some danger, water was used as the test fluid. Changes in pressure and temperature were measured during the experiment, and the ejection of the vapor and the subsequent two-phase flow were recorded by a high-speed video camera. It was observed that two pressure peaks occur after the pressure is released. The vapor was first ejected at high speed, producing a sudden pressure drop that left the liquid superheated. The superheated liquid then boiled violently, causing the liquid contents to swell and the vapor pressure in the tank to rise rapidly. The second pressure peak was likely due to the swelling two-phase flow violently impacting the wall of the tank at high speed. The whole evolution of the two-phase flow was recorded in photographs captured by the high-speed video camera, and the 'two-step' BLEVE process was confirmed.

  5. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end, non-calibrated video camera. The two main challenges addressed are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers: several persons may be treated as a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, the segmentation results are post-processed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing robust estimation of each object's future position even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
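
The adaptive background model plus thresholding step can be sketched with an exponential running average, a common simple form of such a model (the paper's update additionally uses motion information; the learning rate and threshold below are assumed values):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average update of a per-pixel background model.
    alpha is an assumed learning rate controlling adaptation speed."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25.0):
    """Binary foreground mask by absolute differencing against the model."""
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

# A 3x3 dark scene where one bright pixel (a moving object) appears
bg = [[10.0] * 3 for _ in range(3)]
frame = [[10.0] * 3 for _ in range(3)]
frame[1][1] = 200.0
mask = foreground_mask(bg, frame)
print(mask[1][1], mask[0][0])  # -> True False
```

Because the model keeps adapting, an object that stops moving is gradually absorbed into the background, which is why motion-gated updates help.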

  6. Flow visualization by mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. These devices not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sports events or other fast processes. This article therefore explores the possibility of exploiting this development, and the widespread availability of these cameras, for velocity measurements in industrial or technical applications and in fluid dynamics education at high schools and universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment with a free water jet was used to prove the concept, shed some light on the achievable quality, and identify bottlenecks by comparing the results obtained with a mobile phone camera against data taken by a high-speed camera suited for scientific experiments.

  7. Identifying predators and fates of grassland passerine nests using miniature video cameras

    USGS Publications Warehouse

    Pietz, Pamela J.; Granfors, Diane A.

    2000-01-01

    Nest fates, causes of nest failure, and identities of nest predators are difficult to determine for grassland passerines. We developed a miniature video-camera system for use in grasslands and deployed it at 69 nests of 10 passerine species in North Dakota during 1996-97. Abandonment rates were higher at nests 1 day or night (22-116 hr) at 6 nests, 5 of which were depredated by ground squirrels or mice. For nests without cameras, estimated predation rates were lower for ground nests than aboveground nests (P = 0.055), but did not differ between open and covered nests (P = 0.74). Open and covered nests differed, however, when predation risk (estimated by initial-predation rate) was examined separately for day and night using camera-monitored nests; the frequency of initial predations that occurred during the day was higher for open nests than covered nests (P = 0.015). Thus, vulnerability of some nest types may depend on the relative importance of nocturnal and diurnal predators. Predation risk increased with nestling age from 0 to 8 days (P = 0.07). Up to 15% of fates assigned to camera-monitored nests were wrong when based solely on evidence that would have been available from periodic nest visits. There was no evidence of disturbance at nearly half the depredated nests, including all 5 depredated by large mammals. Overlap in types of sign left by different predator species, and variability of sign within species, suggests that evidence at nests is unreliable for identifying predators of grassland passerines.

  8. Multiformat video and laser cameras: history, design considerations, acceptance testing, and quality control. Report of AAPM Diagnostic X-Ray Imaging Committee Task Group No. 1.

    PubMed

    Gray, J E; Anderson, W F; Shaw, C C; Shepard, S J; Zeremba, L A; Lin, P J

    1993-01-01

    Acceptance testing and quality control of video and laser cameras is relatively simple, especially with the use of the SMPTE test pattern. Photographic quality control is essential if one wishes to be able to maintain the quality of video and laser cameras. In addition, photographic quality control must be carried out with the film used clinically in the video and laser cameras, and with a sensitometer producing a light spectrum similar to that of the video or laser camera. Before the end of the warranty period a second acceptance test should be carried out. At this time the camera should produce the same results as noted during the initial acceptance test. With the appropriate acceptance and quality control the video and laser cameras should produce quality images throughout the life of the equipment.

  9. High-speed 3D imaging using two-wavelength parallel-phase-shift interferometry.

    PubMed

    Safrani, Avner; Abdulhalim, Ibrahim

    2015-10-15

    High-speed three-dimensional imaging based on two-wavelength parallel-phase-shift interferometry is presented. The technique is demonstrated using a high-resolution polarization-based Linnik interferometer operating with three high-speed phase-masked CCD cameras and two quasi-monochromatic modulated light sources. The two light sources allow phase unwrapping of the single-source wrapped phase, so that relatively high step profiles, with heights as large as 3.7 μm, can be imaged at video rate with ±2 nm accuracy and repeatability. The technique is validated using a certified very large scale integration (VLSI) step standard, followed by a demonstration from the semiconductor industry showing an integrated chip with 2.75-μm-high copper micropillars at different packing densities.
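
The role of the second wavelength can be sketched with the standard synthetic-wavelength relation: two wrapped phases at nearby wavelengths combine into a phase at a much longer beat wavelength, extending the unambiguous height range. This is an illustrative calculation with assumed wavelengths (532 nm and 633 nm), not the paper's exact optical parameters:

```python
import math

def two_wavelength_height(phi_short, phi_long, lam_short, lam_long):
    """Surface height from two wrapped phases (reflection geometry, so
    phi = 4*pi*h/lambda) using the synthetic wavelength
    Lambda = lam_short*lam_long / (lam_long - lam_short).
    Unambiguous for heights up to Lambda/2."""
    lam_syn = lam_short * lam_long / (lam_long - lam_short)
    phi_syn = (phi_short - phi_long) % (2 * math.pi)
    return lam_syn * phi_syn / (4 * math.pi)

# Assumed wavelengths give Lambda ~ 3.33 um, so a 1.2 um step (beyond
# either single wavelength's lambda/2 range) is recovered without ambiguity
lam_s, lam_l, h_true = 532e-9, 633e-9, 1.2e-6
phi_s = (4 * math.pi * h_true / lam_s) % (2 * math.pi)
phi_l = (4 * math.pi * h_true / lam_l) % (2 * math.pi)
print(round(two_wavelength_height(phi_s, phi_l, lam_s, lam_l) * 1e6, 3))  # -> 1.2
```

The trade-off is noise amplification: phase noise scales with the synthetic wavelength, which is why the fine single-wavelength phase is usually re-applied after unwrapping to reach nanometre accuracy.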

  10. Characterization of energetic devices for thermal battery applications by high-speed photography

    SciTech Connect

    Dosser, L.R.; Guidotti, R.

    1993-12-31

    High-speed photography at rates of up to 20,000 images per second was used to characterize thermal battery igniters and the ignition of the thermal battery itself. By synchronizing a copper vapor laser to the high-speed camera, laser-illuminated images recorded details of the performance of a component. Output characteristics of several types of hermetically sealed igniters using a TiHx/KClO4 pyrotechnic blend were measured as a function of the particle size of the pyrotechnic fuel and the closure disc thickness. The igniters were filmed under both ambient (i.e., unconfined) and confined conditions. Recently, the function of the igniter in a cut-away section of a "mock" thermal battery has been filmed. Partial details of these films are discussed in this paper, and selected examples of the films will be displayed via video tape during the presentation of the paper.

  11. Optimizing Detection Rate and Characterization of Subtle Paroxysmal Neonatal Abnormal Facial Movements with Multi-Camera Video-Electroencephalogram Recordings.

    PubMed

    Pisani, Francesco; Pavlidis, Elena; Cattani, Luca; Ferrari, Gianluigi; Raheli, Riccardo; Spagnoli, Carlotta

    2016-06-01

    Objectives: We retrospectively analyzed the diagnostic accuracy for paroxysmal abnormal facial movements, comparing a one-camera versus a multi-camera approach. Background: Polygraphic video-electroencephalogram (vEEG) recording is the current gold standard for brain monitoring in high-risk newborns, especially when neonatal seizures are suspected. One camera synchronized with the EEG is commonly used. Methods: Since mid-June 2012, we have used multiple cameras, one of which points toward the newborns' faces. We evaluated vEEGs recorded in newborns between mid-June 2012 and the end of September 2014 and compared, for each recording, the diagnostic accuracies obtained with the one-camera and multi-camera approaches. Results: We recorded 147 vEEGs from 87 newborns and found 73 episodes of paroxysmal abnormal facial movements in 18 vEEGs of 11 newborns with the multi-camera approach. With the single-camera approach, only 28.8% of these events were identified (21/73). Ten vEEGs positive with multi-camera, containing 52 paroxysmal abnormal facial movements (52/73, 71.2%), would have been considered negative with the single-camera approach. Conclusions: The use of one additional facial camera can significantly increase the diagnostic accuracy of vEEGs in the detection of paroxysmal abnormal facial movements in newborns.

  12. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera

    SciTech Connect

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and for estimating soil moisture with a neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations, and less when it was applied at depths with lower root concentrations. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the roots there appeared to be less effective in uptake than the same quantity of roots at shallow depths with high root concentration. 29 refs., 6 figs., 11 tabs.

  13. Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera

    NASA Astrophysics Data System (ADS)

    Hahne, Christopher; Aggoun, Amar

    2014-03-01

    A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed design synthesizes refocusing slices directly from micro images, omitting the commonly used sub-aperture extraction step. To this end, intellectual property cores containing switch-controlled Finite Impulse Response (FIR) filters are developed and applied to the Field Programmable Gate Array (FPGA) XC6SLX45 from Xilinx. To make the hardware design economical, the FIR filters combine stored products with upsampling and interpolation techniques, achieving a good balance between image resolution, delay time, power consumption and logic-gate demand. The video output is transmitted via High-Definition Multimedia Interface (HDMI) at a resolution of 720p and a frame rate of 60 fps, conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.
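    The refocusing operation itself can be illustrated in software. The sketch below is a generic shift-and-sum over sub-aperture views (the very step the paper's FIR architecture avoids by working on micro images directly), restricted to integer-pixel shifts for simplicity; the function name and data layout are assumptions:

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-sum refocusing over a grid of sub-aperture views.

    views: dict mapping (u, v) aperture offsets to 2-D arrays.
    alpha: refocusing parameter; each view is shifted in proportion
    to its aperture offset before averaging.
    """
    acc = None
    for (u, v), img in views.items():
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)
```

    Scene points at the depth selected by alpha are realigned across all views and add coherently; points at other depths are blurred, which is exactly the synthetic-focus effect.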

  14. A two camera video imaging system with application to parafoil angle of attack measurements

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1991-01-01

    This paper describes the development of a two-camera, video imaging system for the determination of three-dimensional spatial coordinates from stereo images. This system successfully measured angle of attack at several span-wise locations for large-scale parafoils tested in the NASA Ames 80- by 120-Foot Wind Tunnel. Measurement uncertainty for angle of attack was less than 0.6 deg. The stereo ranging system was the primary source for angle of attack measurements since inclinometers sewn into the fabric ribs of the parafoils had unknown angle offsets acquired during installation. This paper includes discussions of the basic theory and operation of the stereo ranging system, system measurement uncertainty, experimental set-up, calibration results, and test results. Planned improvements and enhancements to the system are also discussed.
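    The core of such a stereo ranging system is triangulation of a 3-D point from its two image projections. A minimal linear (DLT) version, assuming known 3x4 projection matrices rather than the paper's actual calibration procedure, looks like this:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two camera views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates.
    Returns the 3-D point as the null vector of the stacked constraints.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution
    return X[:3] / X[3]        # dehomogenize
```

    With noisy image points the SVD gives the least-squares solution, so per-point reprojection residuals can serve as the kind of uncertainty estimate the paper reports.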

  15. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off between spatial and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video produced by simulation experiments or by an optically simulated random sampling camera, because no commercially available image sensors offer random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the exposure duration by row within each 8x8-pixel block. The sensor is therefore not fully controllable pixel by pixel and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility to realize pseudo-random sampling for high-speed video acquisition, and we reconstruct the high-speed video sequence from the pseudo-randomly sampled images using an over-complete dictionary.
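    The line-dependent control described above can be mimicked in a short sketch: per-column reset times and per-row exposure durations jointly define the space-time sampling pattern of one 8x8 block. The variable names and the exact control rule are assumptions, since the abstract only states that resets are per-column and exposure amounts per-row:

```python
import numpy as np

def block_exposure_mask(T, rng):
    """Boolean (T, 8, 8) mask for one 8x8 block over T sub-frame slots.

    Column j sets when exposure starts (reset released); row i sets how
    long it lasts, i.e. control is per-line rather than per-pixel.
    """
    start = rng.integers(0, T, size=8)     # per-column reset time
    dur = rng.integers(1, T + 1, size=8)   # per-row exposure duration
    t = np.arange(T)[:, None, None]
    s = start[None, None, :]
    d = dur[None, :, None]
    return (t >= s) & (t < np.minimum(s + d, T))
```

    The resulting mask is pseudo-random in space-time yet satisfies the row/column constraints, which is the flexibility the reconstruction stage exploits.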

  16. Nonconvex compressive video sensing

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang; Yan, Ming; Qian, Chunqi; Xi, Ning; Zhou, Zhanxin; Yang, Yongliang; Song, Bo; Dong, Lixin

    2016-11-01

    High-speed cameras capture more temporal detail than normal cameras, but conventional video sampling suffers from a trade-off between temporal and spatial resolution due to the sensor's physical limitations. Compressive sensing overcomes this obstacle by combining the sampling and compression procedures. A single-pixel-based real-time video acquisition is proposed to record dynamic scenes, and a fast algorithm for the nonconvex sorted ℓ1 regularization is applied to reconstruct frame differences from a small number of measurements. Then, an edge-detection-based denoising method is employed to reduce the error in the frame-difference image. The experimental results show that the proposed algorithm, together with the single-pixel imaging system, makes compressive video cameras feasible.
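    A toy version of the measurement model can be sketched: consecutive single-pixel measurement vectors differ by Phi @ d, where d is the sparse frame difference, so d can be recovered from few measurements. For simplicity the sketch substitutes plain ℓ1 ISTA for the paper's fast nonconvex sorted ℓ1 algorithm; all names are illustrative:

```python
import numpy as np

def ista(Phi, y, lam=0.02, iters=1000):
    """Recover a sparse vector from y = Phi @ d by iterative
    soft-thresholding (plain l1 here; the paper uses nonconvex sorted l1)."""
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    d = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = d + Phi.T @ (y - Phi @ d) / L     # gradient step on the data term
        d = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return d

rng = np.random.default_rng(1)
m, n = 32, 64                                  # half as many measurements as pixels
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
d_true = np.zeros(n)
d_true[[5, 20, 47]] = [1.0, -1.0, 0.5]         # sparse frame difference
d_hat = ista(Phi, Phi @ d_true)
```

    Because only the difference between frames must be sparse, static background costs nothing, which is what makes single-pixel video acquisition practical for dynamic scenes.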

  17. Autonomous video camera system for monitoring impacts to benthic habitats from demersal fishing gear, including longlines

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Robert; Ewing, Graeme; Lamb, Tim; Welsford, Dirk; Constable, Andrew

    2011-04-01

    Studies of the interactions of demersal fishing gear with the benthic environment are needed in order to manage conservation of benthic habitats. There has been limited direct assessment of these interactions through deployment of cameras on commercial fishing gear, especially on demersal longlines. A compact, autonomous deep-sea video system was designed and constructed by the Australian Antarctic Division (AAD) for deployment on commercial fishing gear to observe interactions with benthos in the Southern Ocean finfish fisheries (targeting toothfish, Dissostichus spp.). The Benthic Impacts Camera System (BICS) is capable of withstanding depths to 2500 m, has been successfully fitted to both longline and demersal trawl fishing gear, and is suitable for routine deployment by non-experts such as fisheries observers or crew. The system is entirely autonomous, robust, compact, easy to operate, and has minimal effect on the performance of the fishing gear it is attached to. To date, the system has successfully captured footage that demonstrates the interactions between demersal fishing gear and the benthos during routine commercial operations. It provides the first footage demonstrating the nature of the interaction between demersal longlines and benthic habitats in the Southern Ocean, as well as showing potential as a tool for rapidly assessing habitat types and presence of mobile biota such as krill (Euphausia superba).

  18. Characterization and Compensation of High Speed Digitizers

    SciTech Connect

    Fong, P; Teruya, A; Lowry, M

    2005-04-04

    Increasingly, ADC technology is being pressed into service for single-shot instrumentation applications that were formerly served by vacuum-tube-based oscilloscopes and streak cameras. ADC technology, while convenient, suffers significant performance impairments. Thus, in these demanding applications, a quantitative and accurate representation of these impairments is critical to an understanding of measurement accuracy. We have developed a phase-plane behavioral model, implemented it in SIMULINK, and applied it to interleaved, high-speed ADCs (up to 4 gigasamples/sec). We have also developed and demonstrated techniques to effectively compensate for these impairments based upon the model.

  19. 3-D high-speed imaging of volcanic bomb trajectory in basaltic explosive eruptions

    USGS Publications Warehouse

    Gaudin, D.; Taddeucci, J; Houghton, B. F.; Orr, Tim R.; Andronico, D.; Del Bello, E.; Kueppers, U.; Ricci, T.; Scarlato, P.

    2016-01-01

    Imaging, in general, and high speed imaging in particular are important emerging tools for the study of explosive volcanic eruptions. However, traditional 2-D video observations cannot measure volcanic ejecta motion toward and away from the camera, strongly hindering our capability to fully determine crucial hazard-related parameters such as explosion directionality and pyroclasts' absolute velocity. In this paper, we use up to three synchronized high-speed cameras to reconstruct pyroclast trajectories in three dimensions. Classical stereographic techniques are adapted to overcome the difficult observation conditions of active volcanic vents, including the large number of overlapping pyroclasts which may change shape in flight, variable lighting and clouding conditions, and lack of direct access to the target. In particular, we use a laser rangefinder to measure the geometry of the filming setup and manually track pyroclasts on the videos. This method reduces uncertainties to 10° in azimuth and dip angle of the pyroclasts, and down to 20% in the absolute velocity estimation. We demonstrate the potential of this approach by three examples: the development of an explosion at Stromboli, a bubble burst at Halema'uma'u lava lake, and an in-flight collision between two bombs at Stromboli.
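    Once pyroclast positions have been triangulated frame by frame, the hazard-related quantities named above follow directly from finite differences. A sketch, with assumed coordinate conventions (x east, y north, z up):

```python
import numpy as np

def trajectory_params(track, dt):
    """Azimuth, dip and absolute velocity from a 3-D track.

    track: (N, 3) positions in metres, one row per frame;
    dt: frame spacing in seconds. Angles are in degrees.
    """
    v = np.gradient(track, dt, axis=0)                  # finite-difference velocity
    speed = np.linalg.norm(v, axis=1)                   # absolute velocity
    azimuth = np.degrees(np.arctan2(v[:, 1], v[:, 0]))  # direction in the horizontal plane
    dip = np.degrees(np.arctan2(v[:, 2], np.hypot(v[:, 0], v[:, 1])))
    return azimuth, dip, speed
```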

  20. 3-D high-speed imaging of volcanic bomb trajectory in basaltic explosive eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Del Bello, E.; Kueppers, U.; Ricci, T.; Scarlato, P.

    2016-10-01

    Imaging, in general, and high speed imaging in particular are important emerging tools for the study of explosive volcanic eruptions. However, traditional 2-D video observations cannot measure volcanic ejecta motion toward and away from the camera, strongly hindering our capability to fully determine crucial hazard-related parameters such as explosion directionality and pyroclasts' absolute velocity. In this paper, we use up to three synchronized high-speed cameras to reconstruct pyroclast trajectories in three dimensions. Classical stereographic techniques are adapted to overcome the difficult observation conditions of active volcanic vents, including the large number of overlapping pyroclasts which may change shape in flight, variable lighting and clouding conditions, and lack of direct access to the target. In particular, we use a laser rangefinder to measure the geometry of the filming setup and manually track pyroclasts on the videos. This method reduces uncertainties to 10° in azimuth and dip angle of the pyroclasts, and down to 20% in the absolute velocity estimation. We demonstrate the potential of this approach by three examples: the development of an explosion at Stromboli, a bubble burst at Halema'uma'u lava lake, and an in-flight collision between two bombs at Stromboli.

  1. Gated high speed optical detector

    NASA Technical Reports Server (NTRS)

    Green, S. I.; Carson, L. M.; Neal, G. W.

    1973-01-01

    The design, fabrication, and test of two gated, high speed optical detectors for use in high speed digital laser communication links are discussed. The optical detectors used a dynamic crossed field photomultiplier and electronics including dc bias and RF drive circuits, automatic remote synchronization circuits, automatic gain control circuits, and threshold detection circuits. The equipment is used to detect binary encoded signals from a mode-locked neodymium laser.

  2. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players), which can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  3. The use of high-speed imaging in education

    NASA Astrophysics Data System (ADS)

    Kleine, H.; McNamara, G.; Rayner, J.

    2017-02-01

    Recent improvements in camera technology and the associated improved access to high-speed camera equipment have made it possible to use high-speed imaging not only in a research environment but also specifically for educational purposes. This includes high-speed sequences that are created both with and for a target audience of students in high schools and universities. The primary goal is to engage students in scientific exploration by providing them with a tool that allows them to see and measure otherwise inaccessible phenomena. High-speed imaging has the potential to stimulate students' curiosity as the results are often surprising or may contradict initial assumptions. "Live" demonstrations in class or student-run experiments are highly suitable to have a profound influence on student learning. Another aspect is the production of high-speed images for demonstration purposes. While some of the approaches known from the application of high speed imaging in a research environment can simply be transferred, additional techniques must often be developed to make the results more easily accessible for the targeted audience. This paper describes a range of student-centered activities that can be undertaken which demonstrate how student engagement and learning can be enhanced through the use of high speed imaging using readily available technologies.

  4. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-$500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the

  5. A 3-D High Speed Photographic Survey For Bomb Dropping In The Wind Tunnel

    NASA Astrophysics Data System (ADS)

    Junren, Chen; Liangyi, Chen; Yuxian, Nie; Wenxing, Chen

    1989-06-01

    High-speed stereophotography can obtain 3-D information about a moving object. This paper deals with a high-speed stereophotographic survey of bomb dropping in a wind tunnel and the measurement of displacement, velocity, acceleration, angle of attack and yaw angle. Two high-speed cine cameras are used; their optical axes are perpendicular to each other and lie in a plane perpendicular to the plumb line. The optical axis of one camera (the front camera) is parallel to the aircraft body, and that of the other (the side camera) is perpendicular to it. Before filming, the object and image distances of the two cameras must be measured by a photographic method. The photographic rate is 304 fps.
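    Given the 3-D positions recovered frame by frame, displacement, velocity and acceleration follow from finite differences at the 304 fps framing rate. A sketch (the function name and array layout are assumptions):

```python
import numpy as np

FPS = 304.0  # framing rate quoted in the abstract

def kinematics(pos):
    """Velocity and acceleration of the dropped model by central differences.

    pos: (N, 3) positions in metres, one row per frame.
    """
    dt = 1.0 / FPS
    vel = np.gradient(pos, dt, axis=0)   # central differences in the interior
    acc = np.gradient(vel, dt, axis=0)
    return vel, acc
```

    Central differences are exact for constant-acceleration motion away from the first and last frames, which is a reasonable check against a free-fall segment of the drop.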

  6. Very high-speed digital holography

    NASA Astrophysics Data System (ADS)

    Pérez López, Carlos; Mendoza Santoyo, Fernando; Rodríguez Vera, Ramón; Moreno, David; Barrientos, Bernardino

    2006-08-01

    The use of a high-speed camera in digital holography with out-of-plane sensitivity is reported for the first time. The camera takes image-plane holograms of a cw-laser-illuminated, rectangular-framed polyester material at a rate of 5000 holograms per second, that is, a spacing of 200 microseconds between holograms, with 512 by 500 pixels at 10-bit resolution. The freely standing object has a random movement due to uncontrolled environmental air currents. As is usual with this technique, each digital hologram is Fourier processed and compared with a consecutive digital hologram to obtain the phase map of the displacement. High-quality results showing the amplitude and direction of the random movement are presented.
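    The Fourier processing step can be sketched for an off-axis geometry: shift the +1 diffraction order to DC, low-pass filter, inverse transform, and subtract the phases of consecutive holograms. The carrier location and filter radius here are assumptions, not values from the paper:

```python
import numpy as np

def phase_map(holo1, holo2, carrier, radius=0.1):
    """Displacement phase map from two off-axis digital holograms.

    carrier: (ky, kx) integer FFT bins of the +1 order;
    radius: low-pass cut-off in cycles/pixel.
    """
    def complex_field(h):
        F = np.fft.fft2(h)
        F = np.roll(F, (-carrier[0], -carrier[1]), axis=(0, 1))  # order to DC
        fy = np.fft.fftfreq(h.shape[0])[:, None]
        fx = np.fft.fftfreq(h.shape[1])[None, :]
        return np.fft.ifft2(F * (np.hypot(fy, fx) < radius))     # low-pass
    # phase of the displacement between the two exposures
    return np.angle(complex_field(holo2) * np.conj(complex_field(holo1)))
```

    Subtracting phases of consecutive holograms (rather than unwrapping each one) directly yields the wrapped phase of the inter-frame displacement, which is what the out-of-plane measurement needs.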

  7. The application of high-speed digital image correlation.

    SciTech Connect

    Reu, Phillip L.; Miller, Timothy J.

    2008-02-01

    Digital image correlation (DIC) is a method of using digital images to calculate two-dimensional displacement and deformation or, for stereo systems, three-dimensional shape, displacement, and deformation. While almost any imaging system can be used with DIC, there are some important challenges when working with the technique in high- and ultra-high-speed applications. This article discusses three of these challenges: camera sensor technology, camera frame rate, and camera motion mitigation. Potential solutions are treated via three demonstration experiments showing the successful application of high-speed DIC for dynamic events. The application and practice of DIC at high speeds, rather than the experimental results themselves, provide the main thrust of the discussion.
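    At its core, DIC matches a small reference subset against the deformed image. An integer-pixel sketch using zero-normalised cross-correlation (practical DIC adds subpixel interpolation and subset shape functions; the parameter names are illustrative):

```python
import numpy as np

def ncc_displacement(ref, cur, center, half=5, search=8):
    """Integer-pixel displacement of the subset centred at `center`
    (row, col), found by maximising zero-normalised cross-correlation."""
    r, c = center
    sub = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    sub = (sub - sub.mean()) / sub.std()          # zero-normalise the subset
    best, best_d = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = cur[r + dr - half:r + dr + half + 1,
                      c + dc - half:c + dc + half + 1].astype(float)
            w = (win - win.mean()) / win.std()
            score = (sub * w).mean()              # 1.0 at a perfect match
            if score > best:
                best, best_d = score, (dr, dc)
    return best_d
```

    Zero-normalisation makes the match insensitive to uniform brightness and contrast changes, which matters for the sensor and lighting issues the article discusses.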

  8. High-speed AFM of human chromosomes in liquid

    NASA Astrophysics Data System (ADS)

    Picco, L. M.; Dunton, P. G.; Ulcinas, A.; Engledew, D. J.; Hoshi, O.; Ushiki, T.; Miles, M. J.

    2008-09-01

    Further developments of the previously reported high-speed contact-mode AFM are described. The technique is applied to the imaging of human chromosomes at video rate both in air and in water. These are the largest structures to have been imaged with high-speed AFM and the first imaging in liquid to be reported. A possible mechanism that allows such high-speed contact-mode imaging without significant damage to the sample is discussed in the context of the velocity dependence of the measured lateral force on the AFM tip.

  9. Color video camera capable of 1,000,000 fps with triple ultrahigh-speed image sensors

    NASA Astrophysics Data System (ADS)

    Maruyama, Hirotaka; Ohtake, Hiroshi; Hayashida, Tetsuya; Yamada, Masato; Kitamura, Kazuya; Arai, Toshiki; Tanioka, Kenkichi; Etoh, Takeharu G.; Namiki, Jun; Yoshida, Tetsuo; Maruno, Hiromasa; Kondo, Yasushi; Ozaki, Takao; Kanayama, Shigehiro

    2005-03-01

    We developed an ultrahigh-speed, high-sensitivity, color camera that captures moving images of phenomena too fast to be perceived by the human eye. The camera operates well even under restricted lighting conditions. It incorporates a special CCD device that is capable of ultrahigh-speed shots while retaining its high sensitivity. Its ultrahigh-speed shooting capability is made possible by directly connecting CCD storages, which record video images, to photodiodes of individual pixels. Its large photodiode area together with the low-noise characteristic of the CCD contributes to its high sensitivity. The camera can clearly capture events even under poor light conditions, such as during a baseball game at night. Our camera can record the very moment the bat hits the ball.

  10. Optimal camera exposure for video surveillance systems by predictive control of shutter speed, aperture, and gain

    NASA Astrophysics Data System (ADS)

    Torres, Juan; Menéndez, José Manuel

    2015-02-01

    This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing neither under- nor overexposed images. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators based on the image histogram that define its shape and position. Furthermore, the location of the objects to be inspected is usually unknown in surveillance applications, so the whole image is monitored in this approach. To control the camera settings, we defined a parameters function (Ef) that depends linearly on the shutter speed and the electronic gain, and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested on a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground. During the daytime of seven days, the algorithm was running alternately together
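    The parameters function and the decision logic described above can be written down directly. The constant of proportionality is not given in the abstract, so it is omitted, and the function names are assumptions:

```python
def exposure_parameter(shutter_time, gain, aperture_diameter):
    """Ef as described: linear in shutter time and electronic gain,
    inversely proportional to the squared lens aperture diameter
    (up to an unspecified constant)."""
    return shutter_time * gain / aperture_diameter ** 2

def choose_ef(overexposed, underexposed, ef_no_clip, ef_near_top):
    """Decision logic from the abstract: prefer underexposure, because
    dark-region noise can be removed later while saturated pixels cannot."""
    if overexposed and underexposed:   # scene exceeds the dynamic range
        return ef_no_clip              # default value that avoids clipping
    if overexposed:
        return ef_near_top             # pull the histogram down, stay near the top
    return ef_no_clip                  # push the histogram up without clipping
```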

  11. High-speed imaging on static tensile test for unidirectional CFRP

    NASA Astrophysics Data System (ADS)

    Kusano, Hideaki; Aoki, Yuichiro; Hirano, Yoshiyasu; Kondo, Yasushi; Nagao, Yosuke

    2008-11-01

    The objective of this study is to clarify the fracture mechanism of unidirectional CFRP (Carbon Fiber Reinforced Plastics) under static tensile loading. The advantages of CFRP are higher specific stiffness and strength than metallic materials. The use of CFRP is increasing not only in the aerospace and rapid transit railway industries but also in the sports, leisure and automotive industries. The tensile fracture mechanism of unidirectional CFRP has not been clarified experimentally because the fracture speed of unidirectional CFRP is quite high. We selected an intermediate-modulus, high-strength unidirectional CFRP laminate, a typical material used in the aerospace field. The fracture process under static tensile loading was captured by a conventional high-speed camera and a new type of high-speed video camera, the HPV-1. It was found that the duration of fracture is 200 microseconds or less, so images taken by a conventional camera do not have enough temporal resolution. In contrast, results obtained with the HPV-1 have higher quality, and the fracture process can be clearly observed.

  12. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    PubMed Central

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be of high quality, allowing for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive way to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  13. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    PubMed

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be of high quality, allowing for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive way to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  14. Investigating particle phase velocity in a 3D spouted bed by a novel fiber high speed photography method

    NASA Astrophysics Data System (ADS)

    Qian, Long; Lu, Yong; Zhong, Wenqi; Chen, Xi; Ren, Bing; Jin, Baosheng

    2013-07-01

    A novel fiber high-speed photography method has been developed to measure particle phase velocity in a dense gas-solid flow. The measurement system mainly comprises a fiber-optic endoscope, a high-speed video camera, a metal halide light source, and a powerful computer with large memory. The endoscope, which can be inserted into the reactor, forms motion images of the particles within the measurement window illuminated by the metal halide lamp. These images are captured by the high-speed video camera and processed through a series of digital image processing algorithms, such as calibration, denoising, enhancement, and binarization, to improve image quality. Each particle's instantaneous velocity is then determined by tracking it across consecutive frames, and the particle phase velocity is calculated statistically from the distribution of particle velocities over a time period. The system has been applied to the investigation of particle fluidization characteristics in a 3D spouted bed. The experimental results indicate that the particle fluidization behavior in the region investigated can be roughly classified into three sections by particle phase vertical velocity, and that the boundary between the first and second sections is the surface where the particle phase velocity tends to zero, in good agreement with results published in other literature.
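    The core step of the abstract above — converting a tracked particle's frame-to-frame displacement into a velocity — can be illustrated with a minimal sketch. This is not the authors' algorithm; the threshold, frame rate, and pixel-to-metre calibration values are hypothetical, and a single bright particle per frame is assumed.

```python
import numpy as np

def centroid(frame, threshold):
    """Intensity-weighted centroid (row, col) of pixels above threshold."""
    rows, cols = np.nonzero(frame > threshold)
    weights = frame[rows, cols].astype(float)
    return (np.average(rows, weights=weights),
            np.average(cols, weights=weights))

def velocity(frame_a, frame_b, fps, pixels_per_m, threshold=128):
    """Velocity (vy, vx) in m/s of one particle between consecutive frames."""
    ra, ca = centroid(frame_a, threshold)
    rb, cb = centroid(frame_b, threshold)
    dt = 1.0 / fps  # time between consecutive frames
    return ((rb - ra) / pixels_per_m / dt,
            (cb - ca) / pixels_per_m / dt)

# Synthetic example: a particle moves 2 pixels downward between frames
# captured at 1,000 fps with an assumed calibration of 10,000 pixels/m.
a = np.zeros((16, 16)); a[4, 8] = 255
b = np.zeros((16, 16)); b[6, 8] = 255
vy, vx = velocity(a, b, fps=1000, pixels_per_m=10_000)
print(vy, vx)  # → 0.2 0.0  (0.2 m/s downward)
```

    A real implementation would segment many particles per frame and match them between frames (e.g. nearest-neighbour association) before applying the same displacement-over-time calculation.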

  15. Applications for high-speed infrared imaging

    NASA Astrophysics Data System (ADS)

    Richards, Austin A.

    2005-03-01

    The phrase "high-speed imaging" is generally associated with short exposure times, fast frame rates, or both. Supersonic projectiles, for example, are often impossible to see with the unaided eye and require strobe photography to stop their apparent motion. It is often necessary to image high-speed objects in the infrared region of the spectrum, either to detect them or to measure their surface temperature. Conventional infrared cameras have time constants similar to the human eye, so they, too, are often at a loss when it comes to photographing fast-moving hot targets. Other types of targets or scenes, such as explosions, change very rapidly with time. Visualizing those changes requires an extremely high frame rate combined with short exposure times in order to slow down a dynamic event so that it can be studied and quantified. Recent advances in infrared sensor technology and computing power have pushed the envelope of what is possible to achieve with commercial IR camera systems.

  16. Nyquist sampling theorem: understanding the illusion of a spinning wheel captured with a video camera

    NASA Astrophysics Data System (ADS)

    Lévesque, Luc

    2014-11-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the sampling time is chosen judiciously, then it is possible to accurately determine the frequency of a signal varying periodically with time. This paper is of educational value as it presents the principles of sampling during data acquisition. The Nyquist sampling theorem is usually introduced only briefly in the literature, with few practical examples to convey its importance during data acquisition. Through a series of carefully chosen examples, we present data sampling from the elementary conceptual idea and lead the reader naturally to the Nyquist sampling theorem, clarifying why a signal can be interpreted incorrectly when it is undersampled.
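    The spinning-wheel illusion in the title follows directly from aliasing: a camera sampling at rate f_s cannot distinguish a true frequency f from f ± k·f_s, so the wheel appears to turn at the folded frequency. A small sketch (the 27 Hz / 24 fps figures are illustrative, not from the paper):

```python
def apparent_frequency(f_true, f_sample):
    """Frequency an undersampled periodic signal appears to have.

    Sampling at f_sample folds the true frequency onto the band
    [0, f_sample / 2]: f_true is indistinguishable from f_true +/- k * f_sample.
    """
    return abs(f_true - round(f_true / f_sample) * f_sample)

# A wheel pattern repeating at 27 Hz, filmed at 24 frames per second,
# appears to turn at only 3 Hz -- the classic wagon-wheel effect.
print(apparent_frequency(27.0, 24.0))  # → 3.0
print(apparent_frequency(24.0, 24.0))  # → 0.0  (the wheel seems frozen)
```

    Sampling faster than twice the highest frequency present (the Nyquist rate) makes the folded frequency equal the true one, removing the illusion.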

  17. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array, developed for the visualization of UHS phenomena, are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels with 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, enabling 10 Mfps full-resolution video capture of 128 consecutive frames and 20 Mfps half-resolution capture of 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were achieved simultaneously, and the improved version has been used in a commercial high-speed video camera since 2015, offering 10 Mfps with ISO 16,000 photosensitivity. Owing to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in the capture of UHS light-emission phenomena.
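    The headline numbers in this abstract are internally consistent and easy to check: 400 × 256 = 102,400 pixels per full-resolution frame, and 102,400 pixels × 10 Mfps ≈ 1.024 Tpixel/sec, which rounds to the quoted 1 Tpixel/sec. A quick arithmetic sketch:

```python
def frame_rate(pixels_per_frame, readout_pixels_per_s):
    """Maximum frame rate sustainable at a given readout throughput."""
    return readout_pixels_per_s / pixels_per_frame

full_frame = 400 * 256                      # 102,400 pixels, full resolution
readout = full_frame * 10_000_000           # ~1.024e12 pixel/s, i.e. ~1 Tpixel/s

print(frame_rate(full_frame, readout))      # → 10000000.0  (10 Mfps, full res)
print(frame_rate(full_frame // 2, readout)) # → 20000000.0  (20 Mfps, half res)

# With 128 on-chip memories per pixel, the full-resolution recording
# window is 128 frames / 10 Mfps = 12.8 microseconds.
print(128 / 10_000_000)                     # → 1.28e-05
```

    The same trade-off explains the half-resolution mode: halving the pixels per frame doubles the frame rate and, with 256 memory slots effectively available, doubles the number of consecutive frames.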

  18. Visualization of high speed liquid jet impaction on a moving surface.

    PubMed

    Guo, Yuchen; Green, Sheldon

    2015-04-17

    Two apparatuses for examining liquid jet impingement on a high-speed moving surface are described: an air cannon device (for surface speeds between 0 and 25 m/sec) and a spinning disk device (for surface speeds between 15 and 100 m/sec). The air cannon linear traverse is a pneumatically powered system designed to accelerate a metal rail surface mounted on top of a wooden projectile. A pressurized cylinder fitted with a solenoid valve rapidly releases pressurized air into the barrel, forcing the projectile down the cannon barrel. The projectile travels beneath a spray nozzle, which directs a liquid jet onto its metal upper surface, and the projectile then hits a stopping mechanism. A camera records the jet impingement, and a pressure transducer records the spray nozzle backpressure. The spinning disk setup consists of a steel disk that reaches speeds of 500 to 3,000 rpm via a variable frequency drive (VFD) motor. A spray system similar to that of the air cannon generates a liquid jet that impinges on the spinning disk, and cameras placed at several optical access points record the jet impingement. The video recordings are examined to determine whether the outcome of impingement is splash, splatter, or deposition. The apparatuses are the first to involve high-speed impingement of low-Reynolds-number liquid jets on high-speed moving surfaces. In addition to its rail industry applications, the described technique may be used for technical and industrial purposes such as steelmaking and may be relevant to high-speed 3D printing.
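    The disk's rotational speed maps onto surface speed through the tangential-velocity relation v = ωr. The abstract gives 500–3,000 rpm and 15–100 m/sec but not the disk radius; the 0.32 m radius below is an assumption chosen so the two quoted ranges line up, not a value from the paper.

```python
import math

def rim_speed(rpm, radius_m):
    """Tangential surface speed (m/s) at a given radius of a spinning disk."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity in rad/s
    return omega * radius_m

# Assumed jet-impingement radius of 0.32 m: the quoted 500-3,000 rpm
# range then spans roughly the stated 15-100 m/sec window.
print(rim_speed(500, 0.32))   # ≈ 16.8 m/s
print(rim_speed(3000, 0.32))  # ≈ 100.5 m/s
```

    The same relation explains why a spinning disk is the practical choice for the high end of the speed range: reaching 100 m/sec with a linear traverse would require a very long track, whereas a disk sustains it continuously.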

  19. High-Speed Electrochemical Imaging.

    PubMed

    Momotenko, Dmitry; Byers, Joshua C; McKelvey, Kim; Kang, Minkyung; Unwin, Patrick R

    2015-09-22

    The design, development, and application of high-speed scanning electrochemical probe microscopy is reported. The approach allows the acquisition of a series of high-resolution images (typically 1000 pixels μm⁻²) at rates approaching 4 seconds per frame, while collecting up to 8000 image pixels per second, about 1000 times faster than typical imaging speeds used to date. The focus is on scanning electrochemical cell microscopy (SECCM), but the principles and practicalities are applicable to many electrochemical imaging methods. The versatility of the high-speed scan concept is demonstrated on a variety of substrates, including imaging the electroactivity of a patterned self-assembled monolayer on gold, visualization of chemical reactions occurring at single-wall carbon nanotubes, and probing nanoscale electrocatalysts for water splitting. These studies provide movies of spatial variations of electrochemical fluxes as a function of potential and a platform for the further development of high-speed scanning with other electrochemical imaging techniques.
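    The quoted figures fix the scale of each image: at 8,000 pixels per second, a frame acquired in about 4 s holds roughly 32,000 pixels, and at 1,000 pixels μm⁻² that corresponds to an imaged area of about 32 μm² (a field of view near 5.7 μm on a side). A back-of-envelope check, using only numbers from the abstract:

```python
import math

def frame_time(pixels_per_frame, pixel_rate):
    """Seconds needed to raster one frame at a given pixel-acquisition rate."""
    return pixels_per_frame / pixel_rate

pixels = 8000 * 4            # ~32,000 pixels collected in a ~4 s frame
area_um2 = pixels / 1000     # at 1,000 pixels per square micrometre
side_um = math.sqrt(area_um2)

print(frame_time(pixels, 8000))  # → 4.0   (seconds per frame)
print(area_um2, round(side_um, 1))  # → 32.0 5.7
```

    These are illustrative numbers, since actual frame size and resolution vary by experiment; the point is that pixel rate, frame time, and pixel density jointly determine the attainable field of view.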

  20. SEAL FOR HIGH SPEED CENTRIFUGE

    DOEpatents

    Skarstrom, C.W.

    1957-12-17

    A seal is described for a high-speed centrifuge wherein the centrifugal force of rotation acts on the gasket to form a tight seal. The cylindrical rotating bowl of the centrifuge contains a closure member resting on a shoulder in the bowl wall, whose lower surface carries bands of gasket material parallel and adjacent to the cylinder wall. As the centrifuge speed increases, centrifugal force acts on the bands of gasket material, forcing them into sealing contact against the cylinder wall. This arrangement forms a simple and effective seal for high-speed centrifuges, replacing more costly methods such as welding a closure in place.