Science.gov

Sample records for high-speed video camera

  1. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been carried out, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and interviews, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; its sensor is the same as that developed for the previous camera. The frame rate is 50 million fps for triple framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, the In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is in progress, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by other groups are also briefly reviewed.

  2. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in a large range of fields in science and engineering. Although high-speed camera systems have been improved to high performance, most of their applications are only to obtain high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma and molten materials. Recent digital high-speed video imaging technology should be able to obtain such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256-level (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.

  3. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that a single-sided rotating mirror would impart, so that a camera with such a short resolution time becomes possible.

  4. HDR ¹⁹²Ir source speed measurements using a high speed video camera

    SciTech Connect

    Fonseca, Gabriel P.; Rubo, Rodrigo A.; Sales, Camila P. de; Verhaegen, Frank

    2015-01-15

    Purpose: The dose delivered with an HDR ¹⁹²Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component depends directly on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high-speed video camera was used to record the movement of a ¹⁹²Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s, with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to overdoses and underdoses that remain within 1.4% of commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
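
    The speed profile described above can in principle be recovered by tracking the source position frame by frame; a minimal sketch of that post-processing step (illustrative only, not the authors' analysis code; positions_mm and fps are assumed inputs):

    ```python
    # Minimal sketch: estimating a source speed profile from positions tracked
    # in successive high-speed video frames. `positions_mm` holds the source
    # position along the catheter (mm) for each frame; `fps` is the frame rate.
    import numpy as np

    def speed_profile(positions_mm, fps):
        """Return the instantaneous speed (cm/s) between consecutive frames."""
        positions_cm = np.asarray(positions_mm, dtype=float) / 10.0
        dt = 1.0 / fps                      # time between frames (s)
        return np.diff(positions_cm) / dt   # cm/s, length N-1

    # Example: a source advancing 0.33 mm per frame at 1000 fps -> ~33 cm/s.
    positions = np.cumsum(np.full(50, 0.33))          # mm
    print(speed_profile(positions, fps=1000).mean())  # ~33 cm/s
    ```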

  5. Introducing Contactless Blood Pressure Assessment Using a High Speed Video Camera.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2016-04-01

    Recent studies demonstrated that blood pressure (BP) can be estimated using pulse transit time (PTT). For PTT calculation, a photoplethysmogram (PPG) is usually used to detect a time lag in pulse wave propagation which is correlated with BP. Until now, PTT and PPG were registered using a set of body-worn sensors. In this study a new methodology is introduced allowing contactless registration of PTT and PPG using a high-speed camera, resulting in corresponding image-based PTT (iPTT) and image-based PPG (iPPG) generation. The iPTT value can potentially be utilized for blood pressure estimation; however, the extent of correlation between iPTT and BP is unknown. The goal of this preliminary feasibility study was to introduce the methodology for contactless generation of iPPG and iPTT and to make an initial estimation of the extent of correlation between iPTT and BP "in vivo." A short cycling exercise was used to generate BP changes in healthy adult volunteers in three consecutive visits. BP was measured by a verified BP monitor simultaneously with iPTT registration at three exercise points: rest, exercise peak, and recovery. iPPG was simultaneously registered at two body locations during the exercise using a high-speed camera at 420 frames per second. iPTT was calculated as the time lag between pulse waves obtained as two iPPGs registered from simultaneous recording of the head and palm areas. The average inter-person correlation between PTT and iPTT was 0.85 ± 0.08. The range of inter-person correlations between PTT and iPTT was from 0.70 to 0.95 (p < 0.05). The average inter-person coefficient of correlation between SBP and iPTT was -0.80 ± 0.12. The range of correlations between systolic BP and iPTT was from 0.632 to 0.960 with p < 0.05 for most of the participants. Preliminary data indicated that a high speed camera can potentially be utilized for unobtrusive contactless monitoring of abrupt blood pressure changes in a variety of settings. The initial prototype system was able to
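
    A hedged sketch of how such a time lag can be computed from the two iPPG waveforms; a cross-correlation approach is assumed here, not necessarily the authors' implementation:

    ```python
    # Assumed approach: estimate iPTT as the lag that maximizes the
    # cross-correlation between head and palm iPPG waveforms.
    import numpy as np

    def ippg_lag_seconds(ippg_head, ippg_palm, fps=420):
        """Return the lag (s) of the palm waveform relative to the head waveform."""
        a = (ippg_head - np.mean(ippg_head)) / np.std(ippg_head)
        b = (ippg_palm - np.mean(ippg_palm)) / np.std(ippg_palm)
        xcorr = np.correlate(b, a, mode="full")       # full cross-correlation
        lag_frames = np.argmax(xcorr) - (len(a) - 1)  # lag at peak correlation
        return lag_frames / fps

    # Example: a 1.2 Hz pulse arriving at the palm 20 frames (~48 ms) later.
    t = np.arange(0, 10, 1 / 420)
    head = np.sin(2 * np.pi * 1.2 * t)
    palm = np.roll(head, 20)
    print(ippg_lag_seconds(head, palm))   # ~0.048 s
    ```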

  6. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  7. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  8. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  9. High Speed Video for Airborne Instrumentation Application

    NASA Technical Reports Server (NTRS)

    Tseng, Ting; Reaves, Matthew; Mauldin, Kendall

    2006-01-01

    A flight-worthy high-speed color video system has been developed. Extensive system development and ground and environmental testing has yielded a flight-qualified High Speed Video System (HSVS). This HSVS was initially used on the F-15B #836 for the Lifting Insulating Foam Trajectory (LIFT) project.

  10. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Kondo, Yasushi

    2008-11-01

    Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from the cases, and we must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in a severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to the images taken by the camera with the lens directly coupled to the camera head. It could be confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  11. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  12. High Speed and Slow Motion: The Technology of Modern High Speed Cameras

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2011-01-01

    The enormous progress in the fields of microsystem technology, microelectronics and computer science has led to the development of powerful high speed cameras. Recently a number of such cameras became available as low cost consumer products which can also be used for the teaching of physics. The technology of high speed cameras is discussed,…

  13. High-speed multicolour photometry with CMOS cameras

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Zhilyaev, B. E.; Reshetnyk, V. M.

    2012-11-01

    We present the results of testing the commercial digital camera Nikon D90 with a CMOS sensor for high-speed photometry with a small Celestron 11'' telescope at the Peak Terskol Observatory. The CMOS sensor allows us to perform photometry in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system of CMOS sensors is close to the Johnson BVR system. The results of testing show that one can carry out photometric measurements with CMOS cameras for stars with V magnitudes up to ≃14^m with a precision of 0.01^m. Stars with V magnitudes up to ~10 can be shot at 24 frames per second in the video mode.
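
    A minimal illustration of three-filter aperture photometry straight from a Bayer mosaic; the RGGB layout and the helper below are assumptions for the sketch, not the authors' pipeline:

    ```python
    # Illustrative only: sum counts per Bayer colour channel inside a circular
    # aperture of a raw frame. An RGGB mosaic layout is assumed.
    import numpy as np

    def bayer_channel_fluxes(raw, x, y, radius):
        """Return per-channel counts inside an aperture centred at (x, y)."""
        h, w = raw.shape
        yy, xx = np.mgrid[0:h, 0:w]
        aperture = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2

        red   = (yy % 2 == 0) & (xx % 2 == 0)   # assumed RGGB layout
        green = (yy % 2) != (xx % 2)
        blue  = (yy % 2 == 1) & (xx % 2 == 1)

        return {c: raw[aperture & m].sum()
                for c, m in (("R", red), ("G", green), ("B", blue))}

    # Instrumental magnitude per channel: m = -2.5 * log10(flux) + zero_point
    ```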

  14. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s⁻¹ and Tracker Video Analysis (Tracker) software. We present empirical data for…

  15. High Speed Video Measurements of a Magneto-optical Trap

    NASA Astrophysics Data System (ADS)

    Horstman, Luke; Graber, Curtis; Erickson, Seth; Slattery, Anna; Hoyt, Chad

    2016-05-01

    We present a video method to observe the mechanical properties of a lithium magneto-optical trap. A sinusoidally amplitude-modulated laser beam perturbed a collection of trapped ⁷Li atoms and the oscillatory response was recorded with a NAC Memrecam GX-8 high speed camera at 10,000 frames per second. We characterized the trap by modeling the oscillating cold atoms as a damped, driven, harmonic oscillator. Matlab scripts tracked the atomic cloud movement and relative phase directly from the captured high speed video frames. The trap spring constant, with magnetic field gradient b_z = 36 G/cm, was measured to be (4.5 ± 0.5) × 10⁻¹⁹ N/m, which implies a trap resonant frequency of 988 ± 55 Hz. Additionally, at b_z = 27 G/cm the spring constant was measured to be (2.3 ± 0.2) × 10⁻¹⁹ N/m, which corresponds to a resonant frequency of 707 ± 30 Hz. These properties at b_z = 18 G/cm were found to be (8.8 ± 0.5) × 10⁻²⁰ N/m and 438 ± 13 Hz. NSF #1245573.
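
    The quoted spring constants and resonant frequencies are mutually consistent with a per-atom harmonic oscillator, f = (1/2π)·sqrt(k/m), with m the mass of a single ⁷Li atom; a quick numerical check (not from the paper's analysis code):

    ```python
    # Consistency check of the reported values, assuming a per-atom
    # harmonic restoring force with spring constant k.
    import math

    M_LI7 = 7.016 * 1.6605e-27            # kg, mass of one 7Li atom

    def resonant_frequency(k_newton_per_m):
        return math.sqrt(k_newton_per_m / M_LI7) / (2 * math.pi)

    for k in (4.5e-19, 2.3e-19, 8.8e-20):
        print(f"k = {k:.1e} N/m -> f = {resonant_frequency(k):.0f} Hz")
    # prints about 989 Hz, 707 Hz and 438 Hz, matching the reported values.
    ```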

  16. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains video showing the results of hypervelocity impact. One video shows a projectile impact on a Kevlar-wrapped aluminum bottle containing 3000 psi gaseous oxygen. Another video shows animations of a two-stage light gas gun.

  17. High-speed camera with internal real-time image processing

    NASA Astrophysics Data System (ADS)

    Paindavoine, M.; Mosqueron, R.; Dubois, J.; Clerc, C.; Grapin, J. C.; Tomasini, F.

    2005-08-01

    High-speed video cameras are powerful tools for investigating, for instance, the dynamics of fluids or the movements of mechanical parts in manufacturing processes. In past years, the use of CMOS sensors instead of CCDs has made possible the development of high-speed video cameras offering digital outputs, readout flexibility and lower manufacturing costs. In this field, we designed a new fast CMOS camera with a 1280 × 1024-pixel resolution at 500 fps. In order to transmit from the camera only the useful information in the fast images, we studied specific algorithms such as edge detection, wavelet analysis, image compression and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. This FPGA technology allows us to process fast images in real time.

  18. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm.

    PubMed

    Li, David D-U; Arlt, Jochen; Tyndall, David; Walker, Richard; Richardson, Justin; Stoppa, David; Charbon, Edoardo; Henderson, Robert K

    2011-09-01

    A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm complementary metal-oxide-semiconductor single-photon avalanche diode (SPAD) plus time-to-digital converter array. A simple look-up table is included to enhance the lifetime resolvability range and photon economics, making it comparable to the commonly used least-square method and maximum-likelihood estimation based software. To demonstrate its performance, a widefield microscope was adapted to accommodate the SPAD array and image different test samples. Fluorescence lifetime imaging microscopy on fluorescent beads in Rhodamine 6G at a frame rate of 50 fps is also shown. PMID:21950926
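
    A minimal software sketch of a centre-of-mass lifetime estimate; the FPGA implementation and the look-up-table correction described in the record are not reproduced here:

    ```python
    # Illustrative centre-of-mass method (CMM): for a mono-exponential decay
    # fully contained in the measurement window, the mean photon arrival time
    # after the excitation pulse equals the fluorescence lifetime.
    import numpy as np

    def cmm_lifetime(arrival_times_ns):
        """Estimate the lifetime (ns) as the mean photon arrival time."""
        t = np.asarray(arrival_times_ns, dtype=float)
        return t.mean()   # biased low if the window truncates the decay

    # Example: 10^5 photons drawn from a 4 ns decay.
    rng = np.random.default_rng(0)
    photons = rng.exponential(scale=4.0, size=100_000)   # ns
    print(cmm_lifetime(photons))                          # close to 4.0 ns
    ```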

  19. Using High-Speed Video to Examine Differential Roller Ginning of Upland Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A digital high-speed video camera was used to show what occurs as upland fiber is being pulled off of cottonseed at the ginning point on a roller gin stand. The study included a conventional ginning treatment, and a treatment that attempted to selectively remove only the longer fibers off of cotton...

  20. In a Hurry to Work with High-Speed Video at School?

    ERIC Educational Resources Information Center

    Heck, Andre; Uylings, Peter

    2010-01-01

    Casio Computer Co., Ltd., brought high-speed video to the consumer level in 2008 with the release of the EXILIM® Pro EX-F1 and the EX-FH20 digital cameras. The EX-F1 point-and-shoot camera can shoot up to 60 six-megapixel photos per second and capture movies at up to 1200 frames per second. All this, for a price of about US $1000 at the time of…

  1. High-speed TV cameras for streak tube readout

    SciTech Connect

    Yates, G.J.; Gallegos, R.A.; Holmes, V.H.; Turko, B.T.

    1991-01-01

    Two fast framing TV cameras have been characterized and compared as readout media for imaging of 40-mm-diameter streak tube (P-11) phosphor screens. One camera is based upon a Focus-Projection-Scan (FPS) high-speed electrostatically deflected vidicon with a 30-mm-diameter PbO target. The other uses an interline transfer charge-coupled device (CCD) with an 8.8 × 11.4 mm rectangular Si target. The field-of-view (FOV), resolution, responsivity, and dynamic range provided by both cameras when exposed to short duration (∼10 μs full width at half maximum (FWHM)) transient illumination followed by a single field readout period of <3 ms are presented. 11 refs., 8 figs., 3 tabs.

  2. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes in real time the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and a personal computer would perform the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
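
    A hedged sketch of the dip-finding idea (not the flight hardware algorithm): locate the strongest negative peak in the line-scan intensity profile after removing a smoothed background.

    ```python
    # Illustrative only: find the pixel of the deepest dip in a 1-D profile.
    import numpy as np

    def shock_pixel(profile, baseline_window=50):
        """Return the pixel index of the strongest negative peak in `profile`."""
        p = np.asarray(profile, dtype=float)
        baseline = np.convolve(p, np.ones(baseline_window) / baseline_window,
                               mode="same")       # smoothed local background
        return int(np.argmin(p - baseline))       # deepest dip below background

    # Example: a dark notch centred at pixel 612 of a 1024-pixel laser-sheet image.
    x = np.arange(1024)
    line = 200.0 - 80.0 * np.exp(-0.5 * ((x - 612) / 5.0) ** 2)
    print(shock_pixel(line))   # 612
    ```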

  3. Parallel image compression circuit for high-speed cameras

    NASA Astrophysics Data System (ADS)

    Nishikawa, Yukinari; Kawahito, Shoji; Inoue, Toru

    2005-02-01

    In this paper, we propose 32 parallel image compression circuits for high-speed cameras. The proposed compression circuits are based on a 4 x 4-point 2-dimensional DCT using a distributed arithmetic (DA) method, zigzag scanning of 4 blocks of the 2-D DCT coefficients and 1-dimensional Huffman coding. The compression engine is designed with FPGAs, and its hardware complexity is compared with the JPEG algorithm. It is found that the proposed compression circuits require much less hardware, leading to a compact high-speed implementation of the image compression circuits using a parallel processing architecture. The PSNR of the reconstructed image using the proposed encoding method is better than that of JPEG in the low-compression-ratio region.
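
    For reference, a plain software version of a 4 x 4-point 2-D DCT; the FPGA distributed-arithmetic implementation itself is not reproduced here:

    ```python
    # Reference 4 x 4 two-dimensional DCT applied to one image block.
    import numpy as np

    def dct4_matrix():
        """Orthonormal 4-point DCT-II basis matrix (rows are basis vectors)."""
        n, k = np.meshgrid(np.arange(4), np.arange(4))
        c = np.cos(np.pi * (2 * n + 1) * k / 8.0)
        c[0, :] *= 1.0 / np.sqrt(2.0)
        return c * np.sqrt(2.0 / 4.0)

    def dct2_4x4(block):
        """2-D DCT of one 4 x 4 block (transform rows, then columns)."""
        c = dct4_matrix()
        return c @ block @ c.T

    # Example: a flat 4 x 4 block concentrates all energy in the DC coefficient.
    block = np.full((4, 4), 100.0)
    print(np.round(dct2_4x4(block), 3))   # 400 at [0, 0], zeros elsewhere
    ```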

  4. High speed web printing inspection with multiple linear cameras

    NASA Astrophysics Data System (ADS)

    Shi, Hui; Yu, Wenyong

    2011-12-01

    Purpose: To detect defects during the high-speed web printing process, such as smudges, doctor streaks, pin holes, character misprints, foreign matter, hazing and wrinkles, which are the main factors affecting the quality of printed products. Methods: A novel machine vision system is used to detect the defects. This system consists of distributed data processing with multiple linear cameras, an effective anti-blooming illumination design and a fast image processing algorithm with blob searching. Pattern matching adapted to paper tension and web snaking is also emphasized. Results: Experimental results verify the speed, reliability and accuracy of the proposed system, by which most of the main defects are inspected in real time at a speed of 300 m/min. Conclusions: High-speed quality inspection of large-size webs requires multiple linear cameras forming a distributed data processing system. The material characteristics of the printed products should also be considered when designing the optical structure, so that tiny web defects can be inspected with suitable angles of illumination.

  5. High Speed Video Applications In The Pharmaceutical Industry

    NASA Astrophysics Data System (ADS)

    Stapley, David

    1985-02-01

    The pursuit of quality is essential in the development and production of drugs. The pursuit of excellence is relentless, a never-ending search. In the pharmaceutical industry, we all know and apply wide-ranging techniques to assure quality production. We all know that in reality none of these techniques is perfect for all situations. We have all experienced the damaged foil, blister or tube, the missing leaflet, the 'hard to read' batch code. We are all aware of the need to supplement the traditional techniques of fault finding. This paper shows how high speed video systems can be applied to fully automated filling and packaging operations as a tool to aid the company's drive for high quality and productivity. The range of products involved totals some 350 in approximately 3,000 pack variants, encompassing creams, ointments, lotions, capsules, tablets, parenteral and sterile antibiotics. Pharmaceutical production demands diligence at all stages, with optimum use of the techniques offered by the latest technology. Figure 1 shows typical stages of pharmaceutical production in which quality must be assured, and highlights those stages where the use of high speed video systems has proved of value to date. The use of high speed video systems begins with the very first use of machine and materials: commissioning and validation (the term used for determining that a process is capable of consistently producing the requisite quality), and continues to support in-process monitoring throughout the life of the plant. The activity of validation in the packaging environment is particularly in need of a tool to see the nature of high speed faults, no matter how infrequently they occur, so that informed changes can be made precisely and rapidly. The prime use of this tool is to ensure that machines are less sensitive to minor variations in component characteristics.

  6. In a Hurry to Work with High-Speed Video at School?

    NASA Astrophysics Data System (ADS)

    Heck, André; Uylings, Peter

    2010-03-01

    Casio Computer Co., Ltd., brought high-speed video to the consumer level in 2008 with the release of the EXILIM® Pro EX-F1 and the EX-FH20 digital cameras. The EX-F1 point-and-shoot camera can shoot up to 60 six-megapixel photos per second and capture movies at up to 1200 frames per second. All this, for a price of about US $1000 at the time of introduction and with an ease of operation that allows high school students to be working with the camera within 10 minutes. The EX-FH20 is a more compact, more user-friendly, and cheaper high-speed camera that can still shoot up to 40 photos per second and capture movies at up to 1000 fps. Yearly, new camera models appear, and prices have gone down to about US $250-300 for a decent high-speed camera. For more details we refer to Casio's website.

  7. Characterization of high-speed video systems: tests and analyses

    NASA Astrophysics Data System (ADS)

    Carlton, Patrick N.; Chenette, Eugene R.; Rowe, W. J.; Snyder, Donald R.

    1992-01-01

    The current method of munitions systems testing uses film cameras to record airborne events such as store separation. After film exposure, much time is spent in developing the film and analyzing the images. If the analysis uses digital methods, additional time is required to digitize the images preparatory to the analysis phase. Because airborne equipment parameters such as exposure time cannot be adjusted in flight, images often suffer as a result of changing lighting conditions. Image degradation from other sources may occur in the film development process, and during digitizing. Advances in the design of charge-coupled device (CCD) cameras and mass storage devices, coupled with sophisticated data compression and transmission systems, provide the means to overcome these shortcomings. A system can be developed where the image sensor provides an analog electronic signal and, consequently, images can be digitized and stored using digital mass storage devices or transmitted to a ground station for immediate viewing and analysis. All electronic imaging and processing offers the potential for improved data quality, rapid response time and closed loop operation. This paper examines high speed, high resolution imaging system design issues assuming an electronic image sensor will be used. Experimental data and analyses are presented on the resolution capability of current film and digital image processing technology. Electrical power dissipation in a high speed, high resolution CCD array is also analyzed.

  8. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    ERIC Educational Resources Information Center

    Lee, Victor R.

    2015-01-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…

  9. High-speed optical shutter coupled to fast-readout CCD camera

    NASA Astrophysics Data System (ADS)

    Yates, George J.; Pena, Claudine R.; McDonald, Thomas E., Jr.; Gallegos, Robert A.; Numkena, Dustin M.; Turko, Bojan T.; Ziska, George; Millaud, Jacques E.; Diaz, Rick; Buckley, John; Anthony, Glen; Araki, Takae; Larson, Eric D.

    1999-04-01

    A high frame rate optically shuttered CCD camera for radiometric imaging of transient optical phenomena has been designed and several prototypes fabricated, which are now in the evaluation phase. The camera design incorporates stripline-geometry image intensifiers for ultrafast image shutters capable of 200 ps exposures. The intensifiers are fiber-optically coupled to a multiport CCD capable of 75 MHz pixel clocking to achieve a 4 kHz frame rate for 512 × 512 pixels from simultaneous readout of 16 individual segments of the CCD array. The intensifier, a Philips XX1412MH/E03, is generically a Generation II proximity-focused microchannel plate intensifier (MCPII) redesigned for high-speed gating by Los Alamos National Laboratory and manufactured by Philips Components. The CCD is a Reticon HSO512 split-storage device with bidirectional vertical readout architecture. The camera mainframe is designed utilizing a multilayer motherboard for transporting CCD video signals and clocks via embedded stripline buses designed for 100 MHz operation. The MCPII gate duration and gain variables are controlled and measured in real time and updated for data logging each frame, with 10-bit resolution, selectable either locally or by computer. The camera provides both analog and 10-bit digital video. The camera's architecture, salient design characteristics, and current test data depicting resolution, dynamic range, shutter sequences, and image reconstruction will be presented and discussed.

  10. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  11. High-speed video recording with the TDAS

    NASA Astrophysics Data System (ADS)

    Liu, Daniel W.; Griesheimer, Eric D.; Kesler, Lynn O.

    1990-08-01

    The Tracker Data Acquisition System (TDAS) is a system architecture for a high-speed data recording and analysis system. The device utilizes dual Direct Memory Access (DMA), parallel Small Computer System Interface (SCSI) channels and multiple SCSI hard drives. Video-rate data capture and storage is accomplished on 16-bit digital data at video rates up to 15 MHz. The average data rate is approximately 1 Megabyte per second to the current hard disk drives, with instantaneous rates up to 5 Megabytes per second. A message protocol enables symbology and frame data to be stored concurrently with the windowed image data. Dual parallel image buffers store 512 Kilobytes of raw image data for each frame and pass windowed data to the storage drives via the SCSI interfaces. Microcomputer control of the DMA, Counter Input/Output, Serial Communications Controller and FIFOs is accomplished with a 16-bit processor which efficiently stores the video and ancillary data. Off-line storage is accomplished on 60 Megabyte streaming tape units for image and data dumps. Current applications include real-time multimode tracker performance recording as well as statistical post-processing of system parameters. Data retrieval is driven by a separate microcomputer, providing laboratory frame-by-frame analysis of the video images and symbology. The TDAS can support 80 Megabytes of on-line storage presently, but can be simply expanded to 400 Megabytes. Phase 2 of the TDAS will include real-time playback of video images to recreate recorded scenarios. This paper describes the system architecture and implementation of the TDAS, with current applications.

  12. Motion Analysis Of An Object Onto Fine Plastic Beads Using High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Sato, Minoru

    2010-07-01

    Fine spherical polystyrene beads (NaRiKa, D20-1406-01, industrial materials of styrene foam) are useful for frictionless demonstrations of dynamics and kinematics. Sawamoto et al. have developed a method of demonstration using the plastic beads on a glass board. These fine beads (average diameter 280 μm, standard deviation 56 μm) function as ball bearings to reduce the friction between a moving object, a glass Petri dish, and the surface of the glass board. The beads, which are charged, stick onto the glass board by static electricity and arrange themselves at intervals. The movement characteristics of a Petri dish moving on the fine polystyrene beads adhering to the glass board are shown by video analysis using a USB camera and a high-speed camera (CASIO EX-F1). The movement of the Petri dish on the fine polystyrene beads on the glass board shows good linearity, but the friction of the beads is not negligible. The high-speed video showed that only a small number of beads under the bottom of the Petri dish actually supported it. The coefficient of friction caused by the beads that supported the Petri dish is about 0.14.

  13. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  14. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The code can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high speed. It is an automatic conversion scheme, hence it directly gives the 3D video output without human intervention and, with the above-mentioned features, becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
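
    A minimal sketch of the depth-assignment idea (an assumed reconstruction, not the paper's code): each foreground object receives a depth derived from the image row of its bottom-most pixel, objects whose base sits lower in the frame being treated as nearer.

    ```python
    # Illustrative depth-map construction from a foreground mask.
    import numpy as np
    from scipy import ndimage

    def depth_map_from_foreground(mask):
        """mask: boolean (H, W) foreground mask -> depth map in [0, 1] (0 = near)."""
        h, w = mask.shape
        depth = np.ones((h, w), dtype=float)       # background treated as far
        labels, n_objects = ndimage.label(mask)    # connected foreground objects
        for i in range(1, n_objects + 1):
            rows, cols = np.nonzero(labels == i)
            bottom_row = rows.max()                # bottom-most pixel of the object
            depth[rows, cols] = 1.0 - bottom_row / (h - 1)
        return depth

    # A left/right view pair can then be synthesized by shifting each pixel
    # horizontally by a disparity proportional to (1 - depth).
    ```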

  15. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by distances of, on average, 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to view a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, which is set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G in the stationary sensors, and model AF-S ED 24 mm 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four RAMMER manual operation days in the campaigns of 2012 and 2013. On Feb. 18th the data set is composed of 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate some imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  16. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an MHub™ synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2

  17. Using a High-Speed Camera to Measure the Speed of Sound

    NASA Astrophysics Data System (ADS)

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the Exilim can capture 1000 frames a second, it provides an easy way for students to calculate the speed of sound by counting video frames from a sound-triggered event they can see. For our experiment, we popped a balloon at a measured distance from a sound-activated high-output LED while recording high-speed video for later analysis. The beauty of using this as the method for calculating the speed of sound is that the software required for frame-by-frame analysis is free and the idea itself (slow motion) is simple. This allows even middle school students to measure the speed of sound with assistance, but the ability to independently verify such a basic result is invaluable for high school or college students.
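
    A worked example of the frame-counting estimate, with illustrative numbers rather than the authors' data:

    ```python
    # Sound covers the measured distance in the time spanned by the counted frames.
    FPS = 1000                 # high-speed movie rate (frames per second)
    distance_m = 6.86          # balloon-to-LED separation, measured with a tape
    frames_counted = 20        # frames between the pop and the LED lighting

    speed_of_sound = distance_m / (frames_counted / FPS)
    print(f"{speed_of_sound:.0f} m/s")   # about 343 m/s at room temperature
    ```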

  18. Head-mountable high speed camera for optical neural recording

    PubMed Central

    Park, Joon Hyuk; Platisa, Jelena; Verhagen, Justus V.; Gautam, Shree H.; Osman, Ahmad; Kim, Dongsoo; Pieribone, Vincent A.; Culurciello, Eugenio

    2011-01-01

    We report a head-mountable CMOS camera for recording rapid neuronal activity in freely-moving rodents using fluorescent activity reporters. This small, lightweight camera is capable of detecting small changes in light intensity (0.2% ΔI/I) at 500 fps. The camera has a resolution of 32 × 32, sensitivity of 0.62 V/lux·s, conversion gain of 0.52 μV/e- and well capacity of 2.1 Me-. The camera, containing intensity offset subtraction circuitry within the imaging chip, is part of a miniaturized epi-fluorescent microscope and represents a first generation, mobile scientific-grade, physiology imaging camera. PMID:21763348

  19. Design and application of a digital array high-speed camera system

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Yao, Xuefeng; Ma, Yinji; Yuan, Yanan

    2016-03-01

    In this paper, a digital array high-speed camera system is designed and applied in dynamic fracture experiment. First, the design scheme for 3*3 array digital high-speed camera system is presented, including 3*3 array light emitting diode (LED) light source unit, 3*3 array charge coupled device (CCD) camera unit, timing delay control unit, optical imaging unit and impact loading unit. Second, the influence of geometric optical parameters on optical parallax is analyzed based on the geometric optical imaging mechanism. Finally, combining the method of dynamic caustics with the digital high-speed camera system, the dynamic fracture behavior of crack initiation and propagation in PMMA specimen under low-speed impact is investigated to verify the feasibility of the high-speed camera system.

  20. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by the incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  1. Development of High Speed Digital Camera: EXILIM EX-F1

    NASA Astrophysics Data System (ADS)

    Nojima, Osamu

    The EX-F1 is a high-speed digital camera featuring a revolutionary improvement in burst shooting speed that is expected to create entirely new markets. This model incorporates a high-speed CMOS sensor and a high-speed LSI processor. With this model, CASIO has achieved an ultra-high-speed 60 frames per second (fps) burst rate for still images, together with 1,200 fps high-speed movie capture that records movements which cannot even be seen by human eyes. Moreover, this model can record movies in full high definition. After launching it into the market, it received many favorable appraisals as an innovative camera. We introduce the concept, features and technologies of the EX-F1.

  2. Automated High-Speed Video Detection of Small-Scale Explosives Testing

    NASA Astrophysics Data System (ADS)

    Ford, Robert; Guymon, Clint

    2013-06-01

    Small-scale explosives sensitivity test data is used to evaluate hazards of processing, handling, transportation, and storage of energetic materials. Accurate test data is critical to implementation of engineering and administrative controls for personnel safety and asset protection. Operator mischaracterization of reactions during testing contributes to either excessive or inadequate safety protocols. Use of equipment and associated algorithms to aid the operator in reaction determination can significantly reduce operator error. Safety Management Services, Inc. has developed an algorithm to evaluate high-speed video images of sparks from an ESD (Electrostatic Discharge) machine to automatically determine whether or not a reaction has taken place. The algorithm with the high-speed camera is termed GoDetect (patent pending). An operator assisted version for friction and impact testing has also been developed where software is used to quickly process and store video of sensitivity testing. We have used this method for sensitivity testing with multiple pieces of equipment. We present the fundamentals of GoDetect and compare it to other methods used for reaction detection.
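
    GoDetect itself is patent pending and its algorithm is not published in this record; the sketch below is a generic stand-in showing the kind of brightness-threshold test such automated reaction detection might apply:

    ```python
    # Illustrative only: flag a "reaction" when the integrated frame brightness
    # of a test video exceeds the spark-only (no-go) baseline distribution.
    import numpy as np

    def reaction_detected(test_frames, baseline_frames, n_sigma=5.0):
        """test_frames, baseline_frames: arrays of shape (N, H, W) of grey images."""
        test = np.array([f.sum() for f in test_frames], dtype=float)
        base = np.array([f.sum() for f in baseline_frames], dtype=float)
        threshold = base.mean() + n_sigma * base.std()   # spark-only brightness level
        return bool((test > threshold).any())
    ```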

  3. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for the evaluation of the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among the cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between the cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between the solutions were not greater than 1.8 km.
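
    A minimal sketch of a two-bearing visual triangulation (illustrative geometry only; the RAMMER camera calibration and geodetic conversions are not reproduced):

    ```python
    # Each camera contributes a ground position and an azimuth to the flash;
    # the position estimate is the intersection of the two bearing lines.
    import numpy as np

    def triangulate(p1, az1_deg, p2, az2_deg):
        """p1, p2: (x, y) camera positions in km; azimuths in degrees from north."""
        d1 = np.array([np.sin(np.radians(az1_deg)), np.cos(np.radians(az1_deg))])
        d2 = np.array([np.sin(np.radians(az2_deg)), np.cos(np.radians(az2_deg))])
        a = np.column_stack((d1, -d2))             # solve p1 + t1*d1 = p2 + t2*d2
        t1, _ = np.linalg.solve(a, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + t1 * d1

    # Example: two cameras 13 km apart sighting the same return stroke.
    print(triangulate((0.0, 0.0), 45.0, (13.0, 0.0), 315.0))   # ~[6.5, 6.5] km
    ```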

  4. New measuring concepts using integrated online analysis of color and monochrome digital high-speed camera sequences

    NASA Astrophysics Data System (ADS)

    Renz, Harald

    1997-05-01

    High-speed sequences allow a subjective assessment of very fast processes and serve as an important basis for the quantitative analysis of movements. Computer systems help to acquire, handle, display and store digital image sequences as well as to perform measurement tasks automatically. High-speed cameras have been used for several years for safety tests, material testing and production optimization. To reach the very high speed of 1000 or more images per second, mainly 16 mm film cameras have been used, which could provide an excellent image resolution and the required time resolution. But up to now, most results have only been judged by viewing. For some special applications, such as safety tests using crash or high-g sled tests in the automobile industry, image analysis techniques have been used to also measure the characteristics of moving objects inside the images. High-speed films, shot during the short impact, allow judgement of the dynamic scene. Additionally they serve as an important basis for the quantitative analysis of the very fast movements. Thus exact values of the velocity and acceleration to which the dummies or vehicles are exposed can be derived. For analysis of the sequences, the positions of signalized points (mostly markers fixed by the test engineers before a test) have to be measured frame by frame. The trajectories show the temporal sequence of the test objects and are the basis for calibrated diagrams of distance, velocity and acceleration. Today, 16 mm film cameras are being replaced more and more by electronic high-speed cameras. The development of high-speed recording systems is very far advanced, and the prices of these systems are increasingly comparable to those of traditional film cameras. The resolution has also been greatly increased. The new cameras are 'crashproof' and can be used at similar sizes for tasks similar to those of the 16 mm film cameras. High-speed video cameras now offer an easy setup and direct access to
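
    A hedged sketch of the marker post-processing step described above, deriving velocity and acceleration traces from frame-by-frame positions (assumed inputs, not the commercial analysis software):

    ```python
    # Finite-difference kinematics from tracked marker positions.
    import numpy as np

    def kinematics(positions_m, fps):
        """positions_m: (N, 2) marker positions in metres -> (velocity, acceleration)."""
        dt = 1.0 / fps
        velocity = np.gradient(positions_m, dt, axis=0)       # m/s
        acceleration = np.gradient(velocity, dt, axis=0)      # m/s^2
        return velocity, acceleration

    # Example: uniform 10 m/s motion along x, filmed at 1000 frames per second.
    t = np.arange(0, 0.1, 0.001)
    pos = np.column_stack((10.0 * t, np.zeros_like(t)))
    v, a = kinematics(pos, fps=1000)
    print(v[0], a[0])   # about [10, 0] m/s and [0, 0] m/s^2
    ```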

  5. Color high-speed video stroboscope system for inspection of human larynx

    NASA Astrophysics Data System (ADS)

    Stasicki, Boleslaw; Meier, G. E. A.

    2001-04-01

    The videostroboscopy of the larynx has become a powerful tool for the study of vocal physiology, the assessment of fold abnormalities, motion impairments and functional disorders, as well as for the early diagnosis of diseases like cancer and pathologies like nodules, carcinoma, polyps and cysts. Since the vocal folds vibrate in the range of 100 Hz up to 1 kHz, the video stroboscope allows physicians to find otherwise undetectable problems. The color information is essential for the physician in the diagnosis, e.g., of early cancer stages. A previously presented 'general purpose' monochrome high-speed video stroboscope has also been tested for the inspection of the human larynx. Good results have encouraged the authors to develop a medical color version. In contrast to conventional stroboscopes, the system does not utilize pulsed light for object illumination. Instead, a special asynchronously shuttered video camera triggered by the oscillating object is used. The apparatus, including a specially developed digital phase shifter, provides stop-phase and slow-motion observation in real time with simultaneous recording of the periodically moving objects. The desired position of the vocal folds, or their virtual slowed-down vibration speed, does not depend on voice pitch changes. Sequences of hundreds of high-resolution color frames can be stored on the hard disk in standard graphic formats. Afterwards they can be played back frame by frame or as a video clip, evaluated, exported, printed out and transmitted via computer networks.
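
    A minimal sketch of the stroboscopic timing idea (illustrative only; the actual digital phase shifter is dedicated hardware): exposing once per vocal-fold period at a slowly advancing phase yields stop-phase or slow-motion sequences.

    ```python
    # Exposure times for stop-phase (phase_step_deg = 0) or slow-motion viewing
    # of an oscillation at f0_hz, one exposure per period.
    def trigger_times(f0_hz, phase_step_deg, n_frames):
        period = 1.0 / f0_hz
        return [n * period * (1.0 + phase_step_deg / 360.0) for n in range(n_frames)]

    # Example: a 200 Hz voice sampled with a 2 degree advance per frame appears
    # to move through one full vibration cycle every 180 frames.
    print(trigger_times(200.0, 2.0, 4))   # exposures about 5.028 ms apart
    ```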

  6. High Speed Intensified Video Observations of TLEs in Support of PhOCAL

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.; Nelson, Thomas E.; Cummer, Steven A.; Lang, Timothy; Miller, Steven; Beavis, Nick; Yue, Jia; Samaras, Tim; Warner, Tom A.

    2013-01-01

    The third observing season of PhOCAL (Physical Origins of Coupling to the upper Atmosphere by Lightning) was conducted over the U.S. High Plains during the late spring and summer of 2013. The goal was to capture, using an intensified high-speed camera, a transient luminous event (TLE), especially a sprite, as well as its parent cloud-to-ground (SP+CG) lightning discharge, preferably within the domain of a 3-D lightning mapping array (LMA). The co-capture of a sprite and its SP+CG was achieved within useful range of an interferometer operating near Rapid City. Other high-speed sprite video sequences were captured above the West Texas LMA. On several occasions the large mesoscale convective complexes (MCSs) producing the TLE-class lightning were also generating vertically propagating convectively generated gravity waves (CGGWs) at the mesopause, which were easily visible using NIR-sensitive color cameras. These were captured concurrently with sprites. These observations were follow-ons to a case on 15 April 2012 in which CGGWs were also imaged by the new Day/Night Band on the Suomi NPP satellite system. The relationship between the CGGWs and sprite initiation is being investigated. The past year was notable for a large number of elve+halo+sprite sequences generated by the same parent CG, and on several occasions there appeared to be prominent banded modulations of the elves' luminosity imaged at >3000 ips. These stripes appear coincident with the banded CGGW structure, and presumably its density variations. Several elves and a sprite from negative CGs were also noted. New color imaging systems have been tested and found capable of capturing sprites. Two cases of sprites with an aurora as a backdrop were also recorded. High-speed imaging was also provided in support of the UPLIGHTS program near Rapid City, SD, and the USAFA SPRITES II airborne campaign over the Great Plains.

  7. Investigation of a Plasma Ball using a High Speed Camera

    NASA Astrophysics Data System (ADS)

    Laird, James; Zweben, Stewart; Raitses, Yevgeny; Zwicker, Andrew; Kaganovich, Igor

    2008-11-01

    The physics of how a plasma ball works is not clearly understood. A plasma ball is a commercial ``toy'' in which a center electrode is charged to a high voltage and lightning-like discharges fill the ball with many plasma filaments. The ball uses high voltage applied on the center electrode (˜5 kV) which is covered with glass and capacitively coupled to the plasma filaments. This voltage oscillates at a frequency of ˜26 kHz. A Nebula plasma ball from Edmund Scientific was filmed with a Phantom v7.3 camera, which can operate at speeds up to 150,000 frames per second (fps) with a limit of >=2 μsec exposure per frame. At 100,000 fps the filaments were only visible for ˜5 μsec every ˜40 μsec. When the plasma ball is first switched on, the filaments formed only after ˜800 μsec and initially had a much larger diameter with more chaotic behavior than when the ball reached its final plasma filament state at ˜30 msec. Measurements are also being made of the final filament diameter, the speed of the filament propagation, and the effect of thermal gradients on the filament density. An attempt will be made to explain these results from plasma theory and movies of these filaments will be shown. Possible theoretical models include streamer-like formation, thermal condensation instability, and dielectric barrier discharge instability.

  8. High-Speed Color Video System For Data Acquisition At 200 Fields Per Second

    NASA Astrophysics Data System (ADS)

    Holzapfel, C.

    1982-02-01

    Nac Incorporated has recently introduced a new high-speed color video system which employs a standard VHS color video cassette. Playback can be accomplished either on the HSV-200 or on a standard VHS video recorder/playback unit, such as those manufactured by JVC or Panasonic.

  9. High-Speed Video Analysis in a Conceptual Physics Class

    ERIC Educational Resources Information Center

    Desbien, Dwain M.

    2011-01-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  10. A novel multichannel nonintensified ultra-high-speed camera using multiwavelength illumination

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Madhavan, Vis

    2006-08-01

    Multi-channel gated-intensified cameras are commonly used for capturing images at ultra-high frame rates. However, the image intensifier reduces the image resolution to such an extent that the images are often unsuitable for applications requiring high quality images, such as digital image correlation. We report on the development of a new type of non-intensified multi-channel camera system that permits recording of image sequences at ultra-high frame rates at the native resolution afforded by the imaging optics and the cameras used. This camera system is based upon the use of short duration light pulses of different wavelengths for illumination of the target and the use of wavelength selective elements in the imaging system to route each particular wavelength of light to a particular camera. A prototype of this camera system comprising four dual-frame cameras synchronized with four dual-cavity lasers producing laser pulses of four different wavelengths is described. The camera is built around a stereo microscope such that it can capture image sequences usable for 2D or 3D digital image correlation. The camera described herein is capable of capturing images at frame rates exceeding 100 MHz. The camera was used for capturing microscopic images of the chip-workpiece interface area during high speed machining. Digital image correlation was performed on the obtained images to map the shear strain rate in the primary-shear-zone during high speed machining.

  11. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…
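
    One simple way to extract a sound speed from high-speed footage (an illustrative arrangement with assumed numbers, not necessarily the setup used in the article) is to film an impulsive source together with an indicator of the sound's arrival a known distance away, count the frames separating the two events, and divide:

        # Illustrative calculation: speed of sound from a frame count
        fps = 1000.0            # camera frame rate (frames/s), assumed
        distance_m = 3.43       # source-to-indicator distance (m), assumed
        frames_elapsed = 10     # frames between the event and the sound's arrival

        travel_time_s = frames_elapsed / fps
        speed = distance_m / travel_time_s
        print(f"speed of sound ~ {speed:.0f} m/s")   # ~343 m/s for these numbers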

  12. Observation of Penetration ``Track'' Formation in Silica Aerogel by High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Okudaira, K.; Hasegawa, S.; Onose, N.; Yano, H.; Tabata, M.; Sugita, S.; Tsuchiyama, A.; Yamagishi, A.; Kawai, H.

    2012-05-01

    In this study, the formation of penetration tracks in aerogel was observed and recorded by a high-speed camera. The excavation process of a single so-called carrot track, made by a 500-micron alumina grain, was observed.

  13. The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop

    ERIC Educational Resources Information Center

    Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye

    2012-01-01

    We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…

  14. High-speed video recording system using multiple CCD imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Clements, Reginald M.

    1995-05-01

    This paper describes a fully solid state high speed video recording system. Its principle of operation is based on the use of several independent CCD imagers and an array of liquid crystal light valves that control which imager receives the light from the subject. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording: exposure timing, image digitization and storage, and sequential playback onto a standard video monitor. The system is capable of recording full screen black and white images with spatial resolution similar to that of standard television, at rates of about 10,000 images per second in pulsed illumination mode. We have designed and built two optical configurations for the imager multiplexing system. The first one involves permanently splitting the subject light into multiple channels and placing a liquid crystal shutter in front of each imager. A prototype with three CCD imagers and shutters based on this configuration has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. The second configuration is more light-efficient in that it routes the entire subject light to each individual imager in sequence by using the liquid crystal cells as selectable binary switches. Despite some operational limitations, this method offers a solution when the available light, if subdivided among all the imagers, would not allow a sufficiently short exposure time.

  15. High speed video analysis study of elastic and inelastic collisions

    NASA Astrophysics Data System (ADS)

    Baker, Andrew; Beckey, Jacob; Aravind, Vasudeva; Clarion Team

    We study elastic and inelastic collisions with high-frame-rate video capture to examine the process of deformation and other energy transformations during a collision. Snapshots are acquired before and after the collision, and the dynamics of the collision are analyzed using the Tracker software. By observing the rapid changes (over a few milliseconds) and slower changes (over a few seconds) in momentum and kinetic energy during the collision, we study the loss of momentum and kinetic energy over time. Using these data, it could be possible to design experiments that reduce the errors involved, helping students build better and more robust models of the physical world. We thank the Clarion University undergraduate student grant for financial support of this project.
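
    A short sketch of the kind of post-processing such tracked positions allow (illustrative only; the time and position values, masses and one-dimensional setup are assumed): compute velocities by finite differences, then momentum and kinetic energy before and after the collision.

        import numpy as np

        def velocity(t, x):
            """Central-difference velocity from time and position arrays."""
            return np.gradient(np.asarray(x), np.asarray(t))

        # Assumed tracker-style export: time (s) and x positions (m) of two carts
        t = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])
        x1 = np.array([0.000, 0.002, 0.004, 0.005, 0.005, 0.005])   # cart 1
        x2 = np.array([0.050, 0.050, 0.050, 0.051, 0.053, 0.055])   # cart 2
        m1, m2 = 0.25, 0.25                                         # kg, assumed

        v1, v2 = velocity(t, x1), velocity(t, x2)
        p = m1 * v1 + m2 * v2                     # total momentum vs time
        ke = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2  # total kinetic energy vs time
        print(p)
        print(ke)                                 # a drop in KE reveals inelasticity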

  16. Perfect Optical Compensator With 1:1 Shutter Ratio Used For High Speed Camera

    NASA Astrophysics Data System (ADS)

    Zhihong, Rong

    1983-03-01

    An optical compensator for high-speed cameras is described, together with the method of compensation, an analysis of the imaging quality and experimental results. The compensator consists of pairs of parallel mirrors and can perform perfect compensation even at a 1:1 shutter ratio. Using this compensator, a high-speed camera can be operated with no shutter and can achieve the same image sharpness as an intermittent camera. The advantages of the compensator are as follows: (1) the aberration correction of the objective is not degraded during compensation; (2) there is no displacement or defocusing between the scanning image and the film at the frame center during compensation, and increasing the exposure angle does not reduce the resolving power; (3) the compensator can also be used in a projector, in place of the intermittent mechanism, for continuous (non-intermittent) projection without a shutter.

  17. Digital synchroballistic schlieren camera for high-speed photography of bullets and rocket sleds

    NASA Astrophysics Data System (ADS)

    Buckner, Benjamin D.; L'Esperance, Drew

    2013-08-01

    A high-speed digital streak camera designed for simultaneous high-resolution color photography and focusing schlieren imaging is described. The camera uses a computer-controlled galvanometer scanner to achieve synchroballistic imaging through a narrow slit. Full color 20 megapixel images of a rocket sled moving at 480 m/s and of projectiles fired at around 400 m/s were captured, with high-resolution schlieren imaging in the latter cases, using conventional photographic flash illumination. The streak camera can achieve a line rate for streak imaging of up to 2.4 million lines/s.

  18. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, technologies associated with biomechanics research, such as high-speed cameras that produce high-quality slow-motion video, can be deployed in ways that support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  19. Dynamics at the Holuhraun eruption based on high speed video data analysis

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2016-04-01

    The 2014/2015 Holuhraun eruption was a gas-rich fissure eruption with high fountains. The magma was transported by a horizontal dyke over a distance of 45 km. On the first day the fountains occurred over a distance of 1.5 km and focused at isolated vents during the following days. Based on video analysis of the fountains we obtained a detailed view of the velocities of the eruption, the propagation path of the magma, communication between vents and complexities in the magma paths. We collected videos of the Holuhraun eruption with 2 high-speed cameras and one DSLR camera from 31 August to 4 September 2014, for several hours each day. The fountains at adjacent vents visually appeared to be related on all days. Hence, we calculated the fountain height as a function of time from the video data. All fountains show a pulsating regime with apparent and sporadic alternations from meters to several tens of meters in height. Using a time-dependent cross-correlation approach developed within the FUTUREVOLC project, we are able to compare the height pulses at adjacent vents. We find that in most cases there is a time lag between the pulses. From the calculated time lags between the pulses and the distance between the correlated vents, we calculate the apparent speed of the magma pulses. The fountain frequencies and the eruption and rest times between fountains are quite similar, suggesting a connection between the vents and a controlling process in the feeder below. At the Holuhraun eruption 2014/2015 (Iceland) we find a significant time shift between the individual pulses of adjacent vents on all days. The mean velocity over all days is 30-40 km/hr, which can be interpreted as a magma flow velocity along the dike at depth. Comparison of the velocities derived from the video data analysis with the magma flow velocity in the dike inferred from seismic data shows very good agreement, implying that surface expressions of pulsating vents provide an insight into the
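
    As a minimal illustration of the time-lag idea described in this record (a sketch, not the FUTUREVOLC toolchain; the sampling rate, vent spacing and synthetic signals are assumptions), the lag between two fountain-height time series and the implied apparent pulse speed can be estimated with a simple cross-correlation:

        import numpy as np

        def lag_seconds(h1, h2, fps):
            """Lag (s) at which h2 best matches h1; positive means h2 lags h1."""
            h1 = h1 - h1.mean()
            h2 = h2 - h2.mean()
            corr = np.correlate(h2, h1, mode="full")
            lags = np.arange(-(h1.size - 1), h2.size)
            return lags[np.argmax(corr)] / fps

        # Synthetic example: vent 2 sees the same pulses 2.0 s after vent 1
        fps = 30.0
        t = np.arange(0, 120, 1 / fps)
        pulses = 10 + 5 * np.sin(2 * np.pi * 0.05 * t) + np.random.randn(t.size)
        h1, h2 = pulses, np.roll(pulses, int(2.0 * fps))

        dt = lag_seconds(h1, h2, fps)
        vent_distance_m = 20.0                       # assumed vent spacing
        print(dt, vent_distance_m / dt * 3.6, "km/h apparent pulse speed")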

  20. Monitoring the rotation status of wind turbine blades using high-speed camera system

    NASA Astrophysics Data System (ADS)

    Zhang, Dongsheng; Chen, Jubing; Wang, Qiang; Li, Kai

    2013-06-01

    The measurement of rotating objects is of great significance in engineering applications. In this study, a high-speed dual-camera system based on 3D digital image correlation has been developed to monitor the rotation status of wind turbine blades. The system allows sequential images to be acquired at a rate of 500 frames per second (fps). An improved Newton-Raphson algorithm is proposed which enables detection of movement, including large rotation and translation, with subpixel precision. Simulation experiments showed that the algorithm robustly identifies the movement when the rotation angle between adjacent images is less than 16 degrees. The subpixel precision is equivalent to that of the normal NR algorithm, i.e., 0.01 pixels in displacement. As a laboratory study, the high-speed camera system was used to measure the movement of a wind turbine model driven by an electric fan. In the experiment, the image acquisition rate was set at 387 fps and the cameras were calibrated according to Zhang's method. The blade was coated with randomly distributed speckles, and 7 locations along the radial direction of the blade were selected. The displacement components of these 7 locations were measured with the proposed method. The conclusion is drawn that the proposed DIC algorithm is suitable for large-rotation detection, and that the high-speed dual-camera system is a promising, economical method for health diagnosis of wind turbine blades.

  1. Algorithm-based high-speed video analysis yields new insights into Strombolian eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Taddeucci, Jacopo; Moroni, Monica; Scarlato, Piergiorgio

    2014-05-01

    Strombolian eruptions are characterized by mild, frequent explosions that eject gas and ash- to bomb-sized pyroclasts into the atmosphere. Observation of the products of the explosion is crucial, both for direct hazard assessment and for understanding eruption dynamics. Conventional thermal and optical imaging allows a first characterization of several eruptive processes, but high-speed cameras, with frame rates of 500 Hz or more, make it possible to follow the particles across multiple frames and to reconstruct their trajectories. However, manual processing of the images is time consuming. Consequently, it allows neither routine monitoring nor averaged statistics, since only relatively few, selected particles (usually the fastest) can be taken into account. In addition, manual processing is quite inefficient for computing the total ejected mass, since it requires counting each individual particle. In this presentation, we discuss the advantages of using numerical methods for tracking the particles and describing the explosion. A toolbox called "Pyroclast Tracking Velocimetry" is used to compute the size and trajectory of each individual particle. A large variety of parameters can be derived and statistically compared: ejection velocity, ejection angle, deceleration, size, mass, etc. At the scale of the explosion, the total mass, the mean velocity of the particles, and the number and frequency of ejection pulses can be estimated. The study of high-speed videos from 2 vents at Yasur volcano (Vanuatu) and 4 at Stromboli volcano (Italy) reveals that these parameters are positively correlated. As a consequence, the intensity of an explosion can be described quantitatively and operator-independently by the total kinetic energy of the bombs, taking into account both the mass and the velocity of the particles. For each vent, a specific range of total kinetic energy can be defined, demonstrating the strong influence of the conduit in
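
    As a rough illustration of the explosion-intensity metric mentioned above (a sketch, not the Pyroclast Tracking Velocimetry toolbox; the particle density, sizes and speeds are assumed values), the total kinetic energy of tracked bombs can be accumulated from their diameters and speeds:

        import numpy as np

        def total_kinetic_energy(diameters_m, speeds_m_s, density_kg_m3=2600.0):
            """Sum of 0.5*m*v^2 over all tracked particles, assuming spherical bombs."""
            masses = density_kg_m3 * (np.pi / 6.0) * np.asarray(diameters_m) ** 3
            return 0.5 * np.sum(masses * np.asarray(speeds_m_s) ** 2)

        # Hypothetical output of a particle-tracking run
        diam = np.array([0.05, 0.10, 0.02, 0.30])   # m
        vel = np.array([40.0, 25.0, 60.0, 15.0])    # m/s
        print(total_kinetic_energy(diam, vel), "J")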

  2. Video camera use at nuclear power plants

    SciTech Connect

    Estabrook, M.L.; Langan, M.O.; Owen, D.E. )

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operating principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs.

  3. ULTRASPEC: an electron multiplication CCD camera for very low light level high speed astronomical spectrometry

    NASA Astrophysics Data System (ADS)

    Ives, Derek; Bezawada, Nagaraja; Dhillon, Vik; Marsh, Tom

    2008-07-01

    We present the design, characteristics and astronomical results for ULTRASPEC, a high speed Electron Multiplication CCD (EMCCD) camera using an E2VCCD201 (1K frame transfer device), developed to prove the performance of this new optical detector technology in astronomical spectrometry, particularly in the high speed, low light level regime. We present both modelled and real data for these detectors with particular regard to avalanche gain and clock induced charge (CIC). We present first light results from the camera as used on the EFOSC-2 instrument at the ESO 3.6 metre telescope in La Silla. We also present the design for a proposed new 4Kx2K frame transfer EMCCD.

  4. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 micro sec, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
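
    A simple sketch of the post-processing steps described above (not NASA's analysis code; the array shapes, row-binning factor of 8 and synthetic velocities are assumptions for illustration): row-wise digital binning of adjacent pixels and the precision estimate defined as the standard deviation of single-shot velocities.

        import numpy as np

        def bin_rows(image, factor=8):
            """Sum groups of `factor` adjacent rows (digital, post-processing binning)."""
            rows = (image.shape[0] // factor) * factor
            return image[:rows].reshape(rows // factor, factor, -1).sum(axis=1)

        def precision(velocities_m_s):
            """Precision defined as the standard deviation of single-shot velocities."""
            return np.std(velocities_m_s, ddof=1)

        # Example with a synthetic low-signal frame and synthetic single-shot velocities
        rng = np.random.default_rng(0)
        frame = rng.poisson(5.0, size=(512, 512)).astype(float)
        binned = bin_rows(frame, 8)                 # improves SNR of the tagged line
        shots = rng.normal(loc=100.0, scale=0.5, size=300)
        print(binned.shape, precision(shots), "m/s")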

  6. Invited Article: Quantitative imaging of explosions with high-speed cameras.

    PubMed

    McNesby, Kevin L; Homan, Barrie E; Benjamin, Richard A; Boyle, Vincent M; Densmore, John M; Biss, Matthew M

    2016-05-01

    The techniques presented in this paper allow for mapping of temperature, pressure, chemical species, and energy deposition during and following detonations of explosives, using high-speed cameras as the main diagnostic tool. This work provides measurements in the explosive near- to far-field (0-500 charge diameters) of surface temperatures, peak air-shock pressures, some chemical species signatures, shock energy deposition, and air-shock formation. PMID:27250366

  7. A high speed camera with auto adjustable ROI for product's outline dimension measurement

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Wei, Ping; Ke, Jun; Gao, Jingjing

    2014-11-01

    Currently, most domestic factories still inspect machine arbors manually to decide whether they meet industry standards. This method is costly and inefficient, and it is easy to misjudge qualified arbors or miss unqualified ones, which seriously affects factories' efficiency and credibility. In this paper, we design a specific high-speed camera system with an auto-adjustable ROI for measuring the outline dimensions of machine arbors. The entire system includes an illumination part, a camera part, a mechanical structure part and an FPGA-based signal processing part. The system will help factories realize automatic arbor measurement, improve their efficiency and reduce their cost.

  8. High-speed camera analysis for nanoparticles produced by using a pulsed wire-discharge method

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hwan; Kim, Dae Sung; Ryu, Bong Ki; Suematsu, Hisayuki; Tanaka, Kenta

    2016-07-01

    We investigated the performance of a high-speed camera and the nanoparticle size distribution to quantify the mechanism of nanoparticle formation in a pulsed wire discharge (PWD) experiment. An Sn-58Bi alloy wire, 0.5 mm in diameter and 32 mm long, was prepared in the PWD chamber, and the evaporation and explosion process was observed using a high-speed camera. In order to vary the conditions and analyze the mechanisms of nanoparticle synthesis in the PWD, we changed the pressure of the N2 gas in the chamber from 25 to 75 kPa. To synthesize particles at the nanoscale, we fixed the charging voltage at 6 kV, and the high-speed camera captured pictures at 22,500 frames per second. The experimental results show that the electrical explosion process at different N2 gas pressures can be characterized by the explosion's duration and intensity. The experiments at the lowest pressure exhibited a longer explosion duration and a greater intensity. Also, at low pressure, very small nanoparticles with good dispersion were produced.

  9. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    NASA Technical Reports Server (NTRS)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.

  10. Development of a high-speed CT imaging system using EMCCD camera

    NASA Astrophysics Data System (ADS)

    Thacker, Samta C.; Yang, Kai; Packard, Nathan; Gaysinskiy, Valeriy; Burkett, George; Miller, Stuart; Boone, John M.; Nagarkar, Vivek

    2009-02-01

    The limitations of current CCD-based microCT X-ray imaging systems arise from two important factors. First, readout speeds are curtailed in order to minimize system read noise, which increases significantly with increasing readout rates. Second, the afterglow associated with commercial scintillator films can introduce image lag, leading to substantial artifacts in reconstructed images, especially when the detector is operated at several hundred frames/second (fps). For high speed imaging systems, high-speed readout electronics and fast scintillator films are required. This paper presents an approach to developing a high-speed CT detector based on a novel, back-thinned electron-multiplying CCD (EMCCD) coupled to various bright, high resolution, low afterglow films. The EMCCD camera, when operated in its binned mode, is capable of acquiring data at up to 300 fps with reduced imaging area. CsI:Tl,Eu and ZnSe:Te films, recently fabricated at RMD, apart from being bright, showed very good afterglow properties, favorable for high-speed imaging. Since ZnSe:Te films were brighter than CsI:Tl,Eu films, for preliminary experiments a ZnSe:Te film was coupled to an EMCCD camera at UC Davis Medical Center. A high-throughput tungsten anode X-ray generator was used, as the X-ray fluence from a mini- or micro-focus source would be insufficient to achieve high-speed imaging. A euthanized mouse held in a glass tube was rotated 360 degrees in less than 3 seconds, while radiographic images were recorded at various readout rates (up to 300 fps); images were reconstructed using a conventional Feldkamp cone-beam reconstruction algorithm. We have found that this system allows volumetric CT imaging of small animals in approximately two seconds at ~110 to 190 μm resolution, compared to several minutes at 160 μm resolution needed for the best current systems.

  11. Measuring droplet fall speed with a high-speed camera: indoor accuracy and potential outdoor applications

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Ku; Hsieh, Pei-Rong; Yuter, Sandra E.; Cheng, Lin-Wen; Tsai, Chia-Lun; Lin, Che-Yu; Chen, Ying

    2016-04-01

    Acquisition of accurate raindrop fall speed measurements outdoors in natural rain by means of moderate-cost and easy-to-use devices represents a long-standing and challenging issue in the meteorological community. Feasibility experiments were conducted to evaluate the indoor accuracy of fall speed measurements made with a high-speed camera and to evaluate its capability for outdoor applications. An indoor experiment operating in calm conditions showed that the high-speed imaging technique can provide fall speed measurements with a mean error of 4.1-9.7 % compared to Gunn and Kinzer's empirical fall-speed-size relationship for typical sizes of rain and drizzle drops. Results obtained using the same apparatus outside in summer afternoon showers indicated larger positive and negative velocity deviations compared to the indoor measurements. These observed deviations suggest that ambient flow and turbulence play a role in modifying drop fall speeds which can be quantified with future outdoor high-speed camera measurements. Because the fall speed measurements, as presented in this article, are analyzed on the basis of tracking individual, specific raindrops, sampling uncertainties commonly found in the widely adopted optical disdrometers can be significantly mitigated.
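
    A minimal illustration of how such frame-by-frame tracking yields a fall speed (a sketch under assumed values for the pixel scale, frame rate and drop size, not the authors' processing chain), together with a widely used exponential fit to the Gunn-Kinzer terminal speeds for comparison:

        import numpy as np

        def fall_speed(y_pixels, fps, mm_per_pixel):
            """Mean fall speed (m/s) of one tracked drop from its vertical positions."""
            y_m = np.asarray(y_pixels) * mm_per_pixel / 1000.0
            return np.mean(np.diff(y_m)) * fps

        def gunn_kinzer_approx(d_mm):
            """Widely used exponential fit to the Gunn-Kinzer terminal speeds (m/s)."""
            return 9.65 - 10.3 * np.exp(-0.6 * d_mm)

        # Assumed example: a 1.5 mm drop tracked over 6 frames at 1000 fps
        y = [120.0, 125.4, 130.9, 136.3, 141.8, 147.2]   # pixels, downward positive
        v = fall_speed(y, fps=1000.0, mm_per_pixel=1.0)
        print(v, "m/s measured vs", gunn_kinzer_approx(1.5), "m/s expected")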

  12. Development of a high-speed H-alpha camera system for the observation of rapid fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.; Chen, P. C.

    1988-01-01

    A solid-state digital camera was developed for obtaining H alpha images of solar flares with 0.1 s time resolution. Beginning in the summer of 1988, this system will be operated in conjunction with SMM's hard X-ray burst spectrometer (HXRBS). Electron time-of-flight effects that are crucial for determining the flare energy release processes should be detectable with these combined H alpha and hard X-ray observations. Charge-injection device (CID) cameras provide 128 x 128 pixel images simultaneously in the H alpha blue wing, line center, and red wing, or another wavelength of interest. The data recording system employs a microprocessor-controlled electronic interface between each camera and a digital processor board that encodes the data into a serial bitstream for continuous recording by a standard video cassette recorder. Only a small fraction of the data will be permanently archived, through a direct memory access interface onto a VAX-750 computer. In addition to correlations with hard X-ray data, observations from the high-speed H alpha camera will also be correlated with optical and microwave data and with data from future MAX 1991 campaigns. Whether the recorded optical flashes are simultaneous with X-ray peaks to within 0.1 s, are delayed by tenths of seconds, or are even undetectable, the results will have implications for the validity of both thermal and nonthermal models of hard X-ray production.

  13. The development of a high-speed 100 fps CCD camera

    SciTech Connect

    Hoffberg, M.; Laird, R.; Lenkzsus, F. Liu, Chuande; Rodricks, B.; Gelbart, A.

    1996-09-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512x512-pixel CCD as its sensor, which is read out through two parallel outputs at 15 MHz per output. The data undergo correlated double sampling, after which they are digitized to 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data-acquisition version of the camera can collect sustained real-time data limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to the PC disk storage. The uncooled CCD can be used either with lenses for visible-light imaging or with a phosphor screen for x-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic face plate for high-resolution, high-speed x-ray imaging. The camera is controlled through a custom event-driven, user-friendly Windows package. The pixel clock speed can be changed from 1 MHz to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper will describe the electronics, software, and characterizations that have been performed using both visible and x-ray photons.

  14. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to overcome the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. In order to process the captured images in the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals. PMID:26406525
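
    A minimal numpy sketch of the underlying idea, zero-mean normalized cross-correlation over a small search window followed by a parabolic subpixel fit (a generic NCC tracker for illustration, not the authors' modified local search algorithm; the array names and search radius are assumptions):

        import numpy as np

        def ncc(patch, template):
            """Zero-mean normalized cross-correlation of two equally sized arrays."""
            a = patch - patch.mean()
            b = template - template.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def track(frame, template, center, radius=10):
            """Best integer offset of `template` around `center`, plus subpixel x-shift."""
            h, w = template.shape
            cy, cx = center
            scores = np.full((2 * radius + 1, 2 * radius + 1), -np.inf)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y0, x0 = cy + dy - h // 2, cx + dx - w // 2
                    patch = frame[y0:y0 + h, x0:x0 + w]
                    if patch.shape == template.shape:
                        scores[dy + radius, dx + radius] = ncc(patch, template)
            iy, ix = np.unravel_index(np.argmax(scores), scores.shape)
            # Parabolic fit along x for subpixel refinement (repeat along y if needed)
            sub = 0.0
            if 0 < ix < scores.shape[1] - 1:
                l, c, r = scores[iy, ix - 1], scores[iy, ix], scores[iy, ix + 1]
                if (l - 2 * c + r) != 0:
                    sub = 0.5 * (l - r) / (l - 2 * c + r)
            return (iy - radius, ix - radius + sub)

        # Usage sketch: template cut from frame 0, tracked in frame 1
        # dy, dx = track(frame1, frame0[90:110, 90:110], center=(100, 100))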

  15. High-speed video observations of natural cloud-to-ground lightning leaders - A statistical analysis

    NASA Astrophysics Data System (ADS)

    Campos, Leandro Z. S.; Saba, Marcelo M. F.; Warner, Tom A.; Pinto, Osmar; Krider, E. Philip; Orville, Richard E.

    2014-01-01

    The aim of this investigation is to analyze the phenomenology of positive and negative (stepped and dart) leaders observed in natural lightning from digital high-speed video recordings. For that purpose we used four different high-speed cameras operating at frame rates ranging from 1000 to 11,800 frames per second, in addition to data from lightning locating systems (BrasilDat and NLDN). All the recordings were GPS time-stamped in order to avoid ambiguities in the analysis, allowing us to estimate the peak current of, and the distance to, each flash that was detected by one of the lightning locating systems. The data collection was done at different sites in southern and southeastern Brazil, southern Arizona, and South Dakota, USA. A total of 62 negative stepped leaders, 76 negative dart leaders and 29 positive leaders were recorded and analyzed. From these data it was possible to calculate the two-dimensional speed of each observed leader, allowing us to obtain its statistical distribution and evaluate whether or not it is related to other characteristics of the associated flash. In the analyzed dataset, the speeds of positive leaders and negative dart leaders follow a lognormal distribution at the 0.05 level (according to the Shapiro-Wilk test). We have also analyzed, through two different methodologies, how the two-dimensional leader speeds change as the leaders approach the ground. The speed of positive leaders showed a clear tendency to increase, while negative dart leaders tend to become slower as they approach the ground. Negative stepped leaders, on the other hand, can either accelerate as they get closer to the ground or show an irregular development (with no clear tendency) throughout. For all three leader types no correlation has been found between the return stroke peak current and the average speed of the leader responsible for its initiation. We did find, however, that dart leaders preceded by longer interstroke intervals cannot present
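
    A small sketch of the kind of distribution check mentioned above (illustrative only; the speed values are synthetic and scipy's Shapiro-Wilk routine stands in for whatever statistics package the authors used): testing whether leader speeds are consistent with a lognormal distribution amounts to a normality test on their logarithms.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Synthetic 2D leader speeds (m/s), drawn lognormal for illustration
        speeds = rng.lognormal(mean=np.log(2.0e7), sigma=0.6, size=76)

        w, p = stats.shapiro(np.log(speeds))
        print(f"Shapiro-Wilk on log(speeds): W={w:.3f}, p={p:.3f}")
        print("consistent with lognormal at the 0.05 level" if p > 0.05
              else "lognormality rejected at the 0.05 level")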

  16. Inexpensive range camera operating at video speed.

    PubMed

    Kramer, J; Seitz, P; Baltes, H

    1993-05-01

    An optoelectronic device has been developed and built that acquires and displays the range data of an object surface in space in video real time. The recovery of depth is performed with active triangulation. A galvanometer scanner system sweeps a sheet of light across the object at a video field rate of 50 Hz. High-speed signal processing is achieved through the use of a special optical sensor and hardware implementation of the simple electronic-processing steps. Fifty range maps are generated per second and converted into a European standard video signal where the depth is encoded in gray levels or color. The image resolution currently is 128 x 500 pixels with a depth accuracy of 1.5% of the depth range. The present setup uses a 500-mW diode laser for the generation of the light sheet. A 45-mm imaging lens covers a measurement volume of 93 mm x 61 mm x 63 mm at a medium distance of 250 mm from the camera, but this can easily be adapted to other dimensions. PMID:20820391

  17. A novel ultra-high speed camera for digital image processing applications

    NASA Astrophysics Data System (ADS)

    Hijazi, A.; Madhavan, V.

    2008-08-01

    Multi-channel gated-intensified cameras are commonly used for capturing images at ultra-high frame rates. The use of image intensifiers reduces the image resolution and increases the error in applications requiring high-quality images, such as digital image correlation. We report the development of a new type of non-intensified multi-channel camera system that permits recording of image sequences at ultra-high frame rates at the native resolution afforded by the imaging optics and the cameras used. This camera system is based upon the concept of using a sequence of short-duration light pulses of different wavelengths for illumination and using wavelength selective elements in the imaging system to route each particular wavelength of light to a particular camera. As such, the duration of the light pulses controls the exposure time and the timing of the light pulses controls the interframe time. A prototype camera system built according to this concept comprises four dual-frame cameras synchronized with four dual-cavity pulsed lasers producing 5 ns pulses in four different wavelengths. The prototype is capable of recording four-frame full-resolution image sequences at frame rates up to 200 MHz and eight-frame image sequences at frame rates up to 8 MHz. This system is built around a stereo microscope to capture stereoscopic image sequences usable for 3D digital image correlation. The camera system is used for imaging the chip-workpiece interface area during high speed machining, and the images are used to map the strain rate in the primary shear zone.

  18. A novel compact high speed x-ray streak camera (invited)

    SciTech Connect

    Hares, J. D.; Dymoke-Bradshaw, A. K. L.

    2008-10-15

    Conventional in-line high speed streak cameras have fundamental issues when their performance is extended below a picosecond. The transit time spread caused by both the spread in the photoelectron (PE) ''birth'' energy and space charge effects causes significant electron pulse broadening along the axis of the streak camera and limits the time resolution. Also it is difficult to generate a sufficiently large sweep speed. This paper describes a new instrument in which the extraction electrostatic field at the photocathode increases with time, converting time to PE energy. A uniform magnetic field is used to measure the PE energy, and thus time, and also focuses in one dimension. Design calculations are presented for the factors limiting the time resolution. With our design, subpicosecond resolution with high dynamic range is expected.

  19. In-Situ Observation of Horizontal Centrifugal Casting using a High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Esaka, Hisao; Kawai, Kohsuke; Kaneko, Hiroshi; Shinozuka, Kei

    2012-07-01

    In order to understand the solidification process in horizontal centrifugal casting, experimental equipment for in-situ observation using a transparent organic substance has been constructed. A succinonitrile-1 mass% water alloy was filled into a round glass cell, and the cell was completely sealed. To observe the movement of equiaxed grains more clearly and to understand the effect of the movement of the free surface, a high-speed camera was installed on the equipment. The most advantageous feature of this equipment is that the camera rotates with the mold, so that the same location of the glass cell can always be observed. Because the recording rate could be increased up to 250 frames per second, the quality of the movies was dramatically improved, which made it easier and more precise to follow a given equiaxed grain. The amplitude of oscillation of the equiaxed grains (denoted At) decreased as solidification proceeded.

  20. Low cost alternative of high speed visible light camera for tokamak experiments

    SciTech Connect

    Odstrcil, T.; Grover, O.; Svoboda, V.; Odstrcil, M.; Duran, I.; Mlynar, J.

    2012-10-15

    We present the design, analysis, and performance evaluation of a new, low-cost and high-speed visible-light camera diagnostic system for tokamak experiments. The system is based on the Casio EX-F1 camera, with an overall price of approximately one thousand USD. The achieved temporal resolution is up to 40 kHz. This new diagnostic was successfully implemented and tested at the university tokamak GOLEM (R = 0.4 m, a = 0.085 m, B_T < 0.5 T, I_p < 4 kA). One possible application of this new diagnostic at GOLEM, tomographic reconstruction for estimation of plasma position and emissivity, is discussed in detail.

  1. Estimation of Rotational Velocity of Baseball Using High-Speed Camera Movies

    NASA Astrophysics Data System (ADS)

    Inoue, Takuya; Uematsu, Yuko; Saito, Hideo

    Movies can be used to analyze a player's performance and improve his/her skills. In the case of baseball, pitching is recorded using a high-speed camera, and the recorded images are used to improve the pitching skills of the players. In this paper, we present a method for estimating the rotational velocity of a baseball from movies recorded by high-speed cameras. Unlike previous methods, we consider the original seam pattern of the ball seen in the input movie and identify the corresponding image from a database of images by adopting the parametric eigenspace method. These database images are CG images. The ball's posture can be determined on the basis of the rotational parameters. In the proposed method, the symmetric property of the ball is also taken into consideration, and time continuity is used to determine the ball's posture. In the experiments, we use the proposed method to estimate the rotational velocity of a baseball from real movies and from movies consisting of CG images of the baseball. The results of both experiments show that our method can estimate the ball's rotation accurately.
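
    Once a ball pose (rotation matrix) has been matched for two frames, the spin rate follows from the relative rotation between them. A minimal sketch of that last step only (the pose matrices, frame rate and axis convention are assumed; the paper's pose-matching itself is not reproduced here):

        import numpy as np

        def spin_rpm(R1, R2, fps):
            """Spin rate (rpm) from ball poses R1, R2 (3x3 rotation matrices)
            matched in two consecutive frames captured at `fps`."""
            R_rel = R2 @ R1.T                           # rotation between the two frames
            cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
            theta = np.arccos(cos_theta)                # rotation angle per frame (rad)
            return theta * fps / (2.0 * np.pi) * 60.0

        # Example: 2 degrees of rotation between frames at 1000 fps -> ~333 rpm
        a = np.deg2rad(2.0)
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        print(spin_rpm(np.eye(3), Rz, fps=1000.0))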

  2. Photography of the commutation spark using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Hanazawa, Tamio; Egashira, Torao; Tanaka, Yasuhiro; Egoshi, Jun

    1997-12-01

    In the single-phase AC commutator motor (known as a universal motor), which is widely used in cleaners, electrical machines, etc., commutation sparks cause problems such as brush wear and noise interference. We have therefore attempted to use a high-speed camera to elucidate the commutation spark mechanism visually. The high-speed camera we used is capable of photographing at 5,000 - 20,000,000 frames/s. The trigger can be selected either from the operation unit or from an external trigger signal. In this paper, we propose an external trigger method in which a hole several millimeters across is opened in the motor and argon laser light is used, so that the commutator segments can be photographed at a known position; we then conducted the experiment. This method enabled us to photograph the motor's commutator segments from any position, and we were able to confirm spark generation at every other commutator segment. Furthermore, after confirming the spark-generation position on the commutator segments, we increased the recording speed to obtain more detailed photographs of the moment of spark generation.

  3. Experimental evaluation of spot dancing of laser beam in atmospheric propagation using high-speed camera

    NASA Astrophysics Data System (ADS)

    Nakamura, Moriya; Akiba, Makoto; Kuri, Toshiaki; Ohtani, Naoki

    2003-04-01

    We investigated the frequency spectra and two-dimensional (2-D) distributions of the beam-centroid fluctuation caused by spot dancing, which are needed to optimize the design of the tracking system, using a novel spot-dancing measurement method that suppresses the effects of building and/or transmitter vibration. In this method, two laser beams are propagated apart from each other and observed simultaneously using high-speed cameras. The position of each beam centroid is obtained using an image processing system. The effect of transmitter vibration is suppressed by taking the difference between the 2-D coordinates of the beam-centroid positions. The frequency spectra are calculated using the fast Fourier transform. The beam spots of two HeNe lasers propagated over 100 m (indoors) and 750 m (open air) were observed using a high-speed camera at 10,000 frames/s. Frequency spectra of the beam-centroid variance up to 5 kHz could be observed. We also measured the variations of spot dancing on two days when the rates of sunshine were 100% and 0%.
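
    A compact sketch of the differential-centroid processing described above (illustrative only; the image-stack shapes, the 10,000 frames/s rate and the variable names are assumptions, and the centroid is computed as a simple intensity-weighted mean):

        import numpy as np

        def centroids(frames):
            """Intensity-weighted (y, x) centroid of each frame in a (N, H, W) stack."""
            n, h, w = frames.shape
            ys, xs = np.mgrid[0:h, 0:w]
            total = frames.sum(axis=(1, 2))
            cy = (frames * ys).sum(axis=(1, 2)) / total
            cx = (frames * xs).sum(axis=(1, 2)) / total
            return np.stack([cy, cx], axis=1)

        def dancing_spectrum(frames_a, frames_b, fs=10000.0):
            """Spectrum of the differential centroid (beam A minus beam B), per axis.
            Subtracting the two beams cancels common-mode transmitter/building motion."""
            diff = centroids(frames_a) - centroids(frames_b)
            diff = diff - diff.mean(axis=0)
            spec = np.abs(np.fft.rfft(diff, axis=0))
            freqs = np.fft.rfftfreq(diff.shape[0], d=1.0 / fs)
            return freqs, spec

        # Usage sketch (the stacks would come from the two high-speed cameras):
        # freqs, spec = dancing_spectrum(beam_a_stack, beam_b_stack)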

  4. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
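
    The stereo-triangulation step at the heart of such a camera-plus-laser-projector method can be sketched with standard linear (DLT) triangulation; this is a generic illustration under assumed projection matrices, not the authors' calibrated pipeline:

        import numpy as np

        def triangulate(P_cam, P_proj, uv_cam, uv_proj):
            """Least-squares 3D point from one camera pixel and one projector 'pixel'
            (the laser ray), given their 3x4 projection matrices (DLT triangulation)."""
            u1, v1 = uv_cam
            u2, v2 = uv_proj
            A = np.stack([
                u1 * P_cam[2] - P_cam[0],
                v1 * P_cam[2] - P_cam[1],
                u2 * P_proj[2] - P_proj[0],
                v2 * P_proj[2] - P_proj[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]

        # Usage sketch with assumed calibration matrices:
        # P_cam, P_proj = ..., ...            # 3x4 matrices from calibration
        # xyz = triangulate(P_cam, P_proj, (412.3, 208.7), (101.0, 55.5))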

  5. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.

  6. High-speed video analysis system using multiple shuttered charge-coupled device imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Stephenson, Owen; Clements, Reginald M.

    1992-06-01

    A fully solid state high-speed video analysis system is presented. It is based on the use of several independent charge-coupled device (CCD) imagers, each shuttered by a liquid crystal light valve. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording from exposure timing to image capture and storage to playback on a standard video monitor. A prototype with three CCD imagers and shutters has been built. It has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. For slower phenomena, recordings in continuous light are also possible by using the shutters themselves to control the exposure time. The system records full-screen black and white images with spatial resolution approaching that of standard television, at rates up to 5000 images per second.

  7. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed, details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  8. High-speed holographic correlation system for video identification on the internet

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material on the Internet. By combining a high-speed correlation engine with web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which is capable of easily replacing the digital authorization server in FReCs with optical correlation.

  9. High-speed camera with real time processing for frequency domain imaging

    PubMed Central

    Shia, Victor; Watt, David; Faris, Gregory W.

    2011-01-01

    We describe a high-speed camera system for frequency domain imaging suitable for applications such as in vivo diffuse optical imaging and fluorescence lifetime imaging. 14-bit images are acquired at 2 gigapixels per second and analyzed with real-time pipeline processing using field programmable gate arrays (FPGAs). Performance of the camera system has been tested both for RF-modulated laser imaging in combination with a gain-modulated image intensifier and a simpler system based upon an LED light source. System amplitude and phase noise are measured and compared against theoretical expressions in the shot noise limit presented for different frequency domain configurations. We show the camera itself is capable of shot noise limited performance for amplitude and phase in as little as 3 ms, and when used in combination with the intensifier the noise levels are nearly shot noise limited. The best phase noise in a single pixel is 0.04 degrees for a 1 s integration time. PMID:21750770
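
    For frequency-domain imaging of this kind, the per-pixel amplitude and phase are typically demodulated from frames acquired at stepped phase offsets between the source and the detector gain modulation. The sketch below shows a generic four-step (homodyne) demodulation for illustration only; it is not the camera's FPGA pipeline, and the frame names and four-step scheme are assumptions:

        import numpy as np

        def four_step_demodulation(f0, f90, f180, f270):
            """Per-pixel DC, amplitude and phase from four frames taken at
            modulation phase offsets of 0, 90, 180 and 270 degrees."""
            i = f0.astype(float) - f180.astype(float)
            q = f90.astype(float) - f270.astype(float)
            dc = (f0.astype(float) + f90 + f180 + f270) / 4.0
            amplitude = 0.5 * np.hypot(i, q)
            phase = np.arctan2(q, i)     # radians; sign depends on step direction
            return dc, amplitude, phase

        # Usage sketch: the inputs would be the four phase-stepped camera images
        # dc, amp, phi = four_step_demodulation(img0, img90, img180, img270)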

  10. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Jacobson, B. O.; Hamrock, B. J.

    1982-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation.

  11. A rapid response 64-channel photomultiplier tube camera for high-speed flow velocimetry

    NASA Astrophysics Data System (ADS)

    Ecker, Tobias; Lowe, K. Todd; Ng, Wing F.

    2015-02-01

    In this technical design note, the development of a rapid response photomultiplier tube camera, leveraging field-programmable gate arrays (FPGAs) for high-speed flow velocimetry at up to 10 MHz, is described. Technically relevant flows, for example, supersonic inlets and exhaust jets, have time scales on the order of microseconds, and their experimental study requires resolution of these timescales for fundamental insight. The inherent rapid response time attributes of a 64-channel photomultiplier array were coupled with two-stage amplifiers on each anode, and the signals were acquired using an FPGA-based system. The FPGA allows high data acquisition rates with many channels as well as on-the-fly preprocessing techniques. Results are presented for optical velocimetry in supersonic free jet flows, demonstrating the value of the technique in the chosen application example for determining supersonic shear layer velocity correlation maps.

  12. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.

  13. Study of cavitation bubble dynamics during Ho:YAG laser lithotripsy by high-speed camera

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yu, Honggang; Devincentis, Dennis

    2016-02-01

    Although laser lithotripsy is now the preferred treatment option for urolithiasis, the mechanism of laser pulse induced calculus damage is still not fully understood. This is because the process of laser pulse induced calculus damage involves quite a few physical and chemical processes whose time-scales are very short (down to the sub-microsecond level). For laser lithotripsy, the laser pulse induced impact by energy flow can be summarized as: photon energy in the laser pulse --> photon absorption generated heat in the water liquid and vapor (superheated water or plasma effect) --> shock wave (bow shock, acoustic wave) --> cavitation bubble dynamics (oscillation and center-of-bubble movement, superheated water at collapse, sonoluminescence) --> calculus damage and motion (calculus heat-up, spallation/melt of stone, breaking of mechanical/chemical bonds, debris ejection, and retropulsion of the remaining calculus body). Cavitation bubble dynamics is the centerpiece of the physical processes that links the whole energy flow chain from laser pulse to calculus damage. In this study, cavitation bubble dynamics was investigated with a high-speed camera and a needle hydrophone. A commercialized, pulsed Ho:YAG laser at 2.1 μm, StoneLightTM 30, with pulse energy from 0.5 J up to 3.0 J and pulse width from 150 μs up to 800 μs, was used as the laser pulse source. The fiber used in the investigation is a SureFlexTM fiber, Model S-LLF365, with a 365 μm core diameter. A high-speed camera with frame rate up to 1 million fps was used in this study. The results revealed the cavitation bubble dynamics (oscillation and center-of-bubble movement) induced by laser pulses at different energy levels and pulse widths. More detailed investigation of bubble dynamics with different types of laser, and of the relationship between cavitation bubble dynamics and calculus damage (fragmentation/dusting), will be conducted as a future study.

  14. The Eye, Film, And Video In High-Speed Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hyzer, William G.

    1987-09-01

    The unaided human eye with its inherent limitations serves us well in the examination of most large-scale, slow-moving, natural and man-made phenomena, but constraints imposed by inertial factors in the visual mechanism severely limit our ability to observe fast-moving and short-duration events. The introduction of high-speed photography (c. 1851) and videography (c. 1970) served to stretch the temporal limits of human perception by several orders of magnitude so critical analysis could be performed on a wide range of rapidly occurring events of scientific, technological, industrial, and educational interest. The preferential selection of eye, film, or video imagery in fulfilling particular motion analysis requirements is determined largely by the comparative attributes and limitations of these methods. The choice of either film or video does not necessarily eliminate the eye, because it usually continues as a vital link in the analytical chain. The important characteristics of the eye, film, and video imagery in high-speed motion analysis are discussed with particular reference to fields of application which include biomechanics, ballistics, machine design, mechanics of materials, sports analysis, medicine, production engineering, and industrial trouble-shooting.

  15. Study of jet fluctuations in DC plasma torch using high speed camera

    NASA Astrophysics Data System (ADS)

    Tiwari, Nirupama; Sahasrabudhe, S. N.; Joshi, N. K.; Das, A. K.

    2010-02-01

    The power supplies used for plasma torches are usually SCR controlled and have a large ripple factor. This is due to the fact that the currents in the torch are of the order of hundreds of amperes, which prohibits effective filtering of the ripple. The voltage and current vary with the ripple in the power supply and cause the plasma jet to fluctuate. To record these fluctuations, the jet coming out of a DC plasma torch operating at atmospheric pressure was imaged using a high-speed camera at the rate of 3000 frames per second. Light emitted from a well defined zone in the plume was collected using an optical fibre and a photomultiplier tube (PMT). Current, voltage and PMT signals were recorded simultaneously using a digital storage oscilloscope (DSO). The fast camera recorded the images for 25 ms and the starting pulse from the camera was used to trigger the DSO for recording voltage, current and optical signals. Each image of the plume recorded by the fast camera was correlated with the magnitude of the instantaneous voltage, current and optical signal. It was observed that the luminosity and length of the plume vary with the product of instantaneous voltage and current, i.e. the electrical power fed to the plasma torch. The experimental runs were taken with different gas flow rates and electrical powers. The images were analyzed using image processing software and constant intensity contours of the images were determined. Further analysis of the images can provide a great deal of information about the dynamics of the jet.

  16. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  17. Motion analysis of mechanical heart valve prosthesis utilizing high-speed video

    NASA Astrophysics Data System (ADS)

    Adlparvar, Payam; Guo, George; Kingsbury, Chris

    1993-01-01

    The Edwards-Duromedics (ED) mechanical heart valve prosthesis is of a bileaflet design, incorporating unique design features that distinguish its performance with respect to other mechanical valves of similar type. Leaflet motion of mechanical heart valves, particularly during closure, is related to valve durability, valve sounds and the efficiency of the cardiac output. Modifications to the ED valve have resulted in significant improvements with respect to leaflet motion. In this study a high-speed video system was used to monitor the leaflet motion of the valve, and to compare the performance of the Modified Specification to that of the Original Specification using a St. Jude Medical as a control valve.

  18. Use of High-Speed X ray and Video to Analyze Distal Radius Fracture Pathomechanics.

    PubMed

    Gutowski, Christina; Darvish, Kurosh; Liss, Frederic E; Ilyas, Asif M; Jones, Christopher M

    2015-10-01

    The purpose of this study is to investigate the failure sequence of the distal radius during a simulated fall onto an outstretched hand using cadaver forearms and high-speed X ray and video systems. This apparatus records the beginning and propagation of bony failure, ultimately resulting in distal radius or forearm fracture. The effects of 3 different wrist guard designs are investigated using this system. Serving as a proof-of-concept analysis, this study supports this imaging technique to be used in larger studies of orthopedic trauma and protective devices and specifically for distal radius fractures. PMID:26410645

  19. Time-Correlated High-Speed Video and Lightning Mapping Array Results For Triggered Lightning Flashes

    NASA Astrophysics Data System (ADS)

    Eastvedt, E. M.; Eack, K.; Edens, H. E.; Aulich, G. D.; Hunyady, S.; Winn, W. P.; Murray, C.

    2009-12-01

    Several lightning flashes triggered by the rocket-and-wire technique at Langmuir Laboratory's Kiva facility on South Baldy (approximately 3300 meters above sea level) were captured on high-speed video during the summers of 2008 and 2009. These triggered flashes were also observed with Langmuir Laboratory's Lightning Mapping Array (LMA), a 3-D VHF time-of-arrival system. We analyzed nine flashes (obtained in four different storms) for which the electric field at ground was positive (foul-weather). Each was initiated by an upward positive leader that propagated into the cloud. In all cases observed, the leader exhibited upward branching, and most of the flashes had multiple return strokes.

  20. Temperature measurement of mineral melt by means of a high-speed camera.

    PubMed

    Bizjan, Benjamin; Širok, Brane; Drnovšek, Janko; Pušnik, Igor

    2015-09-10

    This paper presents a temperature evaluation method by means of high-speed, visible light digital camera visualization and its application to the mineral wool production process. The proposed method adequately resolves the temperature-related requirements in mineral wool production and significantly improves the spatial and temporal resolution of measured temperature fields. Additionally, it is very cost effective in comparison with other non-contact temperature field measurement methods, such as infrared thermometry. Using the proposed method for temperatures between 800°C and 1500°C, the available temperature measurement range is approximately 300 K with a single temperature calibration point and without the need for camera setting adjustments. In the case of a stationary blackbody, the proposed method is able to produce deviations of less than 5 K from the reference (thermocouple-measured) temperature in a measurement range within 100 K from the calibration temperature. The method was also tested by visualization of rotating melt film in the rock wool production process. The resulting temperature fields are characterized by a very good temporal and spatial resolution (18,700 frames per second at 128 × 328 pixels and 8000 frames per second at 416 × 298 pixels). PMID:26368973
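
    The abstract does not give the calibration model, but a common way to realize single-calibration-point, visible-light temperature mapping is to assume graybody emission at an effective wavelength and invert the Wien approximation of Planck's law; the sketch below illustrates that approach, with the effective wavelength and all numbers as assumptions rather than the paper's values.

```python
# Hedged sketch of single-point radiometric calibration at an effective
# wavelength, using the Wien approximation of Planck's law. The paper's actual
# calibration model is not reproduced here; lambda_eff and the graybody
# assumption are illustrative.
import numpy as np

C2 = 1.4388e-2        # second radiation constant, m*K
LAMBDA_EFF = 650e-9   # assumed effective wavelength of the camera channel, m

def temperature_from_gray(I, I_cal, T_cal):
    """Invert I/I_cal = exp(-C2/(lambda*T)) / exp(-C2/(lambda*T_cal)) for T."""
    inv_T = 1.0 / T_cal - (LAMBDA_EFF / C2) * np.log(I / I_cal)
    return 1.0 / inv_T

# Example: a pixel twice as bright as the 1400 K calibration point
print(temperature_from_gray(2.0, 1.0, 1400.0))   # ~1464 K
```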

  1. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
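
    The authors' modified multi-camera algorithms are not reproduced in the abstract; as a minimal illustration of the underlying idea, a standard linear (DLT) two-view triangulation of a matched bomb position from two synchronized, calibrated cameras might look like the sketch below, where P1 and P2 are assumed to be 3x4 projection matrices from a prior calibration.

```python
# Minimal linear (DLT) triangulation for two synchronized, calibrated cameras.
# This is a generic sketch, not the authors' modified algorithm; P1 and P2 are
# assumed 3x4 projection matrices obtained from a prior calibration.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Return the 3D point minimizing algebraic error for one bomb detection."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Applying this frame by frame to matched bomb positions yields a 3D trajectory.
```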

  2. Operational experience with a high speed video data acquisition system in Fermilab experiment E-687

    SciTech Connect

    Baumbaugh, A.E.; Knickerbocker, K.L.; Baumbaugh, B.; Ruchti, R.

    1987-10-21

    Operation of a high speed, triggerable, Video Data Acquisition System (VDAS) including a hardware data compactor and a 16 megabyte First-In-First-Out buffer memory (FIFO) will be discussed. Active target imaging techniques for High Energy Physics are described and preliminary experimental data are reported. The hardware architecture for the imaging system and experiment will be discussed as well as other applications for the imaging system. The data rate for the compactor is over 30 megabytes/sec and the FIFO has been run at 100 megabytes/sec. The system can be operated at standard video rates or at any rate up to 30 million pixels/second. 7 refs., 3 figs.

  3. Design methodology for high-speed video processing system based on signal integrity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Zhang, Hao

    2009-07-01

    On account of the high performance requirements of video processing systems and the shortcomings of conventional circuit design methods, a design methodology based on signal integrity (SI) theory was proposed for a high-speed video processing system built around TI's digital signal processor TMS320DM642. The PCB stack-up and construction of the system as well as the transmission line characteristic impedance are first set and calculated with the impedance control tool Si8000 through this methodology. Then some crucial signals, such as the data lines of the SDRAM, are simulated and analyzed with IBIS models so that reasonable layout and routing rules are established. Finally, the system's high-density PCB design is completed on the Cadence SPB15.7 platform. The design result shows that this methodology can effectively restrain signal reflection, crosstalk, rail collapse noise and electromagnetic interference (EMI). Thus it significantly improves the stability of the system and shortens development cycles.

  4. ARINC 818 express for high-speed avionics video and power over coax

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Alexander, Jon

    2012-06-01

    CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3 Gbps (up to 6 Gbps) communication, includes a 21 Mbps return path allowing for bidirectional communication, and provides up to 13 W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus", is a protocol standard developed specifically for high speed, mission critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer from CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2 Gbps, but beyond 2 Gbps it has been implemented exclusively over fiber optic links. In many rugged applications a copper interface is still desired; implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6 Gbps copper interfaces for ARINC 818. Results of the successful technology demonstration, dubbed ARINC 818 Express, are presented, showing 3 Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.

  5. Multi-Camera Reconstruction of Fine Scale High Speed Auroral Dynamics

    NASA Astrophysics Data System (ADS)

    Hirsch, M.; Semeter, J. L.; Zettergren, M. D.; Dahlgren, H.; Goenka, C.; Akbari, H.

    2014-12-01

    The fine spatial structure of dispersive aurora is known to have ground-observable scales of less than 100 meters. The lifetime of prompt emissions is much less than 1 millisecond, and high-speed cameras have observed auroral forms with millisecond scale morphology. Satellite observations have corroborated these spatial and temporal findings. Satellite observation platforms give a very valuable yet passing glance at the auroral region and the precipitation driving the aurora. To gain further insight into the fine structure of accelerated particles driven into the ionosphere, ground-based optical instruments staring at the same region of sky can capture the evolution of processes evolving on time scales from milliseconds to many hours, with continuous sample rates of 100Hz or more. Legacy auroral tomography systems have used baselines of hundreds of kilometers, capturing a "side view" of the field-aligned auroral structure. We show that short baseline (less than 10 km), high speed optical observations fill a measurement gap between legacy long baseline optical observations and incoherent scatter radar. The ill-conditioned inverse problem typical of auroral tomography, accentuated by short baseline optical ground stations is tackled with contemporary data inversion algorithms. We leverage the disruptive electron multiplying charge coupled device (EMCCD) imaging technology and solve the inverse problem via eigenfunctions obtained from a first-principles 1-D electron penetration ionospheric model. We present the latest analysis of observed auroral events from the Poker Flat Research Range near Fairbanks, Alaska. We discuss the system-level design and performance verification measures needed to ensure consistent performance for nightly multi-terabyte data acquisition synchronized between stations to better than 1 millisecond.

  6. Initial laboratory evaluation of color video cameras

    SciTech Connect

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled, 'Initial Laboratory Evaluation of Color Video Cameras (SAND--91-2579).' It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  7. Initial laboratory evaluation of color video cameras

    NASA Astrophysics Data System (ADS)

    Terry, P. L.

    1991-12-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled, 'Initial Laboratory Evaluation of Color Video Cameras (SAND--91-2579).' It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  8. Measurement of inkjet first-drop behavior using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time is well known to be possibly different from that which occurs in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of a first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was determined to be an integer multiple of the jetting frequency, and in this manner, we can directly compare the droplet locations of each drop in order to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity.
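
    As a minimal illustration of the frame-rate selection idea described above (the acquisition rate chosen as an integer multiple of the jetting frequency, so that every drop is sampled at the same flight phases and positions can be compared directly), the following sketch uses placeholder numbers rather than the authors' actual settings.

```python
# Frame-rate selection sketch: the camera rate is an integer multiple of the
# jetting frequency, so frame k of every drop is captured at the same phase of
# flight and drop positions can be compared directly. Numbers are placeholders.
jet_freq_hz = 1000.0                     # assumed drop-on-demand jetting frequency
frames_per_drop = 8                      # chosen integer multiple
frame_rate_hz = frames_per_drop * jet_freq_hz

def first_drop_offsets(first_drop_positions, steady_drop_positions):
    """Per-phase position differences (same units as the inputs)."""
    return [a - b for a, b in zip(first_drop_positions, steady_drop_positions)]

print(frame_rate_hz)                                          # 8000.0
print(first_drop_offsets([0.0, 1.1, 2.3], [0.0, 1.0, 2.0]))   # growing offset
```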

  9. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Jacobson, B. O.

    1983-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation. Previously announced in STAR as N82-20543

  10. Synchronization of high speed framing camera and intense electron-beam accelerator

    NASA Astrophysics Data System (ADS)

    Cheng, Xin-Bing; Liu, Jin-Liang; Hong, Zhi-Qiang; Qian, Bao-Liang

    2012-06-01

    A new trigger program is proposed to realize the synchronization of a high speed framing camera (HSFC) and an intense electron-beam accelerator (IEBA). The trigger program, which includes acquisition of the light signal radiated from the main switch of the IEBA and a signal processing circuit, could provide a trigger signal with a rise time of 17 ns and an amplitude of about 5 V. First, the light signal was collected by an avalanche photodiode (APD) module, and the delay time between the output voltage of the APD and the load voltage of the IEBA was measured to be about 35 ns. Subsequently, the output voltage of the APD was processed further by the signal processing circuit to obtain the trigger signal. Finally, by combining the trigger program with an IEBA, the trigger program operated stably, and a delay time of 30 ns between the trigger signal of the HSFC and the output voltage of the IEBA was obtained. Meanwhile, when surface flashover occurred at the high density polyethylene sample, the delay time between the trigger signal of the HSFC and the flashover current was up to 150 ns, which satisfied the need for synchronization of the HSFC and IEBA. The experimental results proved that the trigger program could compensate for the trigger-signal processing time and the inherent delay time of the HSFC (the compensated time).

  11. Measurement of inkjet first-drop behavior using a high-speed camera.

    PubMed

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time is well known to be possibly different from that which occurs in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of a first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was determined to be an integer multiple of the jetting frequency, and in this manner, we can directly compare the droplet locations of each drop in order to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity. PMID:27036813

  12. Synchronization of high speed framing camera and intense electron-beam accelerator

    SciTech Connect

    Cheng Xinbing; Liu Jinliang; Hong Zhiqiang; Qian Baoliang

    2012-06-15

    A new trigger program is proposed to realize the synchronization of a high speed framing camera (HSFC) and an intense electron-beam accelerator (IEBA). The trigger program, which includes acquisition of the light signal radiated from the main switch of the IEBA and a signal processing circuit, could provide a trigger signal with a rise time of 17 ns and an amplitude of about 5 V. First, the light signal was collected by an avalanche photodiode (APD) module, and the delay time between the output voltage of the APD and the load voltage of the IEBA was measured to be about 35 ns. Subsequently, the output voltage of the APD was processed further by the signal processing circuit to obtain the trigger signal. Finally, by combining the trigger program with an IEBA, the trigger program operated stably, and a delay time of 30 ns between the trigger signal of the HSFC and the output voltage of the IEBA was obtained. Meanwhile, when surface flashover occurred at the high density polyethylene sample, the delay time between the trigger signal of the HSFC and the flashover current was up to 150 ns, which satisfied the need for synchronization of the HSFC and IEBA. The experimental results proved that the trigger program could compensate for the trigger-signal processing time and the inherent delay time of the HSFC (the compensated time).

  13. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  14. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  15. An Impact Velocity Device Design for Blood Spatter Pattern Generation with Considerations for High-Speed Video Analysis.

    PubMed

    Stotesbury, Theresa; Illes, Mike; Vreugdenhil, Andrew J

    2016-03-01

    A mechanical device that uses gravitational and spring compression forces to create spatter patterns of known impact velocities is presented and discussed. The custom-made device uses either two or four springs (k1 = 267.8 N/m, k2 = 535.5 N/m) in parallel to create seventeen reproducible impact velocities between 2.1 and 4.0 m/s. The impactor is held at several known spring extensions using an electromagnet. Trigger inputs to the high-speed video camera allow the user to control the magnet's release while capturing video footage simultaneously. A polycarbonate base is used to allow for simultaneous monitoring of the side and bottom views of the impact event. Twenty-four patterns were created across the impact velocity range and analyzed using HemoSpat. Area of origin estimations fell within an acceptable range (ΔXav = -5.5 ± 1.9 cm, ΔYav = -2.6 ± 2.8 cm, ΔZav = +5.5 ± 3.8 cm), supporting use of the distribution analysis in research or bloodstain pattern training. This work provides a framework for those interested in developing a robust impact device. PMID:27404625
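
    A hedged energy-balance estimate of the impact velocity is sketched below; the impactor mass, drop height, and spring extension are illustrative placeholders, since only the spring constants are quoted in the abstract, and losses are neglected.

```python
# Hedged energy-balance sketch for the spring/gravity impactor described above:
# (1/2) m v^2 = m g h + (1/2) k_eff x^2, neglecting friction. Mass m, drop
# height h, and spring extension x are illustrative placeholders; springs in
# parallel add, so k_eff for a two-spring configuration would be k1 + k2.
import math

G = 9.81  # m/s^2

def impact_velocity(m, x, h, k_eff):
    return math.sqrt(2.0 * G * h + k_eff * x**2 / m)

print(impact_velocity(m=1.0, x=0.08, h=0.15, k_eff=267.8 + 535.5))  # ~2.8 m/s
```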

  16. Practical use of high-speed cameras for research and development within the automotive industry: yesterday and today

    NASA Astrophysics Data System (ADS)

    Steinmetz, Klaus

    1995-05-01

    Within the automotive industry, especially for the development and improvement of safety systems, there are many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam', and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high speed cameras.

  17. High speed video analysis of rockfall fence system evaluation. Final report

    SciTech Connect

    Fry, D.A.; Lucero, J.P.

    1998-07-01

    Rockfall fence systems are used to protect motorists from rocks, dislodged from slopes near roadways, which would potentially roll onto the road at high speeds carrying significant energy. There is an unfortunate history of such rockfalls on unprotected roads causing fatalities and other damage. Los Alamos National Laboratory (LANL) personnel from the Engineering Science and Applications Division, Measurement Technology Group (ESA-MT), participated in a series of rockfall fence system tests at a test range in Rifle, Colorado during March 1998. The tests were for the evaluation and certification of four rockfall fence system designs of Chama Valley Manufacturing (CVM), a Small Business located in Chama, New Mexico. Also participating in the tests were the Colorado Department of Transportation (CDOT), who provided the test range and some heavy equipment support, and High Tech Construction, who installed the fence systems. LANL provided two high speed video systems and operators to record each individual rockfall on each fence system. From the recordings LANL then measured the linear and rotational velocities at impact for each rockfall. Using the LANL velocity results, CVM could then calculate the impact energy of each rockfall and therefore certify each design up to the maximum energy that each fence system could absorb without failure. LANL participated as an independent, impartial velocity measurement entity only and did not contribute to the fence systems' design or installation. CVM has published a more detailed final report covering all aspects of the project.

  18. High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques.

    PubMed

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras(1). The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in(1-5)). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame(6,7). Binary defocusing DFP methods can achieve even greater speeds(8). Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis(9), facial animation(10), cardiac mechanics studies(11), and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674
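
    For reference, the standard three-step (120°) phase-shifting retrieval commonly used in DFP systems is sketched below; it illustrates how three fringe images yield a wrapped phase map and is not necessarily the exact pipeline used by the authors.

```python
# Standard three-step (120 degree) phase-shifting retrieval, as commonly used
# in DFP systems; shown as a generic sketch, not the exact pipeline of this
# paper.
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Phase from three fringe images shifted by 2*pi/3 (arrays of equal shape)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# The wrapped phase (in [-pi, pi]) is then unwrapped and converted to depth via
# the calibrated camera-projector triangulation geometry.
```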

  19. Lifetime and structures of TLEs captured by high-speed camera on board aircraft

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sanmiya, Y.; Sato, M.; Kudo, T.; Inoue, T.

    2012-12-01

    Temporal development of sprite streamers is the manifestation of the local electric field and conductivity. Therefore, in order to understand the mechanisms of sprites, which show a large variety of temporal and spatial structures, detailed analysis of both fine and macro-structures with high time resolution is a key approach. However, due to the long distance from the optical equipment to the phenomena and to contamination by aerosols, it is not easy to get clear images of TLEs on the ground. In the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of the NHK Cosmic Shore project, was carried out with two jet airplanes under collaboration between NHK, Japan Broadcasting Corporation, and universities. On 8 nights out of 16 standing by, the jets took off from the airport near Denver, Colorado, and an airborne high speed camera captured over 60 TLE events at a frame rate of 8000-10,000 frames per second. Some of them show several tens of streamers in one sprite event, which repeatedly split at the downward-going ends of streamers or beads. The velocities of the bottom ends and the variations of their brightness were traced carefully. It was found that the top velocity is maintained only for the brightest beads, and others slow down just after splitting. Also, the whole luminosity of one sprite event has a short time duration with rapid downward motion if the charge moment change of the parent lightning is large. The relationship between diffuse glows, such as elves and sprite halos, and the subsequent discrete structure of sprite streamers is also examined. In most cases the halo and elves seem to show inhomogeneous structures before being accompanied by streamers, which develop into bright spots or streamers with acceleration of the velocity. These characteristics of velocity and lifetime of TLEs provide key information on their generation mechanism.

  20. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  1. High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques

    PubMed Central

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras [1]. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in [1-5]). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame [6,7]. Binary defocusing DFP methods can achieve even greater speeds [8]. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis [9], facial animation [10], cardiac mechanics studies [11], and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674

  2. Neural network method for characterizing video cameras

    NASA Astrophysics Data System (ADS)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, with the error back-propagation learning rule for training, is used as a nonlinear transformer to model the camera, realizing a mapping from the CIELAB color space to RGB color space. With a SONY video camera, D65 illuminant, a Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256³ RGB space.
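
    A minimal sketch of the idea, learning a CIELAB-to-RGB mapping with a small feedforward network, is shown below; the use of scikit-learn, the layer size, and the randomly generated stand-in chart data are assumptions, not the paper's setup.

```python
# Minimal sketch of learning a CIELAB -> RGB camera model with a feedforward
# network, in the spirit of the abstract. Layer size, stand-in data, and the
# use of scikit-learn are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lab_train = rng.uniform([0, -80, -80], [100, 80, 80], size=(410, 3))  # chart Lab values
rgb_train = rng.uniform(0, 255, size=(410, 3))                        # measured camera RGB

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
model.fit(lab_train, rgb_train)        # multi-output regression: Lab -> (R, G, B)

lab_test = rng.uniform([0, -80, -80], [100, 80, 80], size=(36, 3))
rgb_pred = model.predict(lab_test)     # predicted RGB for unseen chart patches
```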

  3. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, which is a rapid sequence of photographs that allow an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography 16, 35, and 70 mm film and framing rates between 64-12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  4. High speed imaging - An important industrial tool

    NASA Astrophysics Data System (ADS)

    Moore, Alton; Pinelli, Thomas E.

    1986-05-01

    High-speed photography, which is a rapid sequence of photographs that allow an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography 16, 35, and 70 mm film and framing rates between 64-12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  5. Eulerian frequency analysis of structural vibrations from high-speed video

    NASA Astrophysics Data System (ADS)

    Venanzoni, Andrea; De Ryck, Laurent; Cuenca, Jacques

    2016-06-01

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale - or level - can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content
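
    A minimal Eulerian sketch of the per-pixel analysis idea, mapping the dominant temporal frequency of each pixel's intensity history, is shown below; the paper's Laplacian-pyramid decomposition and angular filtering steps are not reproduced, and the synthetic data are placeholders.

```python
# Minimal Eulerian sketch: treat each pixel's gray-level time history
# independently and map its dominant temporal frequency. This illustrates the
# per-pixel analysis idea only; the paper's Laplacian-pyramid and angular
# filtering steps are not reproduced here.
import numpy as np

def dominant_frequency_map(frames, fps):
    """frames: array of shape (T, H, W), grayscale. Returns an (H, W) map in Hz."""
    frames = frames - frames.mean(axis=0)            # remove the static background
    spectrum = np.abs(np.fft.rfft(frames, axis=0))   # per-pixel amplitude spectrum
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:], axis=0) + 1]  # skip the DC bin

# Example with synthetic data: a 12 Hz flicker recorded at 1000 fps
t = np.arange(256) / 1000.0
frames = 0.5 + 0.1 * np.sin(2 * np.pi * 12 * t)[:, None, None] * np.ones((256, 8, 8))
print(dominant_frequency_map(frames, fps=1000.0)[0, 0])  # ~11.7 Hz (nearest FFT bin)
```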

  6. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  7. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in one Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted from video into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  8. Determining aerodynamic coefficients from high speed video of a free-flying model in a shock tunnel

    NASA Astrophysics Data System (ADS)

    Neely, Andrew J.; West, Ivan; Hruschka, Robert; Park, Gisu; Mudford, Neil R.

    2008-11-01

    This paper describes the application of the free flight technique to determine the aerodynamic coefficients of a model for the flow conditions produced in a shock tunnel. Sting-based force measurement techniques either lack the required temporal response or are restricted to large complex models. Additionally the free flight technique removes the flow interference produced by the sting that is present for these other techniques. Shock tunnel test flows present two major challenges to the practical implementation of the free flight technique. These are the millisecond-order duration of the test flows and the spatial and temporal nonuniformity of these flows. These challenges are overcome by the combination of an ultra-high speed digital video camera to record the trajectory, with spatial and temporal mapping of the test flow conditions. Use of a lightweight model ensures sufficient motion during the test time. The technique is demonstrated using the simple case of drag measurement on a spherical model, free flown in a Mach 10 shock tunnel condition.
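
    As a hedged illustration of how a drag coefficient can be extracted from such a free-flight trajectory, the sketch below fits the streamwise displacement to obtain the deceleration and inverts Newton's second law with the mapped local flow properties; all numerical values are placeholders, not the paper's data.

```python
# Hedged sketch of extracting a drag coefficient from a free-flight trajectory:
# fit the streamwise displacement to get the acceleration, then invert Newton's
# second law with the (mapped) local flow properties. All numbers are
# illustrative placeholders, not the paper's data.
import numpy as np

def drag_coefficient(t, x, m, rho, u_rel, diameter):
    """t, x: time (s) and streamwise position (m) tracked from the video."""
    a = 2.0 * np.polyfit(t, x, 2)[0]          # x ~ x0 + v0*t + (a/2)*t^2
    area = np.pi * (diameter / 2.0) ** 2
    return m * a / (0.5 * rho * u_rel**2 * area)

t = np.linspace(0.0, 1.5e-3, 30)              # ~1.5 ms of usable test time
x = 0.5 * 1.8e3 * t**2                        # placeholder: 1.8e3 m/s^2 acceleration
print(drag_coefficient(t, x, m=2e-3, rho=0.01, u_rel=1500.0, diameter=0.02))  # ~1
```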

  9. High-Speed Video-Oculography for Measuring Three-Dimensional Rotation Vectors of Eye Movements in Mice

    PubMed Central

    Takeda, Noriaki; Uno, Atsuhiko; Inohara, Hidenori; Shimada, Shoichi

    2016-01-01

    Background The mouse is the most commonly used animal model in biomedical research because of recent advances in molecular genetic techniques. Studies related to eye movement in mice are common in fields such as ophthalmology relating to vision, neuro-otology relating to the vestibulo-ocular reflex (VOR), neurology relating to the cerebellum’s role in movement, and psychology relating to attention. Recording eye movements in mice, however, is technically difficult. Methods We developed a new algorithm for analyzing the three-dimensional (3D) rotation vector of eye movement in mice using high-speed video-oculography (VOG). The algorithm made it possible to analyze the gain and phase of VOR using the eye’s angular velocity around the axis of eye rotation. Results When mice were rotated at 0.5 Hz and 2.5 Hz around the earth’s vertical axis with their heads in a 30° nose-down position, the vertical components of their left eye movements were in phase with the horizontal components. The VOR gain was 0.42 at 0.5 Hz and 0.74 at 2.5 Hz, and the phase lead of the eye movement against the turntable was 16.1° at 0.5 Hz and 4.88° at 2.5 Hz. Conclusions To the best of our knowledge, this is the first report of this algorithm being used to calculate a 3D rotation vector of eye movement in mice using high-speed VOG. We developed a technique for analyzing the 3D rotation vector of eye movements in mice with a high-speed infrared CCD camera. We concluded that the technique is suitable for analyzing eye movements in mice. We also include with this article C++ source code that can calculate the 3D rotation vectors of the eye position from the two-dimensional coordinates of the pupil and the iris freckle in the image. PMID:27023859

  10. High-speed video observations of the fine structure of a natural negative stepped leader at close distance

    NASA Astrophysics Data System (ADS)

    Qi, Qi; Lu, Weitao; Ma, Ying; Chen, Luwen; Zhang, Yijun; Rakov, Vladimir A.

    2016-09-01

    We present new high-speed video observations of a natural downward negative lightning flash that occurred at a close distance of 350 m. The stepped leader of this flash was imaged by three high-speed video cameras operating at framing rates of 1000, 10,000 and 50,000 frames per second, respectively. Synchronized electromagnetic field records were also obtained. Nine pronounced field pulses which we attributed to individual leader steps were recorded. The time intervals between the step pulses ranged from 13.9 to 23.9 μs, with a mean value of 17.4 μs. Further, for the first time, smaller pulses were observed between the pronounced step pulses in the magnetic field derivative records. Time intervals between the smaller pulses (indicative of intermittent processes between steps) ranged from 0.9 to 5.5 μs, with a mean of 2.2 μs and a standard deviation of 0.82 μs. A total of 23 luminous segments, commonly attributed to space stems/leaders, were captured. Their two-dimensional lengths varied from 1 to 13 m, with a mean of 5 m. The distances between the luminous segments and the existing leader channels ranged from 1 to 8 m, with a mean value of 4 m. Three possible scenarios of the evolution of space stems/leaders located beside the leader channel have been inferred: (A) the space stem/leader fails to make connection to the leader channel; (B) the space stem/leader connects to the existing leader channel, but may die off and be followed, tens of microseconds later, by a low luminosity streamer; (C) the space stem/leader connects to the existing channel and launches an upward-propagating luminosity wave. Weakly luminous filamentary structures, which we interpreted as corona streamers, were observed emanating from the leader tip. The stepped leader branches extended downward with speeds ranging from 4.1 × 10⁵ to 14.6 × 10⁵ m s⁻¹.

  11. High speed video shooting with continuous-wave laser illumination in laboratory modeling of wind - wave interaction

    NASA Astrophysics Data System (ADS)

    Kandaurov, Alexander; Troitskaya, Yuliya; Caulliez, Guillemette; Sergeev, Daniil; Vdovin, Maxim

    2014-05-01

    Three examples of the use of high-speed video filming in the investigation of wind-wave interaction in laboratory conditions are described. Experiments were carried out at the Wind-wave stratified flume of IAP RAS (length 10 m, cross section of air channel 0.4 x 0.4 m, wind velocity up to 24 m/s) and at the Large Air-Sea Interaction Facility (LASIF) - MIO/Luminy (length 40 m, cross section of air channel 3.2 x 1.6 m, wind velocity up to 10 m/s). A combination of PIV measurements, optical measurements of the water surface form and wave gages was used for detailed investigation of the characteristics of the wind flow over the water surface. The modified PIV method is based on the use of continuous-wave (CW) laser illumination of the airflow seeded by particles and high-speed video. During the experiments at the Wind-wave stratified flume of IAP RAS, a green (532 nm) CW laser with 1.5 W output power was used as the source for the light sheet. A high-speed digital camera Videosprint (VS-Fast) was used for taking visualized air flow images at a frame rate of 2000 Hz. The air velocity field was retrieved by PIV image processing with an adaptive cross-correlation method on a curvilinear grid following the surface wave profile. The mean wind velocity profiles were retrieved using conditional in-phase averaging as in [1]. In the experiments at the LASIF a more powerful Argon laser (4 W, CW) was used, as well as a high-speed camera with higher sensitivity and resolution: Optronics Camrecord CR3000x2, frame rate 3571 Hz, frame size 259×1696 px. In both series of experiments spherical 0.02 mm polyamide particles with an inertial time of 7 ms were used for seeding the airflow. A new particle seeding system based on air pressure is capable of injecting 2 g of particles per second for 1.3 - 2.4 s without flow disturbance. Used in LASIF, this system provided high particle density on PIV images. In combination with the high-resolution camera it allowed us to obtain momentum fluxes directly from
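
    A minimal single-window PIV correlation step is sketched below for illustration; it returns the pixel displacement at the cross-correlation peak between two interrogation windows and omits the adaptive, curvilinear-grid processing described in the abstract.

```python
# Minimal single-window PIV sketch: FFT-based cross-correlation of one
# interrogation window between two consecutive frames, returning the pixel
# displacement at the correlation peak. The paper's adaptive, curvilinear-grid
# processing is not reproduced here.
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """win_a, win_b: 2D grayscale interrogation windows of equal shape."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")   # cross-correlation map
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak[0] - (win_a.shape[0] - 1)                  # shift of b relative to a
    dx = peak[1] - (win_a.shape[1] - 1)
    return dx, dy

# Dividing the displacement by the interframe time (1/2000 s or 1/3571 s here)
# and by the image scale gives the local air velocity.
```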

  12. Using a high-speed movie camera to evaluate slice dropping in clinical image interpretation with stack mode viewers.

    PubMed

    Yakami, Masahiro; Yamamoto, Akira; Yanagisawa, Morio; Sekiguchi, Hiroyuki; Kubo, Takeshi; Togashi, Kaori

    2013-06-01

    The purpose of this study is to verify objectively the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communication in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4 % of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate. PMID:23053908
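
    The study's criterion, that paging must not exceed half of the display refresh rate, is easy to turn into a quick check. The sketch below is a hypothetical helper, not the authors' software: it computes the maximum safe paging speed and flags dropped slices from the sequence of superimposed slice numbers read back from a high-speed recording.

        def max_safe_paging_speed(refresh_rate_hz: float) -> float:
            """Paging speeds above half the display refresh rate risk slice dropping."""
            return refresh_rate_hz / 2.0

        def dropped_slices(observed_numbers: list[int], total_slices: int) -> list[int]:
            """Return the slice numbers that never appeared in the recorded frames."""
            seen = set(observed_numbers)
            return [n for n in range(1, total_slices + 1) if n not in seen]

        # Hypothetical example: a 60 Hz display, and a recording in which slice 17 was skipped.
        print(max_safe_paging_speed(60.0))           # -> 30.0 images per second
        observed = [n for n in range(1, 21) if n != 17]
        print(dropped_slices(observed, total_slices=20))   # -> [17]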

  13. High-speed communications enabling real-time video for battlefield commanders using tracked FSO

    NASA Astrophysics Data System (ADS)

    Al-Akkoumi, Mouhammad K.; Huck, Robert C.; Sluss, James J., Jr.

    2007-04-01

    Free Space Optics (FSO) technology is currently in use to solve the last-mile problem in telecommunication systems by offering higher bandwidth than wired or wireless connections when optical fiber is not available. Incorporating mobility into FSO technology can contribute to growth in its utility. Tracking and alignment are two big challenges for mobile FSO communications. In this paper, we present a theoretical approach for mobile FSO networks between Unmanned Aerial Vehicles (UAVs), manned aerial vehicles, and ground vehicles. We introduce tracking algorithms for achieving Line of Sight (LOS) connectivity and present analytical results. Two scenarios are studied in this paper: 1 - An unmanned aerial surveillance vehicle, the Global Hawk, with a stationary ground vehicle, an M1 Abrams Main Battle Tank, and 2 - a manned aerial surveillance vehicle, the E-3A Airborne Warning and Control System (AWACS), with an unmanned combat aerial vehicle, the Joint Unmanned Combat Air System (J-UCAS). After initial vehicle locations have been coordinated, the tracking algorithm will steer the gimbals to maintain connectivity between the two vehicles and allow high-speed communications to occur. Using this algorithm, data, voice, and video can be sent via the FSO connection from one vehicle to the other vehicle.

  14. The Mechanical Properties of Early Drosophila Embryos Measured by High-Speed Video Microrheology

    PubMed Central

    Wessel, Alok D.; Gumalla, Maheshwar; Grosshans, Jörg; Schmidt, Christoph F.

    2015-01-01

    In early development, Drosophila melanogaster embryos form a syncytium, i.e., multiplying nuclei are not yet separated by cell membranes, but are interconnected by cytoskeletal polymer networks consisting of actin and microtubules. Between division cycles 9 and 13, nuclei and cytoskeleton form a two-dimensional cortical layer. To probe the mechanical properties and dynamics of this self-organizing pre-tissue, we measured shear moduli in the embryo by high-speed video microrheology. We recorded position fluctuations of injected micron-sized fluorescent beads with kHz sampling frequencies and characterized the viscoelasticity of the embryo in different locations. Thermal fluctuations dominated over nonequilibrium activity for frequencies between 0.3 and 1000 Hz. Between the nuclear layer and the yolk, the cytoplasm was homogeneous and viscously dominated, with a viscosity three orders of magnitude higher than that of water. Within the nuclear layer we found an increase of the elastic and viscous moduli consistent with an increased microtubule density. Drug-interference experiments showed that microtubules contribute to the measured viscoelasticity inside the embryo whereas actin only plays a minor role in the regions outside of the actin caps that are closely associated with the nuclei. Measurements at different stages of the nuclear division cycle showed little variation. PMID:25902430
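
    A first step in video microrheology of this kind is the mean-squared displacement (MSD) of the tracked bead, from which viscoelastic moduli are subsequently derived. The sketch below only computes the MSD from a bead trajectory sampled at kHz rates; it is a generic illustration, not the authors' analysis pipeline, and the trajectory, step size and sampling rate are invented.

        import numpy as np

        def mean_squared_displacement(xy: np.ndarray, dt: float, max_lag: int):
            """MSD of a 2-D trajectory xy (shape N x 2) for lag times of 1..max_lag samples."""
            lags = np.arange(1, max_lag + 1)
            msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                            for lag in lags])
            return lags * dt, msd

        # Hypothetical diffusive trajectory sampled at 1 kHz.
        rng = np.random.default_rng(1)
        dt = 1e-3                                        # s, assumed sampling interval
        steps = rng.normal(scale=5e-9, size=(5000, 2))   # assumed nanometre-scale steps (m)
        trajectory = np.cumsum(steps, axis=0)
        tau, msd = mean_squared_displacement(trajectory, dt, max_lag=100)
        print(msd[:3])   # for a purely viscous medium, MSD grows roughly linearly with tau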

  15. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy

    NASA Astrophysics Data System (ADS)

    Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.

    2011-02-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second.
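
    One way to obtain a contraction propagation velocity of the kind reported above (around 10 mm/s) is to fit a line to the contraction onset times detected at several positions along the vessel; the slope of position versus onset time is the propagation speed, and its sign points back toward the pacemaker end. This is a simplified stand-in for the semiautomated algorithm described in the paper; the positions and onset frames below are invented, and only the 300 fps frame rate is taken from the abstract.

        import numpy as np

        # Axial positions of measurement sites along the vessel (mm) and the frame
        # index at which contraction onset was detected at each site.
        positions_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        onset_frames = np.array([12, 27, 42, 58, 72])
        fps = 300.0

        onset_times_s = onset_frames / fps
        slope, intercept = np.polyfit(onset_times_s, positions_mm, 1)
        print(f"propagation velocity ~ {slope:.1f} mm/s")   # ~10 mm/s for these numbers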

  16. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy.

    PubMed

    Akl, Tony J; Nepiyushchikh, Zhanna V; Gashev, Anatoliy A; Zawieja, David C; Coté, Gerard L

    2011-02-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second. PMID:21361700

  17. Studying the internal ballistics of a combustion-driven potato cannon using high-speed video

    NASA Astrophysics Data System (ADS)

    Courtney, E. D. S.; Courtney, M. W.

    2013-07-01

    A potato cannon was designed to accommodate several different experimental propellants and have a transparent barrel so the movement of the projectile could be recorded on high-speed video (at 2000 frames per second). Five experimental propellants were tested: propane (C3H8), acetylene (C2H2), ethanol (C2H6O), methanol (CH4O) and butane (C4H10). The quantity of each experimental propellant was calculated to approximate a stoichiometric mixture, taking into account the upper and lower flammability limits, which in turn were affected by the volume of the combustion chamber. Cylindrical projectiles were cut from raw potatoes so that there was an airtight fit, and each weighed 50 (± 0.5) g. For each trial, position as a function of time was determined via frame-by-frame analysis. Five trials were made for each experimental propellant and the results analyzed to compute velocity and acceleration as functions of time. Additional quantities, including the force on the potato and the pressure applied to the potato, were also computed. For each experimental propellant, average velocity versus barrel position curves were plotted. The most effective experimental propellant was defined as that which accelerated the potato to the highest muzzle velocity. The experimental propellant acetylene performed the best on average (138.1 m s⁻¹), followed by methanol (48.2 m s⁻¹), butane (34.6 m s⁻¹), ethanol (33.3 m s⁻¹) and propane (27.9 m s⁻¹), respectively.
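
    Frame-by-frame position data like those described above can be differentiated numerically to obtain velocity and acceleration, and combined with the projectile mass to estimate force and pressure. The sketch below assumes hypothetical position samples at the quoted 2000 fps and the 50 g projectile mass; the bore area is an assumption, and this is an illustration of the analysis rather than the authors' code.

        import numpy as np

        fps = 2000.0                      # frames per second, from the abstract
        dt = 1.0 / fps
        mass_kg = 0.050                   # 50 g projectile, from the abstract
        barrel_area_m2 = 3.14e-3          # assumed ~63 mm bore, for the pressure estimate

        # Hypothetical positions of the projectile along the barrel (m), one per frame.
        x = np.array([0.000, 0.002, 0.008, 0.020, 0.040, 0.068, 0.104, 0.148])

        v = np.gradient(x, dt)            # central-difference velocity (m/s)
        a = np.gradient(v, dt)            # acceleration (m/s^2)
        force = mass_kg * a               # net force on the projectile (N)
        pressure = force / barrel_area_m2 # rough driving pressure (Pa)
        print(v.round(1), a.round(0), pressure.round(0), sep="\n")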

  18. High-speed video evidence of a dart leader with bidirectional development

    NASA Astrophysics Data System (ADS)

    Jiang, Rubin; Wu, Zhijun; Qie, Xiushu; Wang, Dongfang; Liu, Mingyuan

    2014-07-01

    An upward negative cloud-to-ground lightning flash initiated from a high structure was detected by a high-speed camera operated at 10,000 fps, together with the coordinated measurement of electric field changes. Bidirectional propagation of a dart leader developing through the preconditioned channel was observed for the first time by optical means. The leader initially propagated downward through the upper channel with decreasing luminosity and speed and terminated at an altitude of about 2200 m. Subsequently, it restarted the development with both upward and downward channel extensions. The 2-D partial speed of the leader's upward propagation with positive polarity ranged between 3.2 × 10⁶ m/s and 1.1 × 10⁷ m/s with an average value of 6.4 × 10⁶ m/s, while the speeds of the downward propagation with negative polarity ranged between 1.0 × 10⁶ and 3.2 × 10⁶ m/s with an average value of 2.2 × 10⁶ m/s. The downward propagation of the bidirectional leader eventually reached the ground and induced a subsequent return stroke.

  19. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, a low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) camera cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution several times, or even hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023

  20. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, a low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) camera cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution several times, or even hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023

  1. Temperature evaluation of a hyper-rapid plasma jet by the method of high-speed video recording

    NASA Astrophysics Data System (ADS)

    Rif, A. E.; Cherevko, V. V.; Ivashutenko, A. S.; Martyushev, N. V.; Nikonova, N. Ye

    2016-04-01

    This paper presents a procedure for the comparative evaluation of plasma temperature using high-speed video filming of fast processes. Using the 'Image J' software, it has been established that the maximum plasma temperature exceeds 30 000 K for the hypervelocity electric-discharge plasma generated by a coaxial magnetoplasma accelerator.

  2. Video Analysis with a Web Camera

    NASA Astrophysics Data System (ADS)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools, such as Videopoint and Doug Brown's freely downloadable Tracker, could also be used. I purchased Logitech's QuickCam Pro 4000 web camera for $99 after Rick Sorensen at Vernier Software and Technology recommended it for computers using a Windows platform. Once I had mounted the web camera on a mobile computer with Velcro and installed the software, I was ready to capture motion video and analyze it.

  3. High-speed camera observation of multi-component droplet coagulation in an ultrasonic standing wave field

    NASA Astrophysics Data System (ADS)

    Reißenweber, Marina; Krempel, Sandro; Lindner, Gerhard

    2013-12-01

    With an acoustic levitator small particles can be aggregated near the nodes of a standing pressure field. Furthermore it is possible to atomize liquids on a vibrating surface. We used a combination of both mechanisms and atomized several liquids simultaneously, consecutively and emulsified in the ultrasonic field. Using a high-speed camera we observed the coagulation of the spray droplets into single large levitated droplets resolved in space and time. In case of subsequent atomization of two components the spray droplets of the second component were deposited on the surface of the previously coagulated droplet of the first component without mixing.

  4. Optimizing the input and output transmission lines that gate the microchannel plate in a high-speed framing camera

    NASA Astrophysics Data System (ADS)

    Lugten, John B.; Brown, Charles G.; Piston, Kenneth W.; Beeman, Bart V.; Allen, Fred V.; Boyle, Dustin T.; Brown, Christopher G.; Cruz, Jason G.; Kittle, Douglas R.; Lumbard, Alexander A.; Torres, Peter; Hargrove, Dana R.; Benedetti, Laura R.; Bell, Perry M.

    2015-08-01

    We present new designs for the launch and receiver boards used in a high speed x-ray framing camera at the National Ignition Facility. The new launch board uses a Klopfenstein taper to match the 50 ohm input impedance to the ~10 ohm microchannel plate. The new receiver board incorporates design changes resulting in an output monitor pulse shape that more accurately represents the pulse shape at the input and across the microchannel plate; this is valuable for assessing and monitoring the electrical performance of the assembled framing camera head. The launch and receiver boards maximize power coupling to the microchannel plate, minimize cross talk between channels, and minimize reflections. We discuss some of the design tradeoffs we explored, and present modeling results and measured performance. We also present our methods for dealing with the non-ideal behavior of coupling capacitors and terminating resistors. We compare the performance of these new designs to that of some earlier designs.

  5. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination.

    PubMed

    Bodini, I; Sansoni, G; Lancini, M; Pasinetti, S; Docchio, F

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials for improved lifetime and performance, which are able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials, in a laboratory test bench. 3D macro-topography and angular position measurements of the specimen are performed simultaneously, together with the acquisition of surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very short camera exposure time makes it possible to obtain blur-free images with excellent definition. The system will be described with the aid of end-cycle specimens, as well as of in-test specimens. PMID:27587125

  6. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination

    NASA Astrophysics Data System (ADS)

    Bodini, I.; Sansoni, G.; Lancini, M.; Pasinetti, S.; Docchio, F.

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials for improved lifetime and performance, which are able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials, in a laboratory test bench. 3D macro-topography and angular position measurements of the specimen are performed simultaneously, together with the acquisition of surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very short camera exposure time makes it possible to obtain blur-free images with excellent definition. The system will be described with the aid of end-cycle specimens, as well as of in-test specimens.

  7. High-speed 1280x1024 camera with 12-Gbyte SDRAM memory

    NASA Astrophysics Data System (ADS)

    Postnikov, Konstantin O.; Yakovlev, Alexey V.

    2001-04-01

    A 600 frame/s camera based on a 1.3-Megapixel CMOS sensor (PBMV13) with a wide digital data output bus (10 parallel outputs of 10-bit words) was developed using high-capacity SDRAM memory. This architecture allows 10 seconds of continuous recording of digital data from the sensor at 600 frames per second into a memory box with up to 12 1-Gbyte SDRAM modules. Acquired data are transmitted through a fibre-optic channel, connected to the camera via an FPDP interface, to a PC-type computer at a speed of 100 Mbyte per second over fibre cable lengths of up to 10 km. All camera settings, such as shutter time, frame rate, image size, and presets for changing integration time and frame rate, can be controlled by software. Camera specifications: shutter time - from 3.3 µs to full frame, in 1.6 µs steps at 600 fps and then in 1-frame steps down to 16 ms; frame rate - from 60 fps to 600 fps; image size - 1280x1024, 1280x512, 1280x256, or 1280x128, changeable on the fly via a preset two-step table; memory capacity - depends on frame size (6000 frames at 1280x1024 or 48000 frames at 1280x128 resolution). The software can work with monochrome or color versions of the MV13 sensor.
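
    The relationship between memory size, frame size and record length quoted above can be checked with simple arithmetic. The sketch below assumes 2 bytes stored per 10-bit pixel and ignores packing and header overhead, so it only reproduces the order of magnitude of the quoted frame counts; the exact on-board storage format is not stated in the abstract.

        def frames_in_memory(memory_bytes: float, width: int, height: int,
                             bytes_per_pixel: float = 2.0) -> float:
            """Approximate number of frames that fit in the on-board memory."""
            return memory_bytes / (width * height * bytes_per_pixel)

        memory = 12 * 1024**3                       # 12 Gbytes of SDRAM
        for w, h in [(1280, 1024), (1280, 128)]:
            n = frames_in_memory(memory, w, h)
            seconds = n / 600.0                     # at 600 frames per second
            print(f"{w}x{h}: ~{n:.0f} frames, ~{seconds:.0f} s at 600 fps")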

  8. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, making for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen. Digital-to-analog converters provide digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user, but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube’s phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time. During this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electron’s path in horizontal and vertical (x axis and y axis, respectively) directions. Hence, each frame’s electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates. To prevent the phosphor from being exposed between frames, the image tube

  9. A study on ice crystal formation behavior at intracellular freezing of plant cells using a high-speed camera.

    PubMed

    Ninagawa, Takako; Eguchi, Akemi; Kawamura, Yukio; Konishi, Tadashi; Narumi, Akira

    2016-08-01

    Intracellular ice crystal formation (IIF) causes several problems to cryopreservation, and it is the key to developing improved cryopreservation techniques that can ensure the long-term preservation of living tissues. Therefore, the ability to capture clear intracellular freezing images is important for understanding both the occurrence and the IIF behavior. The authors developed a new cryomicroscopic system that was equipped with a high-speed camera for this study and successfully used this to capture clearer images of the IIF process in the epidermal tissues of strawberry geranium (Saxifraga stolonifera Curtis) leaves. This system was then used to examine patterns in the location and formation of intracellular ice crystals and to evaluate the degree of cell deformation because of ice crystals inside the cell and the growing rate and grain size of intracellular ice crystals at various cooling rates. The results showed that an increase in cooling rate influenced the formation pattern of intracellular ice crystals but had less of an effect on their location. Moreover, it reduced the degree of supercooling at the onset of intracellular freezing and the degree of cell deformation; the characteristic grain size of intracellular ice crystals was also reduced, but the growing rate of intracellular ice crystals was increased. Thus, the high-speed camera images could expose these changes in IIF behaviors with an increase in the cooling rate, and these are believed to have been caused by an increase in the degree of supercooling. PMID:27343136

  10. Machine Vision Techniques For High Speed Videography

    NASA Astrophysics Data System (ADS)

    Hunter, David B.

    1984-11-01

    The priority associated with U.S. efforts to increase productivity has led to, among other things, the development of Machine Vision systems for use in manufacturing automation requirements. Many such systems combine solid state television cameras and data processing equipment to facilitate high speed, on-line inspection and real time dimensional measurement of parts and assemblies. These parts are often randomly oriented and spaced on a conveyor belt under continuous motion. Television imagery of high speed events has historically been achieved by use of pulsed (strobe) illumination or high speed shutter techniques synchronized with a camera's vertical blanking to separate write and read cycle operation. Lack of synchronization between part position and camera scanning in most on-line applications precludes use of this vertical interval illumination technique. Alternatively, many Machine Vision cameras incorporate special techniques for asynchronous, stop-motion imaging. Such cameras are capable of imaging parts asynchronously at rates approaching 60 hertz while remaining compatible with standard video recording units. Techniques for asynchronous, stop-motion imaging have not been incorporated in cameras used for High Speed Videography. Imaging of these events has alternatively been obtained through the utilization of special, high frame rate cameras to minimize motion during the frame interval. High frame rate cameras must undoubtedly be utilized for recording of high speed events occurring at high repetition rates. However, such cameras require very specialized, and often expensive, video recording equipment. It seems, therefore, that Machine Vision cameras with capability for asynchronous, stop-motion imaging represent a viable approach for cost effective video recording of high speed events occurring at repetition rates up to 60 hertz.

  11. Investigations of some aspects of the spray process in a single wire arc plasma spray system using high speed camera.

    PubMed

    Tiwari, N; Sahasrabudhe, S N; Tak, A K; Barve, D N; Das, A K

    2012-02-01

    A high speed camera has been used to record and analyze the evolution as well as particle behavior in a single wire arc plasma spray torch. Commercially available systems (spray watch, DPV 2000, etc.) focus onto a small area in the spray jet. They are not designed for tracking a single particle from the torch to the substrate. Using high speed camera, individual particles were tracked and their velocities were measured at various distances from the spray torch. Particle velocity information at different distances from the nozzle of the torch is very important to decide correct substrate position for the good quality of coating. The analysis of the images has revealed the details of the process of arc attachment to wire, melting of the wire, and detachment of the molten mass from the tip. Images of the wire and the arc have been recorded for different wire feed rates, gas flow rates, and torch powers, to determine compatible wire feed rates. High speed imaging of particle trajectories has been used for particle velocity determination using time of flight method. It was observed that the ripple in the power supply of the torch leads to large variation of instantaneous power fed to the torch. This affects the velocity of the spray particles generated at different times within one cycle of the ripple. It is shown that the velocity of a spray particle depends on the instantaneous torch power at the time of its generation. This correlation was established by experimental evidence in this paper. Once the particles leave the plasma jet, their forward speeds were found to be more or less invariant beyond 40 mm up to 500 mm from the nozzle exit. PMID:22380128
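
    The time-of-flight velocity estimate mentioned above reduces to dividing the distance a tracked particle travels between frames by the elapsed frame time. A minimal sketch follows, with an assumed frame rate and image scale (the abstract does not state the values used here), not the authors' analysis code.

        def particle_velocity(pixel_displacement: float, metres_per_pixel: float,
                              frames_elapsed: int, frame_rate_hz: float) -> float:
            """Time-of-flight speed of a particle tracked across consecutive frames."""
            distance_m = pixel_displacement * metres_per_pixel
            time_s = frames_elapsed / frame_rate_hz
            return distance_m / time_s

        # Hypothetical numbers: a particle moving 45 px over 3 frames at 10 kfps,
        # with an image scale of 0.1 mm per pixel.
        print(particle_velocity(45, 1e-4, 3, 10_000.0))   # -> 15.0 m/s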

  12. An unmanned watching system using video cameras

    SciTech Connect

    Kaneda, K.; Nakamae, E.; Takahashi, E.; Yazawa, K.

    1990-04-01

    Techniques for detecting intruders at a remote location, such as a power plant or substation, or in an unmanned building at night, are significant in the field of unmanned watching systems. This article describes an unmanned watching system to detect trespassers in real time, applicable both indoors and outdoors, based on image processing. The main part of the proposed system consists of a video camera, an image processor and a microprocessor. Images are input from the video camera to the image processor every 1/60 second, and objects which enter the image are detected by measuring changes of intensity level in selected sensor areas. This article discusses the system configuration and the detection method. Experimental results under a range of environmental conditions are given.
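
    The detection principle described above, watching for intensity changes in selected sensor areas between successive video frames, can be sketched as simple frame differencing with a threshold. The region coordinates and thresholds below are hypothetical, and this is an illustration rather than the original system's implementation.

        import numpy as np

        def intruder_in_region(prev_frame: np.ndarray, cur_frame: np.ndarray,
                               region: tuple, level_change: int = 25,
                               min_changed_pixels: int = 50) -> bool:
            """Flag an intrusion if enough pixels in the sensor area changed intensity."""
            y0, y1, x0, x1 = region
            diff = np.abs(cur_frame[y0:y1, x0:x1].astype(int)
                          - prev_frame[y0:y1, x0:x1].astype(int))
            return int(np.count_nonzero(diff > level_change)) >= min_changed_pixels

        # Hypothetical 8-bit frames with a bright object entering the watched area.
        prev = np.full((480, 640), 40, dtype=np.uint8)
        cur = prev.copy()
        cur[200:260, 300:360] = 200
        print(intruder_in_region(prev, cur, region=(180, 300, 280, 400)))   # -> True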

  13. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  14. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
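
    The core of the method described above, summing gray levels over a small region of interest in each frame and taking an FFT of the resulting time series, can be sketched as follows. The Nyquist check at the end reflects the paper's point that true vibration frequencies above half the frame rate fold back as non-physical (aliased) modes; the signal, frame rate and region of interest here are synthetic, not the paper's data.

        import numpy as np

        frame_rate = 240.0                       # assumed camera frame rate (fps)
        t = np.arange(0, 2.0, 1.0 / frame_rate)  # 2 s of frames

        # Synthetic stand-in for the summed ROI gray level: a 25 Hz vibration plus noise.
        roi_sum = 1000 + 80 * np.sin(2 * np.pi * 25.0 * t) + np.random.normal(0, 2, t.size)

        spectrum = np.abs(np.fft.rfft(roi_sum - roi_sum.mean()))
        freqs = np.fft.rfftfreq(t.size, d=1.0 / frame_rate)
        dominant = freqs[np.argmax(spectrum)]
        print(f"dominant mode ~ {dominant:.1f} Hz")

        # Any true vibration above frame_rate / 2 would alias into this band,
        # appearing as a spurious (non-physical) mode.
        print("Nyquist limit:", frame_rate / 2, "Hz")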

  15. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026

  16. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm, and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
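
    Indirect time-of-flight ranging of the kind used by this sensor recovers distance from the phase shift between the modulated illumination and the detected signal. A common four-sample formulation is sketched below; the exact demodulation scheme and sign convention of the SPAD camera may differ, and the correlation samples and modulation frequency are purely illustrative.

        import math

        def itof_distance(c0: float, c1: float, c2: float, c3: float,
                          mod_freq_hz: float) -> float:
            """Distance from four correlation samples taken at 0/90/180/270 degree offsets."""
            c = 299_792_458.0
            phase = math.atan2(c3 - c1, c0 - c2)      # one common sign convention
            phase %= 2 * math.pi
            return c * phase / (4 * math.pi * mod_freq_hz)

        # Illustrative samples and a 25 MHz modulation (unambiguous range ~6 m).
        print(f"{itof_distance(80, 40, 20, 60, 25e6):.2f} m")   # ~0.31 m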

  17. Ultra-high speed and low latency broadband digital video transport

    NASA Astrophysics Data System (ADS)

    Stufflebeam, Joseph L.; Remley, Dennis M.; Sullivan, Anthony; Gurrola, Hector

    2004-07-01

    Various approaches for transporting digital video over Ethernet and SONET networks are presented. Commercial analog and digital frame grabbers are utilized, as well as software running under Microsoft Windows 2000/XP. No other specialized hardware is required. A network configuration using independent VLANs for video channels provides efficient transport for high bandwidth data. A framework is described for implementing both uncompressed and compressed streaming with standard and non-standard video. NTSC video is handled as well as other formats that include high resolution CMOS, high bit-depth infrared, and high frame rate parallel digital. End-to-end latencies of less than 200 msec are achieved.

  18. High speed wide field CMOS camera for Transneptunian Automatic Occultation Survey

    NASA Astrophysics Data System (ADS)

    Wang, Shiang-Yu; Geary, John C.; Amato, Stephen M.; Hu, Yen-Sang; Ling, Hung-Hsu; Huang, Pin-Jie; Furesz, Gabor; Chen, Hsin-Yo; Chang, Yin-Chang; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy

    2014-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a three robotic telescope project to detect the stellar occultation events generated by Trans-Neptunian Objects (TNOs). The TAOS II project aims to monitor about 10000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7 degree diameter field of view (FoV) of the 1.3 m telescope with 10 mosaic 4.5k x 2k CMOS sensors. The new CMOS sensor has a back-illumination thinned structure and high sensitivity to provide performance similar to that of back-illumination thinned CCDs. The sensor provides two parallel and eight serial decoders so the regions of interest can be addressed and read out separately through different output channels efficiently. The pixel scale is about 0.6"/pix with the 16 μm pixels. The sensors, mounted on a single Invar plate, are cooled to the operation temperature of about 200 K by a cryogenic cooler. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The deformation of the cold plate is less than 10 μm to ensure the sensor surface is always within ±40 μm of the focus range. The control electronics consists of an analog part and a Xilinx FPGA based digital circuit. For each field star, an 8×8 pixel box will be read out. The pixel rate for each channel is about 1 Mpix/s and the total pixel rate for each camera is about 80 Mpix/s. The FPGA module will calculate the total flux and also the centroid coordinates for every field star in each exposure.

  19. An Investigation On The Problems Of The Intermittent High-Speed Camera Of 360 Frames/S

    NASA Astrophysics Data System (ADS)

    Zhihong, Rong

    1989-06-01

    This paper discusses several problems with the JX-300 intermittent synchronous high-speed camera developed by the Institute of Optics and Electronics (IOE), Academia Sinica, in 1985. It is shown that when the framing rate is no more than 120 frames/s, relatively high reliability is obtained, owing to the low acceleration of the moving elements, weak intermittent pulldown strength, low-frequency vibration, etc. When the framing rate increases to over 200 frames/s, the photographic resolving power, as well as the film-running reliability, decreases due to the dramatic increase in vibration and pulldown strength, similar to what occurs in stationary photography. This becomes worse when the framing rate approaches 300 frames/s. Therefore, careful choice of a claw mechanism capable of framing rates over 300 frames/s, together with a series of technical measures, is particularly important for a camera to reliably obtain a sharp object image; otherwise an intermittent camera can hardly reach a framing rate of 300 frames/s. Even if this framing rate is attained, the image quality is degraded and the mechanism is rapidly worn out by the high vibration.

  20. Characterization of calculus migration during Ho:YAG laser lithotripsy by high speed camera using suspended pendulum method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian James; Rajabhandharaks, Danop; Xuan, Jason Rongwei; Chia, Ray W. J.; Hasenberg, Tom

    2014-03-01

    Calculus migration is a common problem during the ureteroscopic laser lithotripsy procedure to treat urolithiasis. A conventional experimental method to characterize calculus migration utilized a hosting container (e.g. a "V" groove or a test tube). These methods, however, demonstrated large variation and poor detectability, possibly attributable to friction between the calculus and the container on which the calculus was situated. In this study, calculus migration was investigated using a pendulum model suspended under water to eliminate the aforementioned friction. A high-speed camera was used to study the movement of the calculus, covering the zeroth order (displacement), first order (speed) and second order (acceleration). A commercial pulsed Ho:YAG laser at 2.1 µm, a 365-µm core fiber, and a calculus phantom (Plaster of Paris, 10×10×10 mm cube) were utilized to mimic the laser lithotripsy procedure. The phantom was hung on a stainless steel bar and irradiated by the laser at 0.5, 1.0 and 1.5 J energy per pulse at 10 Hz for 1 second (i.e., 5, 10, and 15 W). Movement of the phantom was recorded by a high-speed camera with a frame rate of 10,000 FPS. Maximum displacement was 1.25 ± 0.10, 3.01 ± 0.52, and 4.37 ± 0.58 mm for 0.5, 1, and 1.5 J energy per pulse, respectively. Using the same laser power, the conventional method showed <0.5 mm total displacement. When the phantom size was reduced to 5×5×5 mm (1/8 of the volume), the displacement was very inconsistent. The results suggested that using the pendulum model to eliminate the friction improved the sensitivity and repeatability of the experiment. Detailed investigation of calculus movement and other causes of experimental variation will be conducted in a future study.

  1. Optical engineering application of modeled photosynthetically active radiation (PAR) for high-speed digital camera dynamic range optimization

    NASA Astrophysics Data System (ADS)

    Alves, James; Gueymard, Christian A.

    2009-08-01

    As efforts to create accurate yet computationally efficient estimation models for clear-sky photosynthetically active solar radiation (PAR) have succeeded, the range of practical engineering applications where these models can be successfully applied has increased. This paper describes a novel application of the REST2 radiative model (developed by the second author) in optical engineering. The PAR predictions in this application are used to predict the possible range of instantaneous irradiances that could impinge on the image plane of a stationary video camera designed to image license plates on moving vehicles. The overall spectral response of the camera (including lens and optical filters) is similar to the 400-700 nm PAR range, thereby making PAR irradiance (rather than luminance) predictions most suitable for this application. The accuracy of the REST2 irradiance predictions for horizontal surfaces, coupled with another radiative model to obtain irradiances on vertical surfaces, and to standard optical image formation models, enable setting the dynamic range controls of the camera to ensure that the license plate images are legible (unsaturated with adequate contrast) regardless of the time of day, sky condition, or vehicle speed. A brief description of how these radiative models are utilized as part of the camera control algorithm is provided. Several comparisons of the irradiance predictions derived from the radiative model versus actual PAR measurements under varying sky conditions with three Licor sensors (one horizontal and two vertical) have been made and showed good agreement. Various camera-to-plate geometries and compass headings have been considered in these comparisons. Time-lapse sequences of license plate images taken with the camera under various sky conditions over a 30-day period are also analyzed. They demonstrate the success of the approach at creating legible plate images under highly variable lighting, which is the main goal of this

  2. Thermal/structural/optical integrated design for optical window of a high-speed aerial optical camera

    NASA Astrophysics Data System (ADS)

    Zhang, Gaopeng; Yang, Hongtao; Mei, Chao; Shi, Kui; Wu, Dengshan; Qiao, Mingrui

    2015-10-01

    In order to obtain high quality images from an aero optical remote sensor, it is important to analyze its thermal-optical performance under conditions of high speed and high altitude. Especially for a key imaging assembly such as the optical window, temperature variation and temperature gradients can result in defocus and aberrations in the optical system, which lead to poor image quality. In order to improve the optical performance of a high-speed aerial camera optical window, a thermal/structural/optical integrated design method is developed. Firstly, the flight environment of the optical window is analyzed. Based on the theory of aerodynamics and heat transfer, the convection heat transfer coefficient is calculated. The temperature distribution of the optical window is simulated with finite element analysis software, and the maximum temperature difference between the inside and outside of the optical window is obtained. Then the deformation of the optical window under the boundary condition of this maximum temperature difference is calculated. The optical window surface deformation is fitted with Zernike polynomials as the interface, and the calculated Zernike fitting coefficients are imported into and analyzed with the CodeV optical software. Finally, the transfer function diagrams of the optical system under the temperature field are comparatively analyzed. By comparing and analyzing the results, it is found that the optical path difference caused by thermal deformation of the optical window is 149.6 nm, which satisfies PV ≤ 1/4 λ. The simulation result meets the requirements of the optical design very well. The above study can be used as an important reference for other optical window designs.

  3. The issue of precision in the measurement of soil splash by a single drop using a high speed camera

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Bieganowski, Andrzej; Polakowski, Cezary; Sochan, Agata

    2014-05-01

    Soil, being the top layer of the Earth's crust and a component of many ecosystems, undergoes continuous degradation. One of the forms of this degradation is water erosion. Erosion is a physical degradation process affecting the soil surface. This process affects not only the environment, but also the productivity and profitability of agriculture. Therefore, understanding the mechanisms of erosion and preventing it is important for agriculture and the economy. Erosion has been the subject of many studies among various research teams around the world. The splash is the first stage of water erosion. Splash erosion can be characterised by two subprocesses: detachment of a particle from the soil surface and the transport of the particle in different directions. The aim of this study was to evaluate the reproducibility of the soil splash phenomenon that occurs as a result of the fall of a single drop. Using high-speed cameras, we measured the reproducibility of recorded splash parameters; these included the number and surface of detached particles and the width of the crown formed as a result of the splash. Measurements were carried out on soil samples with different textures taken from the topsoil of two soil profiles in south-eastern Poland. After collection, these samples were dried at room temperature, sieved through a 2 mm sieve, and then moistened to three different moisture conditions. Drops of water with a diameter of 4.2 mm fell freely from a height of 1.5 m. Measurements were recorded using a high-speed camera (Vision Research MIRO M310) and the data were recorded at 2000 frames per second. The number and surface of detached particles and the resulting width of the crown during the splash were analysed. The measurements demonstrated that: - Soil splash caused by the first drop striking the surface was significantly different from the splash caused by the impact of subsequent drops. This difference was due to the fact that less moisture was present at the time
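
    For the drop geometry used here (a 4.2 mm drop released from 1.5 m), the impact velocity and kinetic energy that drive the splash follow from elementary free-fall relations, neglecting air drag (a real drop falling 1.5 m will be somewhat slower). This minimal sketch only illustrates that estimate and is not part of the authors' analysis.

        import math

        g = 9.81                      # m/s^2
        height = 1.5                  # m, fall height from the abstract
        diameter = 4.2e-3             # m, drop diameter from the abstract
        rho_water = 1000.0            # kg/m^3

        velocity = math.sqrt(2 * g * height)                 # ~5.4 m/s, drag neglected
        mass = rho_water * math.pi * diameter**3 / 6         # ~3.9e-5 kg
        kinetic_energy = 0.5 * mass * velocity**2            # ~5.7e-4 J

        print(f"impact velocity ~ {velocity:.1f} m/s, "
              f"kinetic energy ~ {kinetic_energy * 1e3:.2f} mJ")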

  4. Large Area Divertor Temperature Measurements Using A High-speed Camera With Near-infrared FiIters in NSTX

    SciTech Connect

    Lyons, B C; Zweben, S J; Gray, T K; Hosea, J; Kaita, R; Kugel, H W; Maqueda, R J; McLean, A G; Roquemore, A L; Soukhanovskii, V A

    2011-04-05

    Fast cameras already installed on the National Spherical Torus Experiment (NSTX) have been equipped with near-infrared (NIR) filters in order to measure the surface temperature in the lower divertor region. Such a system provides a unique combination of high speed (> 50 kHz) and wide field of view (> 50% of the divertor). Benchtop calibrations demonstrated the system's ability to measure thermal emission down to 330 °C. There is also, however, significant plasma light background in NSTX. Without improvements in background reduction, the current system is incapable of measuring signals below the background-equivalent temperature (600-700 °C). Thermal signatures have been detected in cases of extreme divertor heating. It is observed that the divertor can reach temperatures around 800 °C when high harmonic fast wave (HHFW) heating is used. These temperature profiles were fit using a simple heat diffusion code, providing a measurement of the heat flux to the divertor. Comparisons to other infrared thermography systems on NSTX are made.

  5. Scheimpflug camera in the quantitative assessment of reproducibility of high-speed corneal deformation during intraocular pressure measurement.

    PubMed

    Koprowski, Robert; Ambrósio, Renato; Reisdorf, Sven

    2015-11-01

    The paper presents an original analysis method for corneal deformation images from the ultra-high-speed Scheimpflug camera (Corvis ST tonometer). Particular attention was paid to deformation frequencies exceeding 100 Hz and their reproducibility in healthy subjects examined repeatedly. A total of 4200 images with a resolution of 200 × 576 pixels were recorded. The data derived from 3 consecutive measurements from 10 volunteers with normal corneas. A new image analysis algorithm, written in Matlab with the use of the Image Processing package, adaptive image filtering, morphological analysis methods and the fast Fourier transform, was proposed. The following results were obtained: (1) reproducibility of the eyeball reaction in healthy subjects with a precision of 10%; (2) corneal vibrations with a frequency of 369 ± 65 Hz and (3) an amplitude of 7.86 ± 1.28 µm; (4) a phase shift between two parts of the cornea of the same subject of about 150°. Results are presented for the image sequence analysis of one subject and for deformations with a corneal frequency response above 100 Hz. PMID:25623926

  6. A compact single-camera system for high-speed, simultaneous 3-D velocity and temperature measurements.

    SciTech Connect

    Lu, Louise; Sick, Volker; Frank, Jonathan H.

    2013-09-01

    The University of Michigan and Sandia National Laboratories collaborated on the initial development of a compact single-camera approach for simultaneously measuring 3-D gasphase velocity and temperature fields at high frame rates. A compact diagnostic tool is desired to enable investigations of flows with limited optical access, such as near-wall flows in an internal combustion engine. These in-cylinder flows play a crucial role in improving engine performance. Thermographic phosphors were proposed as flow and temperature tracers to extend the capabilities of a novel, compact 3D velocimetry diagnostic to include high-speed thermometry. Ratiometric measurements were performed using two spectral bands of laser-induced phosphorescence emission from BaMg2Al10O17:Eu (BAM) phosphors in a heated air flow to determine the optimal optical configuration for accurate temperature measurements. The originally planned multi-year research project ended prematurely after the first year due to the Sandia-sponsored student leaving the research group at the University of Michigan.

  7. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumping using a low-cost high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with a HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed a perfect correlation agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the differences (shared variance) obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide similarly valid and reliable measurements of flight time and vertical jump height as more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athlete's vertical jumps. PMID:23689339
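
    The conversion from flight time to jump height used in studies like this one follows from projectile kinematics under gravity, h = g·t²/8, assuming take-off and landing occur at the same height. A minimal sketch of that conversion, and of counting flight time directly from 240 fps video frames, is given below; the frame numbers are invented for illustration and this is not the HSC-Kinovea workflow itself.

        G = 9.81  # m/s^2

        def jump_height_from_flight_time(flight_time_s: float) -> float:
            """Vertical jump height from flight time, h = g * t^2 / 8."""
            return G * flight_time_s ** 2 / 8.0

        def flight_time_from_frames(takeoff_frame: int, landing_frame: int,
                                    fps: float = 240.0) -> float:
            """Flight time counted between the take-off and landing frames in the video."""
            return (landing_frame - takeoff_frame) / fps

        # Hypothetical example: take-off at frame 1200, landing at frame 1320 (240 fps).
        t = flight_time_from_frames(1200, 1320)                   # 0.5 s
        print(f"{jump_height_from_flight_time(t) * 100:.1f} cm")  # ~30.7 cm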

  8. Initial laboratory evaluation of color video cameras: Phase 2

    SciTech Connect

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  9. Time-synchronized high-speed video images, electric fields, and currents of rocket-and-wire triggered lightning

    NASA Astrophysics Data System (ADS)

    Biagi, C. J.; Hill, J. D.; Jordan, D. M.; Uman, M. A.; Rakov, V. A.

    2009-12-01

    We present novel observations of 20 classically-triggered lightning flashes from the 2009 summer season at the International Center for Lightning Research and Testing (ICLRT) in north-central Florida. We focus on: (1) upward positive leaders (UPL), (2) current decreases and current reflections associated with the destruction of the triggering wire, and (3) dart-stepped leader propagation involving space stems or space leaders ahead of the leader tip. High-speed video data were acquired 440 m from the triggered lightning using a Phantom v7.1 operating at frame rates of up to 10 kfps (90 µs frame time) with a field of view from ground to an altitude of 325 m and a Photron SA1.1 operating at frame rates of up to 300 kfps (3.3 µs frame time) that viewed from ground to an altitude of 120 m. These data were acquired along with time-synchronized measurements of electric field (dc to 3 MHz) and channel-base current (dc to 8 MHz). The sustained UPLs developed when the rockets were between altitudes of 100 m and 200 m, and accelerated from about 10⁴ to 10⁵ m s⁻¹ from the top of the triggering wire to an altitude of 325 m. In each successive 10 kfps high-speed video image, the newly formed UPL channels were brighter than the previously established channel and the new channel segments were longer. The UPLs in two flashes were imaged at a frame rate of 300 kfps from the top of the wire to about 10 m above the wire (110 m to 120 m above ground). In these images the UPL developed in a stepped manner with luminosity waves traveling from the channel tip back toward the wire during a time of 2 to 3 frames (6.6 µs to 9.9 µs). The new channel segments were on average 1 m in length and the average interstep interval was 23 µs. During 13 of the 20 initial continuous currents, an abrupt current decrease and the beginning of the wire illumination (due to its melting) occurred simultaneously to within 1 high-speed video frame (between 3.3 µs and 10 µs). For two of the triggered

  10. Observation of diesel spray by pseudo-high-speed photography

    NASA Astrophysics Data System (ADS)

    Umezu, Seiji; Oka, Mohachiro

    2001-04-01

    Pseudo-high-speed photography has been developed to observe intermittent, periodic and high-speed phenomena such as diesel spray. The main device of this technique is an Automatic Variable Retarder (AVR), which gradually delays the timing between diesel injection and the strobe spark by means of a micrometer. This technique enables us to observe diesel spray development just as in images taken by a high-speed video camera. This paper describes the principle of pseudo-high-speed photography, experimental results of its application to diesel spray, and an analysis of the diesel atomization mechanism.
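
    A conceptual sketch of the pseudo-high-speed principle, assuming one strobed frame per injection event with the delay advanced by a fixed step each cycle; `capture_strobed_frame`, the step size and the event count are placeholders, not details from the paper.

```python
# Equivalent-time imaging sketch: reordering frames by strobe delay gives an
# effective inter-frame time equal to the delay step, even though only one frame
# is captured per injection event.
def capture_strobed_frame(delay_us: float) -> dict:
    """Placeholder for a hardware capture at a given strobe delay."""
    return {"delay_us": delay_us, "image": None}

delay_step_us = 5.0      # hypothetical AVR increment per injection event
n_events = 200           # number of successive injections imaged

frames = [capture_strobed_frame(i * delay_step_us) for i in range(n_events)]
frames.sort(key=lambda f: f["delay_us"])          # pseudo time series of the spray
effective_fps = 1e6 / delay_step_us               # 200,000 frames/s equivalent
print(f"effective frame rate: {effective_fps:.0f} fps")
```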

  11. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    PubMed Central

    Leitritz, Martin Alexander; Ziemssen, Focke; Bartz-Schmidt, Karl Ulrich; Voykov, Bogomil

    2014-01-01

    Purpose: To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods: A high-speed camera taking 300 frames per second observed the movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results: Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were −0.32 mm (range −0.69 to 0.024) and 0.175 mm (range −0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion: With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurement of iris-claw intraocular lens and angle-supported intraocular lens movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility during the kinetic stress of eye movements. Long-term assessment by high-speed analysis with higher case numbers has to clarify the relationship between progressing motility and endothelial cell damage. PMID:25071365

  12. Implicit Memory in Monkeys: Development of a Delay Eyeblink Conditioning System with Parallel Electromyographic and High-Speed Video Measurements

    PubMed Central

    Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka

    2015-01-01

    Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) incidence of more than 60% of trials during the 5-day acquisition phase and an extinguished CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system incorporating two different measuring principles enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates. PMID:26068663

  13. Implicit Memory in Monkeys: Development of a Delay Eyeblink Conditioning System with Parallel Electromyographic and High-Speed Video Measurements.

    PubMed

    Kishimoto, Yasushi; Yamamoto, Shigeyuki; Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka

    2015-01-01

    Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) incidence of more than 60% of trials during the 5-day acquisition phase and an extinguished CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system incorporating two different measuring principles enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates. PMID:26068663

  14. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  15. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  16. Fused Six-Camera Video of STS-134 Launch

    NASA Video Gallery

    Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

  17. Station Cameras Capture New Videos of Hurricane Katia

    NASA Video Gallery

    Aboard the International Space Station, external cameras captured new video of Hurricane Katia as it moved northwest across the western Atlantic north of Puerto Rico at 10:35 a.m. EDT on September ...

  18. Synchronised electrical monitoring and high speed video of bubble growth associated with individual discharges during plasma electrolytic oxidation

    NASA Astrophysics Data System (ADS)

    Troughton, S. C.; Nominé, A.; Nominé, A. V.; Henrion, G.; Clyne, T. W.

    2015-12-01

    Synchronised electrical current and high speed video information are presented from individual discharges on Al substrates during PEO processing. Exposure time was 8 μs and linear spatial resolution 9 μm. Image sequences were captured for periods of 2 s, during which the sample surface was illuminated with short duration flashes (revealing bubbles formed where the discharge reached the surface of the coating). Correlations were thus established between discharge current, light emission from the discharge channel and (externally-illuminated) dimensions of the bubble as it expanded and contracted. Bubbles reached radii of 500 μm, within periods of 100 μs, with peak growth velocity about 10 m/s. It is deduced that bubble growth occurs as a consequence of the progressive volatilisation of water (electrolyte), without substantial increases in either pressure or temperature within the bubble. Current continues to flow through the discharge as the bubble expands, and this growth (and the related increase in electrical resistance) is thought to be responsible for the current being cut off (soon after the point of maximum radius). A semi-quantitative audit is presented of the transformations between different forms of energy that take place during the lifetime of a discharge.

  19. High-speed video gait analysis reveals early and characteristic locomotor phenotypes in mouse models of neurodegenerative movement disorders.

    PubMed

    Preisig, Daniel F; Kulic, Luka; Krüger, Maik; Wirth, Fabian; McAfoose, Jordan; Späni, Claudia; Gantenbein, Pascal; Derungs, Rebecca; Nitsch, Roger M; Welt, Tobias

    2016-09-15

    Neurodegenerative diseases of the central nervous system frequently affect the locomotor system resulting in impaired movement and gait. In this study we performed a whole-body high-speed video gait analysis in three different mouse lines of neurodegenerative movement disorders to investigate the motor phenotype. Based on precise computerized motion tracking of all relevant joints and the tail, a custom-developed algorithm generated individual and comprehensive locomotor profiles consisting of 164 spatial and temporal parameters. Gait changes observed in the three models corresponded closely to the classical clinical symptoms described in these disorders: Muscle atrophy due to motor neuron loss in SOD1 G93A transgenic mice led to gait characterized by changes in hind-limb movement and positioning. In contrast, locomotion in huntingtin N171-82Q mice modeling Huntington's disease with basal ganglia damage was defined by hyperkinetic limb movements and rigidity of the trunk. Harlequin mutant mice modeling cerebellar degeneration showed gait instability and extensive changes in limb positioning. Moreover, model specific gait parameters were identified and were shown to be more sensitive than conventional motor tests. Altogether, this technique provides new opportunities to decipher underlying disease mechanisms and test novel therapeutic approaches. PMID:27233823
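
    As a hedged illustration of how tracked joint or paw positions can be turned into gait parameters (this is not the authors' 164-parameter algorithm), two simple quantities, stride length and cadence, computed from footfall events that an upstream tracking step is assumed to provide:

```python
# Illustrative gait-parameter extraction from hind-paw footfall events.
# Footfall positions and times below are hypothetical.
import numpy as np

def stride_length_and_cadence(footfall_x_m: np.ndarray, footfall_t_s: np.ndarray):
    """footfall_x_m: paw x-position at each footfall; footfall_t_s: footfall times."""
    stride_lengths = np.diff(footfall_x_m)         # distance between successive footfalls
    stride_times = np.diff(footfall_t_s)           # duration of each stride
    cadence_hz = 1.0 / np.mean(stride_times)       # strides per second
    return float(np.mean(stride_lengths)), float(cadence_hz)

print(stride_length_and_cadence(np.array([0.00, 0.07, 0.145, 0.21]),
                                np.array([0.00, 0.22, 0.45, 0.66])))
```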

  20. BEHAVIORAL INTERACTIONS OF THE BLACK IMPORTED FIRE ANT (SOLENOPSIS RICHTERI FOREL) AND ITS PARASITOID FLY (PSEUDACTEON CURVATUS BORGMEIER) AS REVEALED BY HIGH-SPEED VIDEO.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    High-speed video recordings were used to study the interactions between the phorid fly (Pseudacteon curvatus), and the black imported fire ant (Solenopsis richteri) in the field. Phorid flies are extremely fast agile fliers that can hover and fly in all directions. Wingbeat frequency recorded with...

  1. A new ex vivo beating heart model to investigate the application of heart valve performance tools with a high-speed camera.

    PubMed

    Kondruweit, Markus; Friedl, Sven; Heim, Christian; Wittenberg, Thomas; Weyand, Michael; Harig, Frank

    2014-01-01

    High-speed camera examination of heart valves is an established technique for examining heart valve prostheses. The aim of this study was to examine the possibility of transferring new tools for high-speed camera examination of heart valve behavior under near-physiological conditions to a porcine ex vivo beating heart model. After explantation of the piglet heart, the main coronary arteries were cannulated and the heart was reperfused with the previously collected donor blood. When the heart started beating in sinus rhythm again, the motion of the aortic and mitral valves was recorded using a digital high-speed camera system (recording rate 2,000 frames/sec). The image sequences of the mitral valve were analyzed, and digital kymograms were calculated at different angles for exact analysis of the different closure phases. The image sequence of the aortic valve was analyzed, and several snake (active contour) analyses were performed to track the effective orifice area over time. Both processing tools were successfully applied to examine heart valves in this ex vivo beating heart model. We were able to investigate the exact opening and closure times of the mitral valve, as well as the projected effective orifice area of the aortic valve over time. High-speed camera investigation of heart valve behavior in an ex vivo beating heart model is feasible and also worthwhile because processing features such as kymography allow exact analysis. These analytical techniques might help to optimize reconstructive surgery of the mitral valve and the development of heart valve prostheses in the future. PMID:24270227
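
    A minimal sketch of how a digital kymogram can be built from an image sequence, assuming the frames are available as an array and that a single image row crossing the valve is a reasonable sampling line; the row index and array sizes are arbitrary.

```python
# Digital kymogram: one line of pixels is taken from every frame and stacked
# over time, so valve opening and closure appear as a space-time pattern.
import numpy as np

def kymogram(frames: np.ndarray, row: int) -> np.ndarray:
    """frames: array of shape (n_frames, height, width); returns (n_frames, width)."""
    return frames[:, row, :]

# Synthetic demo sequence: 200 frames of 128 x 128 pixels.
frames = np.random.randint(0, 256, (200, 128, 128), dtype=np.uint8)
kymo = kymogram(frames, row=64)    # each row of `kymo` is one instant in time
print(kymo.shape)                  # (200, 128)
```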

  2. A comparison of DIC and grid measurements for processing spalling tests with the VFM and an 80-kpixel ultra-high speed camera

    NASA Astrophysics Data System (ADS)

    Saletti, D.; Forquin, P.

    2016-05-01

    During the last decades, the spalling technique has been used more and more to characterize the tensile strength of geomaterials at high strain rates. In 2012, a new processing technique was proposed by Pierron and Forquin [1] to measure the stress level and apparent Young's modulus in a concrete sample by means of an ultra-high-speed camera, a grid bonded onto the sample and the Virtual Fields Method. However, the possible benefit of using the DIC (Digital Image Correlation) technique instead of the grid method has not been investigated. In the present work, spalling experiments were performed on two aluminum alloy samples with an HPV1 (Shimadzu) ultra-high-speed camera providing a 1 Mfps maximum recording frequency and about 80 kpixel spatial resolution. A grid with a 1 mm pitch was bonded onto the first sample, whereas a speckle pattern covered the second sample for DIC measurements. Both methods were evaluated in terms of displacement and acceleration measurements by comparing the experimental data to laser interferometer measurements. In addition, the stress and strain levels in a given cross-section were compared to the experimental data provided by a strain gage glued on each sample. The measurements allow a discussion of the benefit of each technique (grid and DIC) for obtaining the stress-strain relationship when using an 80-kpixel ultra-high speed camera.
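
    As a hedged sketch of the stress reconstruction idea used with the VFM in spalling tests (after Pierron and Forquin [1]): the mean axial stress at a cross-section can be estimated from the density, the distance to the free end, and the spatially averaged acceleration of the material between that section and the free end, the acceleration field coming from double temporal differentiation of the full-field (grid or DIC) displacements. Variable names and the sign convention below are generic, not taken from the paper.

```python
# Hedged sketch: mean axial stress at section x0 from rigid-body dynamics of the
# slice between x0 and the free end (assumed here at x = L),
#   sigma(x0, t) ~ rho * (L - x0) * <a(x, t)>_{x in [x0, L]}.
import numpy as np

def mean_stress_at_section(accel_field: np.ndarray, x: np.ndarray, x0: float,
                           rho: float, L: float) -> float:
    """accel_field: axial acceleration a(x) at one instant; x: axial coordinates (m)."""
    mask = x >= x0
    a_mean = np.mean(accel_field[mask])          # spatial average between x0 and free end
    return rho * (L - x0) * a_mean               # sign depends on the axis orientation chosen

# Hypothetical example: 120 mm long aluminum sample, uniform 1e5 m/s^2 deceleration.
x = np.linspace(0.0, 0.12, 100)
print(mean_stress_at_section(np.full_like(x, 1e5), x, x0=0.06, rho=2700.0, L=0.12))
```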

  3. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  4. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  5. High-speed video of competing and cut-off leaders prior to "upward illumination-type" lightning ground strokes

    NASA Astrophysics Data System (ADS)

    Stolzenburg, Maribeth; Marshall, Thomas; Karunarathne, Sumedhe; Karunarathna, Nadeeka; Warner, Tom; Orville, Richard

    2013-04-01

    This study presents evidence to test a hypothesis regarding the physical mechanism resulting in very weak "upward illumination" (UI) type ground strokes occurring within a few milliseconds after a normal return stroke (RS) of a negative lightning flash. As described in previous work [Stolzenburg et al., JGR D15203, 2012], these short duration (< 1 ms) strokes form a new ground connection, without apparent connection to the main RS, over their relatively short (< 3 km) visible upward return path. From a dataset of 170 video flashes acquired in 2011 (captured at 50000 frames per second), we find 20 good UI examples in 18 flashes at 2.5-32.3 km distance from the camera. Average separation values are 1.26 ms and 1.9 km between the ground connections of the UI and main RS. Based on electric field change data for the flashes, the estimated peak current of the UI strokes averages -5.0 kA, about one-third the average value for the preceding RS. In 15 cases the video data show a distinct stepped leader to the UI which develops concurrently with the stepped leader to the main RS. Estimated altitude of the UI leader tip just before the main RS occurs ranges from 0 to 610 m, and in 7 cases steps are visible in the UI leader after the main RS. In most of the examples the RS and UI appear as separate channels for their entire visible portion, but in 5 cases there is a junction indicating the UI leader is a cut-off branch from the main leader. A generalized schematic of the seven main luminosity stages in a typical UI, along with video examples showing each of these stages and electric field change data, will be presented.

  6. Video Cameras in the Ondrejov Flare Spectrograph: Results and Prospects

    NASA Astrophysics Data System (ADS)

    Kotrc, P.

    Since 1991, video cameras have been widely used both in the image and in the spectral data acquisition of the Ondrejov Multichannel Flare Spectrograph. In addition to classical photographic data registration, this kind of detector brought new possibilities, especially for observations of dynamic solar phenomena, and placed new requirements on digitization, archiving and data processing techniques. The unique complex video system consisting of four video cameras and auxiliary equipment was largely developed, implemented and used at the Ondrejov observatory. The main advantages and limitations of the system are briefly described from the points of view of its scientific philosophy, intents and outputs. Some obtained results, experience and future prospects are discussed.

  7. Experimental Comparison of the High-Speed Imaging Performance of an EM-CCD and sCMOS Camera in a Dynamic Live-Cell Imaging Test Case

    PubMed Central

    Beier, Hope T.; Ibey, Bennett L.

    2014-01-01

    The study of living cells may require advanced imaging techniques to track weak and rapidly changing signals. Fundamental to this need is the recent advancement in camera technology. Two camera types, specifically sCMOS and EM-CCD, promise both high signal-to-noise and high speed (>100 fps), leaving researchers with a critical decision when determining the best technology for their application. In this article, we compare two cameras using a live-cell imaging test case in which small changes in cellular fluorescence must be rapidly detected with high spatial resolution. The EM-CCD maintained an advantage of being able to acquire discernible images with a lower number of photons due to its EM-enhancement. However, if high-resolution images at speeds approaching or exceeding 1000 fps are desired, the flexibility of the full-frame imaging capabilities of sCMOS is superior. PMID:24404178

  8. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    PubMed

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique of calibrating camera motions in basketball videos. Our method particularly transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique. PMID:27504515
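
    A hedged sketch of the position-mapping step such a system relies on once calibration is done: transforming tracked player pixels to standard court coordinates with a homography. The matrix and point coordinates below are hypothetical.

```python
# Mapping tracked player positions from image pixels to court coordinates with a
# frame-to-court homography H (values are illustrative only).
import numpy as np
import cv2

H = np.array([[1.2, 0.1, -50.0],
              [0.0, 1.4, -20.0],
              [0.0, 0.001, 1.0]], dtype=np.float64)     # frame -> court homography

player_px = np.array([[[640.0, 360.0]], [[900.0, 500.0]]], dtype=np.float32)  # image pixels
player_court = cv2.perspectiveTransform(player_px, H)    # standard court coordinates
print(player_court.reshape(-1, 2))
```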

  9. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically up to about 1 million frames s⁻¹ for a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve these performances; an interesting one is the so-called in situ storage image sensor architecture where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain movable devices as occur, for instance, in the rotating mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction) since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such kinds of cameras in full-field deformation measurement and identify the best operative conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique here. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.

  10. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    NASA Astrophysics Data System (ADS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to the personal computer.

  11. Study of fiber-tip damage mechanism during Ho:YAG laser lithotripsy by high-speed camera and the Schlieren method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Getzan, Grant; Xuan, Jason R.; Yu, Honggang

    2015-02-01

    Fiber-tip degradation, damage, or burn back is a common problem during the ureteroscopic laser lithotripsy procedure to treat urolithiasis. Fiber-tip burn back results in reduced transmission of laser energy, which greatly reduces the efficiency of stone comminution. In some cases, the fiber-tip degradation is so severe that the damaged fiber-tip will absorb most of the laser energy, which can cause the tip portion to be overheated and melt the cladding or jacket layers of the fiber. Though it is known that the higher the energy density (which is the ratio of the laser energy fluence over the cross section area of the fiber core), the faster the fiber-tip degradation, the damage mechanism of the fiber-tip is still unclear. In this study, fiber-tip degradation was investigated by visualization of shockwave, cavitation/bubble dynamics, and calculus debris ejection with a high-speed camera and the Schlieren method. A commercialized, pulsed Ho:YAG laser at 2.12 µm, 273/365/550-µm core fibers, and calculus phantoms (Plaster of Paris, 10x10x10 mm cubes) were utilized to mimic the laser lithotripsy procedure. Laser energy induced shockwave, cavitation/bubble dynamics, and stone debris ejection were recorded by a high-speed camera with a frame rate of 10,000 to 930,000 fps. The results suggested that using a high-speed camera and the Schlieren method to visualize the shockwave provided valuable information about time-dependent acoustic energy propagation and its interaction with cavitation and calculus. Detailed investigation on acoustic energy beam shaping by fiber-tip modification and interaction between shockwave, cavitation/bubble dynamics, and calculus debris ejection will be conducted as a future study.

  12. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.

  13. Synchronizing Light Pulses With Video Camera

    NASA Technical Reports Server (NTRS)

    Kalshoven, James E., Jr.; Tierney, Michael; Dabney, Philip

    1993-01-01

    Interface circuit triggers laser or other external source of light to flash in proper frame and field (at proper time) for video recording and playback in "pause" mode. Also increases speed of electronic shutter (if any) during affected frame to reduce visibility of background illumination relative to that of laser illumination.

  14. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  15. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  16. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The solutions range from space shuttle docking, planetary entry, descent and landing, surveillance, autonomous and manned ground vehicle navigation and 3D imaging through particle obscurants.

  17. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  18. CameraCast: flexible access to remote video sensors

    NASA Astrophysics Data System (ADS)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  19. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
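
    The frame-subtraction operation described above can be sketched in a few lines (a software reference only, not the flight hardware implementation); frame contents and the change threshold are synthetic.

```python
# Absolute difference of two consecutive 256 x 256 frames, thresholded to flag
# changed pixels. This is the operation the project moved into hardware.
import numpy as np

frame_prev = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
frame_curr = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
changed = diff > 20                      # hypothetical change threshold
print(int(changed.sum()), "pixels changed")
```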

  20. Relationship between structures of sprite streamers and inhomogeneity of preceding halos captured by high-speed camera during a combined aircraft and ground-based campaign

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sato, M.; Kudo, T.; Shima, Y.; Kobayashi, N.; Inoue, T.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Yair, Y.; Lyons, W. A.; Cummer, S. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The relationship between diffuse glows such as elves and sprite halos and the subsequent discrete structure of sprite streamers is considered to be one of the keys to explaining the large variation in sprite structures. However, it is not easy to image both the diffuse and discrete structures simultaneously at high frame rates, since this requires high sensitivity, high spatial resolution and a high signal-to-noise ratio. To capture the true spatial structure of TLEs without the influence of atmospheric absorption, spacecraft would be the best solution. However, since imaging observations from space are mostly made for TLEs appearing near the horizon, the range from the spacecraft to the TLEs becomes large, on the order of a few thousand km, resulting in low spatial resolution. Aircraft can approach a thunderstorm to within a few hundred km or less and can carry heavy high-speed cameras with very large data memories. In the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of the NHK Cosmic Shore project, was carried out with two jet airplanes in collaboration between NHK (Japan Broadcasting Corporation) and universities. On 8 of 16 standby nights, the jets took off from the airport near Denver, Colorado, and an airborne high-speed camera captured over 40 TLE events at a frame rate of 8300 frames/sec. Here we present the time development of sprite streamers and both the large-scale and fine structures of the preceding halos showing inhomogeneity, suggesting a mechanism that causes the large variation of sprite types, such as crown-like sprites.

  1. High-Speed Video for Investigating Splash Erosion Behaviour: Obtaining Initial Velocity and Angle of Ejections by Tracking Trajectories.

    NASA Astrophysics Data System (ADS)

    Ahn, S.; Doerr, S.; Douglas, P.; Bryant, R.; Hamlett, C.; McHale, G.; Newton, M.; Shirtcliffe, N.

    2012-04-01

    The use of high-speed videography has been shown to be very useful in some splash erosion studies. One methodological problem that arises in its application is the difficulty of tracking a large number of particles in slow motion, especially when the use of automatic tracking software is limited. Because of this problem, some studies simply assume a fixed ejection angle for all particles rather than actually tracking every particle. In this contribution, different combinations of variables (e.g. landing position, landing time or departing position) were compared in order to determine an efficient and sufficiently precise method for trajectory tracking when a large number of particles are being ejected.
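
    Where departure and landing positions and times are available for a tracked particle, the initial velocity and ejection angle follow from simple projectile kinematics. A worked sketch with hypothetical values:

```python
# Initial speed and ejection angle from a tracked trajectory, assuming simple
# projectile motion between departure (x0, y0, t0) and landing (x1, y1, t1).
import math

G = 9.81  # m/s^2

def initial_velocity_and_angle(x0, y0, t0, x1, y1, t1):
    dt = t1 - t0
    vx0 = (x1 - x0) / dt
    # y1 = y0 + vy0*dt - 0.5*G*dt^2  ->  solve for vy0
    vy0 = ((y1 - y0) + 0.5 * G * dt ** 2) / dt
    speed = math.hypot(vx0, vy0)
    angle_deg = math.degrees(math.atan2(vy0, vx0))
    return speed, angle_deg

# Hypothetical droplet landing 12 cm away at the same height, 0.25 s later.
print(initial_velocity_and_angle(0.0, 0.0, 0.0, 0.12, 0.0, 0.25))
```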

  2. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An External Tank (ET) Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40% field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite orbiter side, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  3. Fast roadway detection using car cabin video camera

    NASA Astrophysics Data System (ADS)

    Krokhina, Daria; Blinov, Veniamin; Gladilin, Sergey; Tarhanov, Ivan; Postnikov, Vassili

    2015-12-01

    We describe a fast method for road detection in images from a vehicle cabin camera. A straight section of roadway is detected using the Fast Hough Transform and dynamic programming. We assume that the location of the horizon line in the image and the road pattern are known. The developed method is fast enough to detect the roadway in each frame of the video stream in real time and may be further accelerated by the use of tracking.
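
    As an illustrative sketch only (not the authors' Fast Hough Transform implementation), straight-line detection on a synthetic frame using OpenCV's probabilistic Hough transform:

```python
# Detect straight road-boundary candidates with Canny edges + Hough transform.
# The frame is synthetic; thresholds are placeholders.
import numpy as np
import cv2

frame = np.zeros((480, 640), dtype=np.uint8)
cv2.line(frame, (100, 479), (320, 240), 255, 3)      # synthetic left boundary
cv2.line(frame, (540, 479), (330, 240), 255, 3)      # synthetic right boundary

edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40, minLineLength=60, maxLineGap=10)
print(0 if lines is None else len(lines), "line segments found")
```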

  4. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
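
    A hedged sketch of the calibration idea: once correspondences between court-model points (e.g., line intersections) and image points are established, a homography maps between image pixels and real-world court coordinates. The tennis-court dimensions are standard; the matched pixel coordinates are hypothetical.

```python
# Homography from four court-model / image correspondences, then mapping an
# image point to court coordinates (metres).
import numpy as np
import cv2

court_pts = np.array([[0, 0], [23.77, 0], [23.77, 10.97], [0, 10.97]],
                     dtype=np.float32)                                  # court corners (m)
image_pts = np.array([[120, 400], [880, 390], [760, 140], [230, 150]],
                     dtype=np.float32)                                  # matched pixels

H, _ = cv2.findHomography(image_pts, court_pts, method=0)   # image -> court mapping
ball_px = np.array([[[500.0, 300.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(ball_px, H).reshape(-1))      # position on the court (m)
```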

  5. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
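
    A toy sketch (with made-up feature values, not the paper's dataset) of the final classification step: a decision tree over compressed-domain features such as replay count, scene-text fraction and camera-motion statistics.

```python
# Decision-tree classification of clips from a small hand-crafted feature matrix.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: [replay_shots, text_fraction, mean_motion_magnitude]
X = np.array([[4, 0.08, 9.5], [3, 0.06, 8.1], [0, 0.01, 1.2],
              [1, 0.02, 2.0], [5, 0.10, 7.7], [0, 0.00, 0.8]])
y = np.array([1, 1, 0, 0, 1, 0])        # 1 = sports clip, 0 = non-sports

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[2, 0.05, 6.0]]))    # classify a new clip's feature vector
```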

  6. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

    Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BVRI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.
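
    A generic sketch of one common photometric-calibration step (not necessarily the MEO procedure): fitting a zero point and color term that map instrumental magnitudes of reference stars onto catalog magnitudes. All star values are synthetic.

```python
# Solve m_cat = m_inst + ZP + c * (B-V) for the zero point and color term by
# linear least squares over a set of reference stars.
import numpy as np

m_inst = np.array([-7.2, -6.5, -8.1, -5.9, -7.7])     # instrumental magnitudes
color_bv = np.array([0.4, 0.9, 0.1, 1.2, 0.6])        # catalog B-V colors
m_cat = np.array([6.1, 6.9, 5.3, 7.4, 5.7])           # catalog magnitudes

A = np.column_stack([np.ones_like(m_inst), color_bv])
(zp, c), *_ = np.linalg.lstsq(A, m_cat - m_inst, rcond=None)
print(f"zero point = {zp:.2f}, color term = {c:.2f}")
```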

  7. Determination of pulsed-source cloud size/rise information using high-speed, low-speed, and digitized-video photography techniques

    NASA Astrophysics Data System (ADS)

    Magiawala, Kiran R.; Schatzle, Paul R.; Petach, Michael B.; Figueroa, Miguel A.; Peabody, Alden S., II

    1993-01-01

    This paper discusses a laboratory method based on generating a buoyant thermal cloud by explosively bursting an aluminum foil with a rapid electric discharge. The required electric energy is stored in a bank of capacitors and is discharged into the foil through a trigger circuit on external command. The aluminum first vaporizes and becomes an aluminum gas plasma at high temperature (approximately 8000 K), which then mixes with the surrounding air and ignites. The cloud containing these hot combustion products rises in an unstratified anechoic environment. As the cloud rises, it entrains air from the surroundings through turbulent mixing and grows. To characterize this cloud rise, three different photographic techniques are used: high-speed photography (6000 fps), low-speed photography (200 fps), and video photography (30 fps). These techniques cover the various time scales in the foil firing schedule, from early time (up to 10 msec) to late time (up to 4 sec). Images obtained by the video photography technique have been processed into a digital format. In digitizing the video tape data, an optical video disk player/recorder was used together with PC-based frame-grabber hardware. A simple software routine was developed to obtain cloud size/rise data based on an edge detection technique.
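
    A minimal sketch of the edge-detection measurement on a digitized frame, assuming the cloud is the bright region in view; the synthetic frame and threshold values are placeholders, not the original routine.

```python
# Cloud size/rise from one frame: Canny edges, then the bounding box of edge
# pixels gives the cloud's width and the row of its top.
import numpy as np
import cv2

frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (320, 200), 60, 180, -1)            # stand-in for the bright cloud

edges = cv2.Canny(frame, 50, 150)
ys, xs = np.nonzero(edges)
width_px = xs.max() - xs.min()
top_row = ys.min()                                     # smaller row index = higher in frame
print(f"cloud width: {width_px} px, cloud top at row {top_row}")
```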

  8. Comparison of high speed imaging technique to laser vibrometry for detection of vibration information from objects

    NASA Astrophysics Data System (ADS)

    Paunescu, Gabriela; Lutzmann, Peter; Göhler, Benjamin; Wegner, Daniel

    2015-10-01

    The development of camera technology in recent years has made high speed imaging a reliable method in vibration and dynamic measurements. The passive recovery of vibration information from high speed video recordings was reported in several recent papers. A highly developed technique, involving decomposition of the input video into spatial subframes to compute local motion signals, allowed an accurate sound reconstruction. A simpler technique based on image matching for vibration measurement was also reported as efficient in extracting audio information from a silent high speed video. In this paper we investigate and discuss the sensitivity and the limitations of the high speed imaging technique for vibration detection in comparison to the well-established Doppler vibrometry technique. Experiments on the extension of the high speed imaging method to longer range applications are presented.
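
    A hedged sketch of the image-matching approach mentioned above: template matching gives a per-frame displacement signal whose spectrum carries the vibration information. The frames, template and sampling rate are assumed inputs from a high-speed recording.

```python
# Per-frame displacement via template matching, followed by an amplitude spectrum.
import numpy as np
import cv2

def displacement_signal(frames: np.ndarray, template: np.ndarray) -> np.ndarray:
    """frames: (n, H, W) uint8; returns x-displacement of the best match per frame."""
    xs = []
    for f in frames:
        res = cv2.matchTemplate(f, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        xs.append(max_loc[0])
    xs = np.asarray(xs, dtype=float)
    return xs - xs[0]

def spectrum(x: np.ndarray, fs: float):
    """Amplitude spectrum of the (mean-removed) displacement signal."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(x - x.mean()))
```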

  9. A low-bandwidth graphical user interface for high-speed triage of potential items of interest in video imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Martin, Kevin; Chen, Yang

    2013-06-01

    In this paper, we introduce a user interface called the "Threat Chip Display" (TCD) for rapid human-in-the-loop analysis and detection of "threats" in high-bandwidth imagery and video from a list of "Items of Interest" (IOI), which includes objects, targets and events that the human is interested in detecting and identifying. Typically some front-end algorithm (e.g., computer vision, cognitive algorithm, EEG RSVP based detection, radar detection) has been applied to the video and has pre-processed and identified a potential list of IOI. The goal of the TCD is to facilitate rapid analysis and triaging of this list of IOI to detect and confirm actual threats. The layout of the TCD is designed for ease of use, fast triage of IOI, and a low bandwidth requirement. Additionally, a very low mental demand allows the system to be run for extended periods of time.

  10. A multiscale product approach for an automatic classification of voice disorders from endoscopic high-speed videos.

    PubMed

    Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Joerg

    2013-01-01

    Direct observation of vocal fold vibration is indispensable for a clinical diagnosis of voice disorders. Among current imaging techniques, high-speed videoendoscopy constitutes a state-of-the-art method capturing several thousand frames per second of the vocal folds during phonation. Recently, a method for extracting descriptive features from phonovibrograms, two-dimensional images containing the spatio-temporal pattern of vocal fold dynamics, was presented. The derived features are closely related to a clinically established protocol for functional assessment of pathologic voices. The discriminative power of these features for different pathologic findings and configurations has not been assessed yet. In the current study, a collective of 220 subjects is considered for two- and multi-class problems of healthy and pathologic findings. The performance of the proposed feature set is compared to conventional feature reduction routines and was found to clearly outperform these. As such, the proposed procedure shows great potential for the diagnostic assessment of vocal fold disorders. PMID:24111445

  11. A new paradigm for video cameras: optical sensors

    NASA Astrophysics Data System (ADS)

    Grottle, Kevin; Nathan, Anoo; Smith, Catherine

    2007-04-01

    This paper presents a new paradigm for the utilization of video surveillance cameras as optical sensors to augment and significantly improve the reliability and responsiveness of chemical monitoring systems. Incorporated into a hierarchical tiered sensing architecture, cameras serve as 'Tier 1' or 'trigger' sensors monitoring for visible indications after a release of warfare or industrial toxic chemical agents. No single sensor today yet detects the full range of these agents, but the result of exposure is harmful and yields visible 'duress' behaviors. Duress behaviors range from simple to complex types of observable signatures. By incorporating optical sensors in a tiered sensing architecture, the resulting alarm signals based on these behavioral signatures increases the range of detectable toxic chemical agent releases and allows timely confirmation of an agent release. Given the rapid onset of duress type symptoms, an optical sensor can detect the presence of a release almost immediately. This provides cues for a monitoring system to send air samples to a higher-tiered chemical sensor, quickly launch protective mitigation steps, and notify an operator to inspect the area using the camera's video signal well before the chemical agent can disperse widely throughout a building.

  12. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  13. Automatic radial distortion correction in zoom lens video camera

    NASA Astrophysics Data System (ADS)

    Kim, Daehyun; Shin, Hyoungchul; Oh, Juhyun; Sohn, Kwanghoon

    2010-10-01

    We present a novel method for automatically correcting the radial lens distortion in a zoom lens video camera system. We first define the zoom lens distortion model using an inherent characteristic of the zoom lens. Next, we sample some video frames with different focal lengths and estimate their radial distortion parameters and focal lengths. We then optimize the zoom lens distortion model with pre-estimated parameter pairs using the least-squares method. For more robust optimization, we divide the sample images into two groups according to distortion type (i.e., barrel and pincushion) and then separately optimize the zoom lens distortion models with respect to the two groups. Our results show that the zoom lens distortion model can accurately represent the radial distortion of a zoom lens.
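
    A hedged sketch of the radial model referred to above, x_d = x_u(1 + k1·r² + k2·r⁴) in normalized coordinates, together with a simple fixed-point inversion; the coefficient values are hypothetical, not taken from the paper.

```python
# Apply a polynomial radial-distortion model and invert it by fixed-point iteration.
import numpy as np

def distort(xy_u: np.ndarray, k1: float, k2: float) -> np.ndarray:
    r2 = np.sum(xy_u ** 2, axis=-1, keepdims=True)
    return xy_u * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d: np.ndarray, k1: float, k2: float, iters: int = 10) -> np.ndarray:
    xy_u = xy_d.copy()
    for _ in range(iters):                      # fixed-point refinement
        r2 = np.sum(xy_u ** 2, axis=-1, keepdims=True)
        xy_u = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy_u

pt = np.array([[0.30, -0.20]])
print(undistort(distort(pt, k1=-0.18, k2=0.02), k1=-0.18, k2=0.02))   # ~ original point
```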

  14. Non-mydriatic, wide field, fundus video camera

    NASA Astrophysics Data System (ADS)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2mm from a circular field with 20° in diameter to a square field with 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  15. Scientists Behind the Camera - Increasing Video Documentation in the Field

    NASA Astrophysics Data System (ADS)

    Thomson, S.; Wolfe, J.

    2013-12-01

    Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflicting demands of the time, space, storage needs of scientists in the field and the demands of shooting high quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14 page guide for scientists shooting in the field based on lessons learned - it contains key tips and best practice techniques for shooting high quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  16. High Speed Video Data Acquisition System (VDAS) for H. E. P. , including Reference Frame Subtractor, Data Compactor and 16 megabyte FIFO

    SciTech Connect

    Knickerbocker, K.L.; Baumbaugh, A.E.; Ruchti, R.; Baumbaugh, B.W.

    1987-02-01

    A Video-Data-Acquisition-System (VDAS) has been developed to record image data from a scintillating glass fiber-optic target developed for High Energy Physics. VDAS consists of a combination flash ADC, reference frame subtractor, high speed data compactor, an N megabyte First-In-First-Out (FIFO) memory (where N is a multiple of 4), and a single board computer as a control processor. System data rates are in excess of 30 megabytes/second. The reference frame subtractor, in conjunction with the data compactor, records only the differences from a standard frame. This greatly reduces the amount of data needed to record an image. Typical image sizes are reduced by as much as a factor of 20. With the exception of the ECL ADC board, the system uses standard TTL components to minimize power consumption and cost. VDAS operation as well as enhancements to the original system are discussed.
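
    The reference-frame-subtraction and compaction idea can be illustrated in a few lines of software. The sketch below (with made-up names and a made-up threshold, and no relation to the actual VDAS hardware) keeps only pixels that differ from a reference frame, which is why typical image sizes shrink by a large factor.

```python
# Illustrative sketch, not the VDAS hardware logic: subtract a reference
# frame and keep only pixels that differ by more than a threshold.
import numpy as np

def compact_frame(frame, reference, threshold=8):
    """Return (indices, values) of pixels that differ from the reference."""
    diff = frame.astype(np.int16) - reference.astype(np.int16)
    indices = np.flatnonzero(np.abs(diff) > threshold)
    return indices, frame.ravel()[indices]

def reconstruct_frame(indices, values, reference):
    """Rebuild a full frame from the compacted difference data."""
    out = reference.copy().ravel()
    out[indices] = values
    return out.reshape(reference.shape)
```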

  17. Visible light communication in dynamic environment using image/high-speed communication hybrid sensor

    NASA Astrophysics Data System (ADS)

    Maeno, Keita; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Okada, Hiraku; Yamazato, Takaya; Tanimoto, Masayuki; Yendo, Tomohiro

    2012-01-01

    Visible Light Communication (VLC) is a wireless communication method that uses LEDs. LEDs can respond at high speed, and VLC exploits this characteristic. In VLC research there are mainly two types of receiver: the photodiode receiver and the high-speed camera. A photodiode receiver can communicate at high speed and achieves a high transmission rate because of its fast response. A high-speed camera can detect and track the transmitter easily because it is not necessary to move the camera. In this paper, we use a hybrid sensor designed for VLC that combines the advantages of both the photodiode and the high-speed camera, that is, a high transmission rate and easy detection of the transmitter. The light-receiving section of the hybrid sensor consists of communication pixels and video pixels, which realizes these advantages. In previous research, this hybrid sensor was shown to communicate in a static environment. In a dynamic environment, however, high-speed tracking of the transmitter is essential for communication. We therefore realize high-speed tracking of the transmitter by using the information from the communication pixels, as sketched below. Experimental results show the possibility of communication in a dynamic environment.
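
    A hypothetical sketch of the tracking step mentioned above: the activity of the communication pixels is used to locate and follow the transmitter from frame to frame. The centroid-in-a-search-window estimator and all names are assumptions, not the authors' design.

```python
# Hypothetical transmitter tracking from a communication-pixel activity map.
import numpy as np

def track_transmitter(activity_map, prev_xy=None, search_radius=20):
    """Return the (x, y) centroid of communication activity, optionally
    restricted to a window around the previous estimate."""
    act = np.asarray(activity_map, dtype=float)
    if prev_xy is not None:
        x0, y0 = prev_xy
        ys, xs = np.ogrid[:act.shape[0], :act.shape[1]]
        window = (xs - x0) ** 2 + (ys - y0) ** 2 <= search_radius ** 2
        act = act * window
    total = act.sum()
    if total == 0:
        return prev_xy                      # lost: keep last known position
    ys, xs = np.indices(act.shape)
    return float((xs * act).sum() / total), float((ys * act).sum() / total)
```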

  18. High Speed Telescopic Imaging of Sprites

    NASA Astrophysics Data System (ADS)

    McHarg, M. G.; Stenbaek-Nielsen, H. C.; Kanmae, T.; Haaland, R. K.

    2010-12-01

    A total of 21 sprite events were recorded at Langmuir Laboratory, New Mexico, during the nights of 14 and 15 July 2010 with a 500 mm focal length Takahashi Sky 90 telescope. The camera used was a Phantom 7.3 with a VideoScope image intensifier. The images were 512x256 pixels for a field of view of 1.3x0.6 degrees. The data were recorded at 16,000 frames per second (62 μs between images) and an integration time of 20 μs per image. Co-aligned with the telescope was a second similar high-speed camera, but with an 85 mm Nikon lens; this camera recorded at 10,000 frames per second with 100 μs exposure. The image format was also 512x256 pixels for a field of view of 7.3x3.7 degrees. The 21 events recorded include all basic sprite elements: Elve, sprite halos, C-sprites, carrot sprites, and large jellyfish sprites. We compare and contrast the spatial details seen in the different types of sprites, including streamer head size and the number of streamers subsequent to streamer head splitting. Telescopic high speed image of streamer tip splitting in sprites recorded at 07:06:09 UT on 15 July 2010.

  19. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs of the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
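
    A minimal sketch of the target-extraction step, under assumed parameter names: the grabbed frame is binarized, candidate targets are labeled, and their pixel centroids are returned. Mapping pixel coordinates to workspace (x, y, z) requires the camera calibration and is not shown.

```python
# Illustrative target extraction for a camera-based position measurement.
import numpy as np
from scipy import ndimage

def extract_target_centroids(gray_frame, threshold=200, min_area=10):
    """Return a list of (row, col) centroids of bright targets in the frame."""
    binary = gray_frame > threshold
    labels, n = ndimage.label(binary)
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if np.count_nonzero(blob) >= min_area:   # reject noise specks
            centroids.append(ndimage.center_of_mass(blob))
    return centroids
```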

  20. Deep-Sea Video Cameras Without Pressure Housings

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2004-01-01

    Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (~0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. If

  1. High speed photography, videography, and photonics IV; Proceedings of the Meeting, San Diego, CA, Aug. 19, 20, 1986

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor)

    1986-01-01

    Various papers on high-speed photography, videography, and photonics are presented. The general topics addressed include: photooptical and video instrumentation, streak camera data acquisition systems, photooptical instrumentation in wind tunnels, applications of holography and interferometry in wind tunnel research programs, and data analysis for photooptical and video instrumentation.

  2. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  3. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  4. Reliable camera motion estimation from compressed MPEG videos using machine learning approach

    NASA Astrophysics Data System (ADS)

    Wang, Zheng; Ren, Jinchang; Wang, Yubin; Sun, Meijun; Jiang, Jianmin

    2013-05-01

    As an important feature in characterizing video content, camera motion has been widely applied in various multimedia and computer vision applications. A novel method for fast and reliable estimation of camera motion from MPEG videos is proposed, using support vector machine for estimation in a regression model trained on a synthesized sequence. Experiments conducted on real sequences show that the proposed method yields much improved results in estimating camera motions while the difficulty in selecting valid macroblocks and motion vectors is skipped.
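
    A hedged sketch of the regression idea using scikit-learn: a support vector regressor is trained on simple summary features of the macroblock motion-vector field from a synthesized sequence with known camera motion. The feature set and parameter values are illustrative assumptions, not the paper's configuration.

```python
# Support-vector regression of a camera-motion parameter from motion vectors.
import numpy as np
from sklearn.svm import SVR

def motion_vector_features(mv_x, mv_y):
    """Summarize a frame's motion-vector field as a small feature vector."""
    return np.array([mv_x.mean(), mv_y.mean(), mv_x.std(), mv_y.std()])

# Toy training data standing in for a synthesized sequence with known motion:
# X holds one feature row per frame, y the known pan rate for that frame.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
estimated_pan = model.predict(X[:5])
```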

  5. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  6. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    NASA Technical Reports Server (NTRS)

    Gradl, Paul

    2016-01-01

    Paired images were collected using a projected pattern instead of the standard painted speckle pattern on the subject's abdomen. The high-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh rate of the projector. To ensure that the kick and movement data were real, a background test was conducted with no baby movement (to correct for breathing and body motion).
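
    A generic digital image correlation (DIC) sketch, not the set-up used in this test: a small subset from the reference image is tracked into the deformed image by maximizing zero-normalized cross-correlation over integer shifts. Real DIC codes add sub-pixel interpolation and shape functions.

```python
# Generic DIC subset tracking by zero-normalized cross-correlation (ZNCC).
import numpy as np

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_subset(ref, cur, top_left, size=21, search=10):
    """Return the integer (dy, dx) displacement of a size x size subset.
    Assumes the subset and search window stay away from the image borders."""
    r0, c0 = top_left
    subset = ref[r0:r0 + size, c0:c0 + size]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[r0 + dy:r0 + dy + size, c0 + dx:c0 + dx + size]
            if cand.shape != subset.shape:
                continue
            score = zncc(subset, cand)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best
```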

  7. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  8. Real-time air quality monitoring by using internet video surveillance camera

    NASA Astrophysics Data System (ADS)

    Wong, C. J.; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Low, K. L.

    2007-04-01

    Internet video surveillance cameras are now widely used in security monitoring, and the number of installed cameras continues to grow. This paper reports that internet video surveillance cameras can be applied as remote sensors for monitoring the concentration of particulate matter smaller than 10 microns (PM10), so that real-time air quality can be monitored at multiple locations simultaneously. An algorithm was developed based on regression analysis of the relationship between the measured reflectance components from a surface material and the atmosphere. This algorithm converts multispectral image pixel values acquired from these cameras into quantitative values of PM10 concentration. The computed PM10 values were compared to reference values measured by a DustTrak meter. The correlation results showed that the newly developed algorithm produced a high degree of accuracy, as indicated by high correlation coefficient (R2) and low root-mean-square error (RMS) values. The preliminary results showed that the accuracy produced by the internet video surveillance camera is slightly better than that from the internet protocol (IP) camera, largely because the images acquired by the IP camera had been compressed and therefore had poorer spatial resolution, whereas the images from the internet video surveillance camera were not compressed.
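
    A hedged sketch of the calibration idea: PM10 readings from the reference instrument are regressed against per-band image values from the camera, and the fitted model is then applied to new frames. The linear form and the names are assumptions for illustration.

```python
# Illustrative linear calibration of PM10 against camera band values.
import numpy as np

def fit_pm10_model(band_means, pm10_reference):
    """Least-squares fit PM10 ~ a0 + a1*R + a2*G + a3*B.
    band_means: (n, 3) array of mean R, G, B per calibration image."""
    X = np.column_stack([np.ones(len(band_means)), band_means])
    coeffs, *_ = np.linalg.lstsq(X, pm10_reference, rcond=None)
    return coeffs

def predict_pm10(coeffs, frame_rgb):
    """Apply the fitted model to the mean R, G, B of a region of interest."""
    means = frame_rgb.reshape(-1, 3).mean(axis=0)
    return float(coeffs[0] + coeffs[1:] @ means)
```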

  9. [Research Award providing funds for a tracking video camera

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3 m square arena as they navigate singly indoors, guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is further than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. It raises many questions for future research that we are now pursuing.

  10. High speed imaging television system

    DOEpatents

    Wilkinson, William O.; Rabenhorst, David W.

    1984-01-01

    A television system for observing an event, which provides a composite video output comprising the serially interlaced images from a plurality of individual cameras, such that the effective time resolution of the system is greater than the time resolution of any of the individual cameras.

  11. A New Methodology for Studying Dynamics of Aerosol Particles in Sneeze and Cough Using a Digital High-Vision, High-Speed Video System and Vector Analyses

    PubMed Central

    Nishimura, Hidekazu; Sakata, Soichiro; Kaga, Akikazu

    2013-01-01

    Microbial pathogens of respiratory infectious diseases are often transmitted through particles expelled in sneezes and coughs; therefore, understanding particle movement is important for infection control. Images of sneezes induced by nasal cavity stimulation in healthy adult volunteers were taken by a digital high-vision, high-speed video system equipped with a computer system and treated as a research model. The obtained images were enhanced electronically, converted to digital images every 1/300 s, and subjected to vector analysis of the bioparticles contained in the whole sneeze cloud using automatic image-processing software. The initial velocity of the particles or their clusters in the sneeze was greater than 6 m/s, but decreased as the particles moved forward; the momentum of the particles seemed to be lost by 0.15–0.20 s, after which they started a diffusive movement. An approximate equation for their velocity as a function of elapsed time was obtained from the vector analysis to represent the dynamics of the front-line particles. The methodology was also applied to a cough: microclouds contained in smoke exhaled in a voluntary cough by a volunteer, after smoking one puff of a cigarette, were traced as visible aerodynamic surrogates for the invisible bioparticles of a cough. The smoke cough microclouds had an initial velocity greater than 5 m/s. The fastest microclouds were located at the forefront of the cloud mass moving forward; however, their velocity clearly decreased after 0.05 s and they began to diffuse in the environmental airflow. The maximum direct reach of the particles and microclouds driven by sneezing and coughing, unaffected by environmental airflows, was estimated by calculation using the obtained equations to be about 84 cm and 30 cm from the mouth, respectively, both achieved in about 0.2 s, suggesting that data relating to the dynamics of sneezes and coughs can be obtained by calculation. PMID:24312206
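
    An illustrative sketch of the vector-analysis step, assuming the front-line positions of the cloud have already been tracked: speeds are computed by finite differences at the 1/300 s frame interval and fitted to a simple exponential-decay model. The decay form is an assumption, not necessarily the equation the authors obtained.

```python
# Finite-difference front velocities and a simple v(t) = v0 * exp(-t/tau) fit.
import numpy as np

FRAME_DT = 1.0 / 300.0          # seconds between digitized images

def front_velocities(positions_m):
    """Finite-difference speeds (m/s) from front-line positions in metres."""
    p = np.asarray(positions_m, dtype=float)
    return np.diff(p) / FRAME_DT

def fit_velocity_decay(velocities):
    """Fit ln(v) = ln(v0) - t/tau by least squares; returns (v0, tau).
    Assumes the velocities are positive and decaying."""
    t = np.arange(len(velocities)) * FRAME_DT
    v = np.asarray(velocities, dtype=float)
    mask = v > 0
    slope, intercept = np.polyfit(t[mask], np.log(v[mask]), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)
```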

  12. A Raman Spectroscopy and High-Speed Video Experimental Study: The Effect of Pressure on the Solid-Liquid Transformation Kinetics of N-octane

    NASA Astrophysics Data System (ADS)

    Liu, C.; Wang, D.

    2015-12-01

    Phase transitions of minerals and rocks in the interior of the Earth, especially at elevated pressures and temperatures, can markedly change their crystal structures and state parameters, so such transitions are very important for the physical and chemical properties of these materials. The transformation between solid and liquid is relatively common in nature, for example the melting of ice and the crystallization of minerals or of water. The kinetics of these transformations can provide valuable information on the reaction rate and the reaction mechanism involving nucleation and growth. An in-situ transformation kinetic study of n-octane, which served as an example of this type of phase transition, was carried out using a hydrothermal diamond anvil cell (HDAC) and a high-speed video technique; the overall purpose of this study is to develop a comprehensive understanding of the reaction mechanism and of the influence of pressure on the different transformation rates. At ambient temperature, the liquid-solid transformation of n-octane first took place with increasing pressure; the solid phase then gradually transformed back into the liquid phase when the sample was heated to a certain temperature. Upon cooling of the system, the liquid-solid transformation occurred again. The established quantitative relationships between the transformation rates, pressure, and temperature showed a negative pressure dependence of the solid-liquid transformation rate, whereas elevated pressure accelerated the liquid-solid transformation rate. Based on the calculated activation energy values, an interfacial reaction and diffusion dominated the solid-liquid transformation, while the liquid-solid transformation was mainly controlled by diffusion. This experimental technique is a powerful and effective tool for the transformation kinetics study of n-octane, and the obtained results are of great significance to the kinetics study
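
    As an illustration of how activation energies of the kind mentioned above can be obtained, the sketch below performs a standard Arrhenius fit, ln k = ln A - Ea/(RT), to rate constants measured at several temperatures; the numbers are synthetic and the workflow is not necessarily the authors'.

```python
# Standard Arrhenius fit of activation energy from rate constants.
import numpy as np

R_GAS = 8.314  # J / (mol K)

def activation_energy(temps_K, rate_constants):
    """Least-squares Arrhenius fit; returns (Ea in J/mol, pre-factor A)."""
    inv_T = 1.0 / np.asarray(temps_K, dtype=float)
    ln_k = np.log(np.asarray(rate_constants, dtype=float))
    slope, intercept = np.polyfit(inv_T, ln_k, 1)
    return float(-slope * R_GAS), float(np.exp(intercept))

# Synthetic example data, for illustration only:
Ea, A = activation_energy([300.0, 320.0, 340.0], [1.2e-4, 4.5e-4, 1.4e-3])
```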

  13. Acceptance/operational test procedure 241-AN-107 Video Camera System

    SciTech Connect

    Pedersen, L.T.

    1994-11-18

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

  14. High speed data compactor

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1988-06-04

    A method and apparatus for suppressing non-informational data words from transmission, from a source of data words such as a video camera. Data words with values greater than a predetermined threshold are transmitted, whereas data words with values below the threshold are not transmitted; instead, their occurrences are counted. Before transmission, both the valid data words and the counts of invalid-data-word occurrences are appended with flag digits, which a receiving system decodes. The original data stream is fully reconstructable from the stream of valid data words and the counts of invalid data words.
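
    A software sketch of the compaction scheme described in the abstract: words at or above a threshold pass through, runs of sub-threshold words are replaced by a count, and a flag distinguishes the two cases so a receiver can rebuild the stream. The tuple-based encoding is illustrative, not the patented word format.

```python
# Threshold-based compaction with run counts for sub-threshold words.
def compact(words, threshold):
    out, run = [], 0
    for w in words:
        if w >= threshold:
            if run:
                out.append(("skip", run))   # flagged count of invalid words
                run = 0
            out.append(("data", w))         # flagged valid word
        else:
            run += 1
    if run:
        out.append(("skip", run))
    return out

def expand(compacted, fill=0):
    """Rebuild the stream; sub-threshold (non-informational) words are
    restored as a fill value."""
    out = []
    for flag, value in compacted:
        out.extend([fill] * value if flag == "skip" else [value])
    return out
```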

  15. High speed photography, videography, and photonics VI; Proceedings of the Meeting, San Diego, CA, Aug. 15-17, 1988

    NASA Astrophysics Data System (ADS)

    Stradling, Gary L.

    1989-06-01

    Recent advances in high-speed optics are discussed in reviews and reports. Topics addressed include ultrafast spectroscopy for atomic and molecular studies, streak-camera technology, ultrafast streak systems, framing and X-ray streak-camera measurements, high-speed video techniques (lighting and analysis), and high-speed photography. Particular attention is given to fsec time-resolved observations of molecular and crystalline vibrations and rearrangements, space-charge effects in the fsec streak tube, noise propagation in streak systems, nsec framing photography for laser-produced interstreaming plasmas, an oil-cooled flash X-ray tube for biomedical radiography, a video tracker for high-speed location measurement, and electrooptic framing holography.

  16. Lori Losey - The Woman Behind the Video Camera

    NASA Video Gallery

    The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

  17. Ultra-high-speed bionanoscope for cell and microbe imaging

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Vo Le, Cuong; Kawano, Hiroyuki; Ishikawa, Ikuko; Miyawaki, Atshushi; Dao, Vu T. S.; Nguyen, Hoang Dung; Yokoi, Sayoko; Yoshida, Shigeru; Nakano, Hitoshi; Takehara, Kohsei; Saito, Yoshiharu

    2008-11-01

    We are developing an ultra-high-sensitivity, ultra-high-speed imaging system for bioscience, mainly for imaging of microbes with visible light and of cells with fluorescence emission. Scarcity of photons is the most serious problem in scientific applications of high-speed imaging. To overcome this problem, the system integrates new technologies consisting of (1) an ultra-high-speed video camera with sub-ten-photon sensitivity and a frame rate of more than 1 mega-frame per second, (2) a microscope with highly efficient use of light, applicable to various unstained and fluorescence cell observations, and (3) very powerful long-pulse-strobe Xenon lights and lasers for microscopes. Various auxiliary technologies to support utilization of the system are also being developed. One example is an efficient video trigger system, which detects the weak signal of a sudden change in a frame under ultra-high-speed imaging by canceling the high-frequency fluctuation of the illumination light. This paper outlines the system and presents preliminary evaluation results.
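
    A hedged sketch of one way the trigger idea could work: each frame is normalized by its global mean to cancel illumination flicker, and an event is declared when the residual frame-to-frame change exceeds a threshold. This is a plausible reading of the abstract, not the authors' circuit.

```python
# Flicker-normalized frame-difference trigger (illustrative only).
import numpy as np

def flicker_normalized(frame):
    """Divide by the global mean to cancel uniform illumination fluctuation."""
    f = frame.astype(float)
    m = f.mean()
    return f / m if m > 0 else f

def detect_event(prev_frame, frame, threshold=0.05):
    """True if the normalized mean absolute difference exceeds the threshold."""
    diff = np.abs(flicker_normalized(frame) - flicker_normalized(prev_frame))
    return diff.mean() > threshold
```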

  18. Operational test procedure 241-AZ-101 waste tank color video camera system

    SciTech Connect

    Robinson, R.S.

    1996-10-30

    The purpose of this procedure is to provide a documented means of verifying that all of the functional components of the 241-AZ- 101 Waste Tank Video Camera System operate properly before and after installation.

  19. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    SciTech Connect

    Kohlman, E.H.

    1995-01-25

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited-exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the applicable tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  20. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
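
    A simplified worked version of the calculation, ignoring the latitude and solar-declination corrections that the full method would include: if the sunset shadow takes Δt seconds to climb a building of height h, the corresponding rotation angle is θ = ωΔt, and the horizon-dip relation cos θ = R/(R + h) gives R ≈ 2h/θ². The numbers below are illustrative only.

```python
# Simplified Earth-radius estimate from the rise time of a sunset shadow.
import math

OMEGA = 2.0 * math.pi / 86400.0       # Earth's rotation rate, rad/s (solar day)

def earth_radius(height_m, delta_t_s):
    theta = OMEGA * delta_t_s          # angle the Earth rotates during delta_t
    return 2.0 * height_m / theta ** 2 # from cos(theta) = R / (R + h)

# e.g. a shadow taking ~54 s to rise 50 m gives roughly 6500 km,
# the right order of magnitude for the Earth's radius.
print(earth_radius(50.0, 54.0) / 1000.0, "km")
```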

  1. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    ERIC Educational Resources Information Center

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  2. Measuring 8–250 ps short pulses using a high-speed streak camera on kilojoule, petawatt-class laser systems

    SciTech Connect

    Qiao, J.; Jaanimagi, P. A.; Boni, R.; Bromage, J.; Hill, E.

    2013-07-15

    Short-pulse measurements using a streak camera are sensitive to space-charge broadening, which depends on the pulse duration and shape, and on the uniformity of photocathode illumination. An anamorphic-diffuser-based beam-homogenizing system and a space-charge-broadening calibration method were developed to accurately measure short pulses using an optical streak camera. This approach provides a more-uniform streak image and enables one to characterize space-charge-induced pulse-broadening effects.

  3. High-speed imaging of explosive eruptions: applications and perspectives

    NASA Astrophysics Data System (ADS)

    Taddeucci, Jacopo; Scarlato, Piergiorgio; Gaudin, Damien; Capponi, Antonio; Alatorre-Ibarguengoitia, Miguel-Angel; Moroni, Monica

    2013-04-01

    Explosive eruptions, being by definition highly dynamic over short time scales, necessarily call for observational systems capable of relatively high sampling rates. "Traditional" tools, such as seismic and acoustic networks, have recently been joined by Doppler radar and electric sensors. Recent developments in high-speed camera systems now allow direct visual information on eruptions to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Here we summarize the methods employed to gather and process high-speed videos of explosive eruptions, and provide an overview of the several applications of this new type of data in understanding different aspects of explosive volcanism. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS: FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480-120x640 pixel resolution; and 3) two acoustic-to-infrasonic sensors. All instruments are time-synchronized via a data-logging system, a hand- or software-operated trigger, and GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or to be readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20 minutes (and removed in less than 2 minutes, if needed). So far, explosive eruptions have been recorded at high speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafiallajokull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclast ejection. High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian

  4. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    SciTech Connect

    Kohlman, E.H.

    1994-11-17

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The single-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include, but are not limited to, core sampling, auger activities, crust layer examination, and monitoring of equipment installation/removal. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program.

  5. High speed handpieces

    PubMed Central

    Bhandary, Nayan; Desai, Asavari; Shetty, Y Bharath

    2014-01-01

    High speed instruments are versatile instruments used by clinicians in all specialties of dentistry. It is important for clinicians to understand the types of high speed handpieces available and their working mechanisms. The Centers for Disease Control and Prevention have repeatedly issued guidelines for the disinfection and sterilization of high speed handpieces. This article presents recent developments in the design of high speed handpieces. With a view to preventing hospital-associated infections, significant importance has been given to the disinfection, sterilization, and maintenance of high speed handpieces. How to cite the article: Bhandary N, Desai A, Shetty YB. High speed handpieces. J Int Oral Health 2014;6(1):130-2. PMID:24653618

  6. Lights! Camera! Action! Handling Your First Video Assignment.

    ERIC Educational Resources Information Center

    Thomas, Marjorie Bekaert

    1989-01-01

    The author discusses points to consider when hiring and working with a video production company to develop a video for human resources purposes. Questions to ask the consultants are included, as is information on the role of the company liaison and on how to avoid expensive, time-wasting pitfalls. (CH)

  7. Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing

    ERIC Educational Resources Information Center

    Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson

    2008-01-01

    Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…

  8. Accuracy potential of large-format still-video cameras

    NASA Astrophysics Data System (ADS)

    Maas, Hans-Gerd; Niederoest, Markus

    1997-07-01

    High resolution digital stillvideo cameras have found wide interest in digital close-range photogrammetry over the last five years. They can be considered fully autonomous digital image acquisition systems without the requirement of a permanent connection to an external power supply and a host computer for camera control and data storage, thus allowing convenient data acquisition in many applications of digital photogrammetry. The accuracy potential of stillvideo cameras has been extensively discussed. While large format CCD sensors themselves can be considered very accurate measurement devices, the lenses, camera bodies, and sensor mounts of stillvideo cameras are not necessarily of comparable stability; compression techniques used in image storage may also affect the accuracy potential. This presentation shows recent experiences from accuracy tests with a number of large format stillvideo cameras, including a modified Kodak DCS200, a Kodak DCS460, a Nikon E2 and a Polaroid PDC-2000. The tests include absolute and relative measurements and were performed using strong photogrammetric networks and good external reference. The results indicate that very high accuracies can be achieved with large blocks of stillvideo imagery, especially in deformation measurements. In absolute measurements, however, the accuracy potential of the large format CCD sensors is partly ruined by a lack of stability of the cameras.

  9. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  10. RECON 6: A real-time, wide-angle, solid-state reconnaissance camera system for high-speed, low-altitude aircraft

    NASA Technical Reports Server (NTRS)

    Labinger, R. L.

    1976-01-01

    The maturity of self-scanned, solid-state, multielement photosensors makes the realization of "real time" reconnaissance photography viable and practical. A system built around these sensors which can be constructed to satisfy the requirements of the tactical reconnaissance scenario is described. The concept chosen is the push broom strip camera system -- RECON 6 -- which represents the least complex and most economical approach for an electronic camera capable of providing a high level of performance over a 140 deg wide, continuous swath at altitudes from 200 to 3,000 feet and at minimum loss in resolution at higher altitudes.