Science.gov

Sample records for high-speed video camera

  1. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and interviews, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996, using the same sensor developed for the previous camera; its frame rate is 50 million fps for triple framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is under way, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by other researchers are also briefly reviewed.

  2. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems are used across a wide range of science and engineering. Although high-speed camera systems have been improved to high performance, most applications only use them to obtain high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain additional information, such as the temperature of combustion flames, thermal plasmas, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256-level (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement between images taken by the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, this method reduced the displacement to at most 0.2 pixels.
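
    The displacement adjustment between the two or three sensor images can be illustrated with a standard sub-pixel registration step. The sketch below is not the authors' program; it assumes phase correlation (via scikit-image) as a stand-in for their pixel-based adjustment, with hypothetical image arrays `reference` and `moving`.

        import numpy as np
        from skimage.registration import phase_cross_correlation
        from scipy.ndimage import shift as nd_shift

        def align_sensor_image(reference, moving, upsample_factor=20):
            """Estimate and correct the sub-pixel displacement between two sensor
            images of the same scene. Phase correlation is used here purely as an
            illustrative stand-in for the paper's pixel-based adjustment."""
            displacement, _, _ = phase_cross_correlation(
                reference, moving, upsample_factor=upsample_factor)  # (dy, dx), ~1/20 pixel
            return nd_shift(moving, displacement, order=3)           # resample onto reference grid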

  3. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. The camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera with this short a resolution time becomes possible.

  4. HDR ¹⁹²Ir source speed measurements using a high speed video camera

    SciTech Connect

    Fonseca, Gabriel P.; Rubo, Rodrigo A.; Sales, Camila P. de; Verhaegen, Frank

    2015-01-15

    Purpose: The dose delivered with an HDR ¹⁹²Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component depends directly on the source speed profile, and the goal of this study is to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a ¹⁹²Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s, with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases and assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses that are within 1.4% of commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.

  5. Introducing Contactless Blood Pressure Assessment Using a High Speed Video Camera.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2016-04-01

    Recent studies demonstrated that blood pressure (BP) can be estimated using pulse transit time (PTT). For PTT calculation, a photoplethysmogram (PPG) is usually used to detect a time lag in pulse wave propagation that is correlated with BP. Until now, PTT and PPG were registered using a set of body-worn sensors. In this study a new methodology is introduced that allows contactless registration of PTT and PPG using a high speed camera, resulting in corresponding image-based PTT (iPTT) and image-based PPG (iPPG). The iPTT value can potentially be utilized for blood pressure estimation; however, the extent of correlation between iPTT and BP is unknown. The goal of this preliminary feasibility study was to introduce the methodology for contactless generation of iPPG and iPTT and to make an initial estimation of the extent of correlation between iPTT and BP "in vivo." A short cycling exercise was used to generate BP changes in healthy adult volunteers over three consecutive visits. BP was measured by a verified BP monitor simultaneously with iPTT registration at three exercise points: rest, exercise peak, and recovery. iPPG was simultaneously registered at two body locations during the exercise using a high speed camera at 420 frames per second. iPTT was calculated as the time lag between the pulse waves obtained from two iPPGs registered by simultaneous recording of the head and palm areas. The average inter-person correlation between PTT and iPTT was 0.85 ± 0.08. The range of inter-person correlations between PTT and iPTT was from 0.70 to 0.95 (p < 0.05). The average inter-person coefficient of correlation between SBP and iPTT was -0.80 ± 0.12. The range of correlations between systolic BP and iPTT was from 0.632 to 0.960, with p < 0.05 for most of the participants. Preliminary data indicated that a high speed camera can potentially be utilized for unobtrusive contactless monitoring of abrupt blood pressure changes in a variety of settings. The initial prototype system was able to
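
    As a rough illustration of how iPTT can be derived from two iPPG waveforms, the sketch below estimates the lag of maximum cross-correlation between hypothetical head and palm signals sampled at 420 fps. The signal names and the correlation approach are assumptions for illustration, not the authors' exact processing pipeline.

        import numpy as np

        def iptt_from_ippg(head_ippg, palm_ippg, fps=420):
            """Estimate image-based pulse transit time (seconds) as the lag that
            maximizes the cross-correlation between two iPPG waveforms."""
            head = (head_ippg - np.mean(head_ippg)) / np.std(head_ippg)
            palm = (palm_ippg - np.mean(palm_ippg)) / np.std(palm_ippg)
            xcorr = np.correlate(palm, head, mode="full")
            lags = np.arange(-len(head) + 1, len(head))
            best_lag = lags[np.argmax(xcorr)]   # frames by which the palm pulse trails the head pulse
            return best_lag / fps               # convert frames to seconds at 420 fps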

  6. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  7. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  8. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  9. High Speed Video for Airborne Instrumentation Application

    NASA Technical Reports Server (NTRS)

    Tseng, Ting; Reaves, Matthew; Mauldin, Kendall

    2006-01-01

    A flight-worthy high speed color video system has been developed. Extensive system development and ground and environmental testing has yielded a flight-qualified High Speed Video System (HSVS). This HSVS was initially used on the F-15B #836 for the Lifting Insulating Foam Trajectory (LIFT) project.

  10. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

    2008-11-01

    Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from the casings, so we must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used in a severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. It was confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualization at angles that would be unachievable under normal circumstances.

  11. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  12. High Speed and Slow Motion: The Technology of Modern High Speed Cameras

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2011-01-01

    The enormous progress in the fields of microsystem technology, microelectronics and computer science has led to the development of powerful high speed cameras. Recently a number of such cameras became available as low cost consumer products which can also be used for the teaching of physics. The technology of high speed cameras is discussed,…

  13. High-speed multicolour photometry with CMOS cameras

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Zhilyaev, B. E.; Reshetnyk, V. M.

    2012-11-01

    We present the results of testing the commercial digital camera Nikon D90 with a CMOS sensor for high-speed photometry with a small Celestron 11'' telescope at the Peak Terskol Observatory. The CMOS sensor allows photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system of CMOS sensors is close to the Johnson BVR system. The results of testing show that one can carry out photometric measurements with CMOS cameras for stars with V-magnitudes up to ≃14^{m} with a precision of 0.01^{m}. Stars with V-magnitudes up to ˜10 can be shot at 24 frames per second in video mode.

  14. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s⁻¹ and Tracker Video Analysis (Tracker) software. We present empirical data for…

  15. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains videos showing the results of hypervelocity impact. One shows a projectile impact on a Kevlar-wrapped aluminum bottle containing 3000 psi gaseous oxygen. Another video shows animations of a two-stage light gas gun.

  16. High Speed Video Measurements of a Magneto-optical Trap

    NASA Astrophysics Data System (ADS)

    Horstman, Luke; Graber, Curtis; Erickson, Seth; Slattery, Anna; Hoyt, Chad

    2016-05-01

    We present a video method to observe the mechanical properties of a lithium magneto-optical trap. A sinusoidally amplitude-modulated laser beam perturbed a collection of trapped ⁷Li atoms and the oscillatory response was recorded with a NAC Memrecam GX-8 high speed camera at 10,000 frames per second. We characterized the trap by modeling the oscillating cold atoms as a damped, driven, harmonic oscillator. Matlab scripts tracked the atomic cloud movement and relative phase directly from the captured high speed video frames. The trap spring constant, with magnetic field gradient bz = 36 G/cm, was measured to be (4.5 ± 0.5) × 10⁻¹⁹ N/m, which implies a trap resonant frequency of 988 ± 55 Hz. Additionally, at bz = 27 G/cm the spring constant was measured to be (2.3 ± 0.2) × 10⁻¹⁹ N/m, which corresponds to a resonant frequency of 707 ± 30 Hz. These properties at bz = 18 G/cm were found to be (8.8 ± 0.5) × 10⁻²⁰ N/m and 438 ± 13 Hz. NSF #1245573.
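
    The quoted resonant frequencies follow from the harmonic-oscillator relation f = (1/2π)√(k/m). A minimal check using the reported spring constant at bz = 36 G/cm and the mass of a ⁷Li atom (standard constants, assumed here for illustration):

        import math

        u = 1.66054e-27          # kg per atomic mass unit
        m = 7.016 * u            # mass of one 7Li atom (kg)
        k = 4.5e-19              # reported trap spring constant (N/m)

        f = math.sqrt(k / m) / (2 * math.pi)   # resonant frequency of a harmonic trap
        print(f"{f:.0f} Hz")                   # ~989 Hz, consistent with the quoted 988 +/- 55 Hz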

  17. High-speed camera with internal real-time image processing

    NASA Astrophysics Data System (ADS)

    Paindavoine, M.; Mosqueron, R.; Dubois, J.; Clerc, C.; Grapin, J. C.; Tomasini, F.

    2005-08-01

    High-speed video cameras are powerful tools for investigating, for instance, the dynamics of fluids or the movements of mechanical parts in manufacturing processes. In recent years, the use of CMOS sensors instead of CCDs has made possible the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this field, we designed a new fast CMOS camera with a 1280 × 1024-pixel resolution at 500 fps. In order to transmit from the camera only the useful information contained in the fast images, we studied specific algorithms such as edge detection, wavelet analysis, image compression, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. This FPGA technology allows us to process fast images in real time.

  18. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm.

    PubMed

    Li, David D-U; Arlt, Jochen; Tyndall, David; Walker, Richard; Richardson, Justin; Stoppa, David; Charbon, Edoardo; Henderson, Robert K

    2011-09-01

    A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm complementary metal-oxide-semiconductor single-photon avalanche diode (SPAD) plus time-to-digital converter array. A simple look-up table is included to enhance the lifetime resolvability range and photon economics, making it comparable to the commonly used least-square method and maximum-likelihood estimation based software. To demonstrate its performance, a widefield microscope was adapted to accommodate the SPAD array and image different test samples. Fluorescence lifetime imaging microscopy on fluorescent beads in Rhodamine 6G at a frame rate of 50 fps is also shown. PMID:21950926
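
    For readers unfamiliar with the center of mass method (CMM), the sketch below shows its textbook form for a mono-exponential decay: the first moment of the photon-arrival histogram approximates the lifetime. The bin structure is hypothetical, and the look-up-table correction for the finite measurement window mentioned in the abstract is omitted.

        import numpy as np

        def cmm_lifetime(histogram, bin_width_ns):
            """Center-of-mass estimate of a mono-exponential fluorescence lifetime
            from a TCSPC-style histogram (photon counts per time bin). Without the
            finite-window correction, this is only the basic form of the method."""
            t = (np.arange(len(histogram)) + 0.5) * bin_width_ns  # bin-center times (ns)
            return np.sum(t * histogram) / np.sum(histogram)      # first moment ~ lifetime tau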

  19. Using High-Speed Video to Examine Differential Roller Ginning of Upland Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A digital high-speed video camera was used to show what occurs as upland fiber is being pulled off of cottonseed at the ginning point on a roller gin stand. The study included a conventional ginning treatment, and a treatment that attempted to selectively remove only the longer fibers off of cotton...

  20. In a Hurry to Work with High-Speed Video at School?

    ERIC Educational Resources Information Center

    Heck, Andre; Uylings, Peter

    2010-01-01

    In 2008, Casio Computer Co., Ltd., brought high-speed video to the consumer level with the release of the EXILIM Pro EX-F1 and the EX-FH20 digital camera.® The EX-F1 point-and-shoot camera can shoot up to 60 six-megapixel photos per second and capture movies at up to 1200 frames per second. All this, for a price of about US $1000 at the time of…

  1. High-speed TV cameras for streak tube readout

    SciTech Connect

    Yates, G.J.; Gallegos, R.A.; Holmes, V.H. ); Turko, B.T. )

    1991-01-01

    Two fast framing TV cameras have been characterized and compared as readout media for imaging of 40 mm diameter streak tube (P-11) phosphor screens. One camera is based upon a Focus-Projection-Scan (FPS) high-speed electrostatically deflected vidicon with a 30-mm-diameter PbO target. The other uses an interline transfer charge-coupled device (CCD) with an 8.8 × 11.4 mm rectangular Si target. The field-of-view (FOV), resolution, responsivity, and dynamic range provided by both cameras when exposed to short duration (≈10 μs full width at half maximum (FWHM)) transient illumination followed by a single field readout period of <3 ms are presented. 11 refs., 8 figs., 3 tabs.

  2. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center-developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot, or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock, or negative edge, location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to provide serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
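
    A software analogue of the negative-peak search is sketched below, assuming a 1-D NumPy intensity profile from the line scan sensor; the smoothing window and argmin approach are illustrative choices, not the smart camera's actual analog/digital circuit.

        import numpy as np

        def shock_pixel(profile, window=5):
            """Locate the shock shadowgraph as the deepest dip in a linear intensity
            profile. A moving-average filter suppresses single-pixel noise before
            the minimum (negative peak) is taken."""
            kernel = np.ones(window) / window
            smoothed = np.convolve(profile.astype(float), kernel, mode="same")
            return int(np.argmin(smoothed))   # pixel index of the negative peak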

  3. Parallel image compression circuit for high-speed cameras

    NASA Astrophysics Data System (ADS)

    Nishikawa, Yukinari; Kawahito, Shoji; Inoue, Toru

    2005-02-01

    In this paper, we propose 32 parallel image compression circuits for high-speed cameras. The proposed compression circuits are based on a 4 × 4-point 2-dimensional DCT using a DA method, zigzag scanning of 4 blocks of the 2-D DCT coefficients, and 1-dimensional Huffman coding. The compression engine is designed with FPGAs, and its hardware complexity is compared with the JPEG algorithm. It is found that the proposed compression circuits require much less hardware, leading to a compact high-speed implementation of the image compression circuits using a parallel processing architecture. The PSNR of the reconstructed image using the proposed encoding method is better than that of JPEG in the low-compression-ratio region.
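
    A minimal software sketch of one channel of such a pipeline is shown below for a single 4 × 4 block: forward 2-D DCT, zigzag ordering, and coefficient truncation standing in for the quantization and Huffman stages. It assumes SciPy's DCT routines and is not the paper's DA-based hardware design.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block4x4, keep=6):
            """Toy 4x4-block compressor: forward 2-D DCT, zigzag-order the
            coefficients, keep only the first `keep` of them, then reconstruct."""
            zigzag = [(0,0),(0,1),(1,0),(2,0),(1,1),(0,2),(0,3),(1,2),
                      (2,1),(3,0),(3,1),(2,2),(1,3),(2,3),(3,2),(3,3)]
            coeffs = dctn(block4x4.astype(float), norm="ortho")
            kept = np.zeros_like(coeffs)
            for r, c in zigzag[:keep]:
                kept[r, c] = coeffs[r, c]
            return idctn(kept, norm="ortho")   # reconstructed block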

  4. High speed web printing inspection with multiple linear cameras

    NASA Astrophysics Data System (ADS)

    Shi, Hui; Yu, Wenyong

    2011-12-01

    Purpose: To detect defects that arise during the high speed web printing process, such as smudges, doctor streaks, pin holes, character misprints, foreign matter, hazing, and wrinkles, which are the main factors affecting the quality of printed products. Methods: A novel machine vision system is used to detect the defects. This system combines distributed data processing with multiple linear cameras, an effective anti-blooming illumination design, and a fast image processing algorithm based on blob searching. Pattern matching adapted to paper tension and snake-like web movement is also emphasized. Results: Experimental results verify the speed, reliability, and accuracy of the proposed system, by which most of the main defects are inspected in real time at a web speed of 300 m/min. Conclusions: High speed quality inspection of large-size webs requires multiple linear cameras in a distributed data processing system. The material characteristics of the printed products should also be considered when designing the optical structure, so that tiny web defects can be inspected with suitable angles of illumination.

  5. High Speed Video Applications In The Pharmaceutical Industry

    NASA Astrophysics Data System (ADS)

    Stapley, David

    1985-02-01

    The pursuit of quality is essential in the development and production of drugs. The pursuit of excellence is relentless, a never-ending search. In the pharmaceutical industry, we all know and apply wide-ranging techniques to assure quality production. We all know that in reality none of these techniques is perfect for all situations. We have all experienced the damaged foil, blister or tube, the missing leaflet, the 'hard to read' batch code. We are all aware of the need to supplement the traditional techniques of fault finding. This paper shows how high speed video systems can be applied to fully automated filling and packaging operations as a tool to aid the company's drive for high quality and productivity. The range of products involved totals some 350 in approximately 3,000 pack variants, encompassing creams, ointments, lotions, capsules, tablets, and parenteral and sterile antibiotics. Pharmaceutical production demands diligence at all stages, with optimum use of the techniques offered by the latest technology. Figure 1 shows typical stages of pharmaceutical production in which quality must be assured, and highlights those stages where the use of high speed video systems has proved of value to date. The use of high speed video systems begins with the very first use of machines and materials, commissioning and validation (the term used for determining that a process is capable of consistently producing the requisite quality), and continues to support in-process monitoring throughout the life of the plant. The activity of validation in the packaging environment is particularly in need of a tool that can reveal the nature of high speed faults, no matter how infrequently they occur, so that informed changes can be made precisely and rapidly. The prime use of this tool is to ensure that machines are less sensitive to minor variations in component characteristics.

  6. In a Hurry to Work with High-Speed Video at School?

    NASA Astrophysics Data System (ADS)

    Heck, André; Uylings, Peter

    2010-03-01

    In 2008, Casio Computer Co., Ltd., brought high-speed video to the consumer level with the release of the EXILIM Pro EX-F1 and the EX-FH20 digital camera.® The EX-F1 point-and-shoot camera can shoot up to 60 six-megapixel photos per second and capture movies at up to 1200 frames per second. All this, for a price of about US $1000 at the time of introduction and with an ease of operation that allows high school students to be working with the camera within 10 minutes. The EX-FH20 is a more compact, more user-friendly, and cheaper high-speed camera that can still shoot up to 40 photos per second and capture movies at up to 1000 fps. New camera models appear yearly, and prices have come down to about US $250-300 for a decent high-speed camera. For more details we refer to Casio's website.

  7. Characterization of high-speed video systems: tests and analyses

    NASA Astrophysics Data System (ADS)

    Carlton, Patrick N.; Chenette, Eugene R.; Rowe, W. J.; Snyder, Donald R.

    1992-01-01

    The current method of munitions systems testing uses film cameras to record airborne events such as store separation. After film exposure, much time is spent in developing the film and analyzing the images. If the analysis uses digital methods, additional time is required to digitize the images preparatory to the analysis phase. Because airborne equipment parameters such as exposure time cannot be adjusted in flight, images often suffer as a result of changing lighting conditions. Image degradation from other sources may occur in the film development process, and during digitizing. Advances in the design of charge-coupled device (CCD) cameras and mass storage devices, coupled with sophisticated data compression and transmission systems, provide the means to overcome these shortcomings. A system can be developed where the image sensor provides an analog electronic signal and, consequently, images can be digitized and stored using digital mass storage devices or transmitted to a ground station for immediate viewing and analysis. All electronic imaging and processing offers the potential for improved data quality, rapid response time and closed loop operation. This paper examines high speed, high resolution imaging system design issues assuming an electronic image sensor will be used. Experimental data and analyses are presented on the resolution capability of current film and digital image processing technology. Electrical power dissipation in a high speed, high resolution CCD array is also analyzed.

  8. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    ERIC Educational Resources Information Center

    Lee, Victor R.

    2015-01-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…

  9. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  10. High-speed optical shutter coupled to fast-readout CCD camera

    NASA Astrophysics Data System (ADS)

    Yates, George J.; Pena, Claudine R.; McDonald, Thomas E., Jr.; Gallegos, Robert A.; Numkena, Dustin M.; Turko, Bojan T.; Ziska, George; Millaud, Jacques E.; Diaz, Rick; Buckley, John; Anthony, Glen; Araki, Takae; Larson, Eric D.

    1999-04-01

    A high frame rate, optically shuttered CCD camera for radiometric imaging of transient optical phenomena has been designed, and several prototypes have been fabricated and are now in the evaluation phase. The camera design incorporates stripline-geometry image intensifiers as ultrafast image shutters capable of 200 ps exposures. The intensifiers are fiber-optically coupled to a multiport CCD capable of 75 MHz pixel clocking to achieve a 4 kHz frame rate for 512 × 512 pixels from simultaneous readout of 16 individual segments of the CCD array. The intensifier, a Philips XX1412MH/E03, is generically a Generation II proximity-focused microchannel plate intensifier (MCPII) redesigned for high speed gating by Los Alamos National Laboratory and manufactured by Philips Components. The CCD is a Reticon HSO512 split-storage device with bidirectional vertical readout architecture. The camera mainframe is designed using a multilayer motherboard that transports CCD video signals and clocks via embedded stripline buses designed for 100 MHz operation. The MCPII gate duration and gain variables are controlled and measured in real time and updated for data logging each frame, with 10-bit resolution, selectable either locally or by computer. The camera provides both analog and 10-bit digital video. The camera's architecture, salient design characteristics, and current test data depicting resolution, dynamic range, shutter sequences, and image reconstruction will be presented and discussed.

  11. High-speed video recording with the TDAS

    NASA Astrophysics Data System (ADS)

    Liu, Daniel W.; Griesheimer, Eric D.; Kesler, Lynn O.

    1990-08-01

    The Tracker Data Acquisition System (TDAS) is a system architecture for high speed data recording and analysis. The device utilizes dual Direct Memory Access (DMA) channels, parallel Small Computer System Interface (SCSI) channels, and multiple SCSI hard drives. Video-rate data capture and storage is accomplished on 16-bit digital data at rates up to 15 MHz. The average data rate is approximately 1 Megabyte per second to the current hard disk drives, with instantaneous rates up to 5 Megabytes per second. A message protocol enables symbology and frame data to be stored concurrently with the windowed image data. Dual parallel image buffers store 512 Kilobytes of raw image data for each frame and pass windowed data to the storage drives via the SCSI interfaces. Microcomputer control of the DMA, counter input/output, serial communications controller, and FIFOs is accomplished with a 16-bit processor, which efficiently stores the video and ancillary data. Off-line storage is accomplished on 60 Megabyte streaming tape units for image and data dumps. Current applications include real-time multimode tracker performance recording as well as statistical post-processing of system parameters. Data retrieval is driven by a separate microcomputer, providing laboratory frame-by-frame analysis of the video images and symbology. The TDAS currently supports 80 Megabytes of on-line storage, but can easily be expanded to 400 Megabytes. Phase 2 of the TDAS will include real-time playback of video images to recreate recorded scenarios. This paper describes the system architecture and implementation of the TDAS, with current applications.

  12. Motion Analysis Of An Object Onto Fine Plastic Beads Using High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Sato, Minoru

    2010-07-01

    Fine spherical polystyrene beads (NaRiKa, D20-1406-01, an industrial styrene-foam material) are useful for nearly frictionless demonstrations of dynamics and kinematics. Sawamoto et al. have developed a method of demonstration using the plastic beads on a glass board. These fine beads (average diameter 280 μm, standard deviation 56 μm) function as ball bearings that reduce the friction between a moving object, a glass Petri dish, and the surface of the glass board. The charged beads stick to the glass board by static electricity and arrange themselves at intervals. The movement characteristics of a Petri dish moving over the fine polystyrene beads adhering to the glass board are shown by video analysis using a USB camera and a high-speed camera (CASIO EX-F1). The movement of the Petri dish on the fine polystyrene beads on the glass board shows good linearity, but the friction due to the beads is not negligibly small. The high-speed video showed that only a small number of beads behind the bottom of the Petri dish supported the dish. The friction caused by the beads that supported the Petri dish was about 0.14.

  13. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  14. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging, and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, so converting monoscopic content to S3D becomes a viable alternative. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The method can be used on arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine multiple depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not comparable to real time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, and hence produces the 3D video output without human intervention; with the above-mentioned features it becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
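
    A simplified sketch of the depth-assignment idea is given below: foreground pixels whose base sits lower in the frame receive larger nearness values. The per-column treatment and linear mapping are assumptions for illustration, not the paper's exact algorithm.

        import numpy as np

        def depth_from_bottom_y(foreground_mask):
            """Assign a rough depth map from the y coordinate of the bottom-most
            foreground pixel in each column: a lower base in the image is treated
            as nearer to the camera."""
            h, w = foreground_mask.shape
            depth = np.zeros((h, w), dtype=float)
            for x in range(w):
                ys = np.nonzero(foreground_mask[:, x])[0]
                if ys.size:
                    nearness = ys.max() / (h - 1)      # 0 = far (top), 1 = near (bottom)
                    depth[ys, x] = nearness
            return depth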

  15. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by, on average, distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to view a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G in the stationary sensors, and model AF-S ED 24 mm 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four days of manual RAMMER operation during the 2012 and 2013 campaigns. On Feb. 18th the data set comprises 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  16. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TM) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help understand how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2

  17. Using a High-Speed Camera to Measure the Speed of Sound

    NASA Astrophysics Data System (ADS)

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the Exilim can capture 1000 frames a second, it provides an easy way for students to calculate the speed of sound by counting video frames from a sound-triggered event they can see. For our experiment, we popped a balloon at a measured distance from a sound-activated high-output LED while recording high-speed video for later analysis. The beauty of using this as the method for calculating the speed of sound is that the software required for frame-by-frame analysis is free and the idea itself (slow motion) is simple. This allows even middle school students to measure the speed of sound with assistance, but the ability to independently verify such a basic result is invaluable for high school or college students.
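
    The calculation the students perform reduces to distance divided by the frame-counted delay. A worked example with hypothetical numbers (not taken from the paper):

        # Hypothetical values for illustration: balloon popped 17.0 m from the LED,
        # flash-to-sound delay of 50 frames at 1000 frames per second.
        distance_m = 17.0
        frames_elapsed = 50
        fps = 1000

        delay_s = frames_elapsed / fps          # each frame is 1 ms at 1000 fps
        speed_of_sound = distance_m / delay_s   # 17.0 m / 0.050 s = 340 m/s
        print(f"{speed_of_sound:.0f} m/s")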

  18. Head-mountable high speed camera for optical neural recording

    PubMed Central

    Park, Joon Hyuk; Platisa, Jelena; Verhagen, Justus V.; Gautam, Shree H.; Osman, Ahmad; Kim, Dongsoo; Pieribone, Vincent A.; Culurciello, Eugenio

    2011-01-01

    We report a head-mountable CMOS camera for recording rapid neuronal activity in freely-moving rodents using fluorescent activity reporters. This small, lightweight camera is capable of detecting small changes in light intensity (0.2% ΔI/I) at 500 fps. The camera has a resolution of 32 × 32, sensitivity of 0.62 V/lux·s, conversion gain of 0.52 μV/e- and well capacity of 2.1 Me-. The camera, containing intensity offset subtraction circuitry within the imaging chip, is part of a miniaturized epi-fluorescent microscope and represents a first generation, mobile scientific-grade, physiology imaging camera. PMID:21763348

  19. Design and application of a digital array high-speed camera system

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Yao, Xuefeng; Ma, Yinji; Yuan, Yanan

    2016-03-01

    In this paper, a digital array high-speed camera system is designed and applied to a dynamic fracture experiment. First, the design scheme for a 3 × 3 array digital high-speed camera system is presented, including a 3 × 3 array light-emitting diode (LED) light source unit, a 3 × 3 array charge-coupled device (CCD) camera unit, a timing delay control unit, an optical imaging unit, and an impact loading unit. Second, the influence of geometric optical parameters on optical parallax is analyzed based on the geometric optical imaging mechanism. Finally, combining the method of dynamic caustics with the digital high-speed camera system, the dynamic fracture behavior of crack initiation and propagation in a PMMA specimen under low-speed impact is investigated to verify the feasibility of the high-speed camera system.

  20. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  1. Development of High Speed Digital Camera: EXILIM EX-F1

    NASA Astrophysics Data System (ADS)

    Nojima, Osamu

    The EX-F1 is a high speed digital camera featuring a revolutionary improvement in burst shooting speed that is expected to create entirely new markets. This model incorporates a high speed CMOS sensor and a high speed LSI processor. With this model, CASIO has achieved an ultra-high-speed 60 frames per second (fps) burst rate for still images, together with 1,200 fps high speed movie recording that captures movements that cannot even be seen by the human eye. Moreover, this model can record movies in full high definition. After its launch, the camera received high praise as an innovative product. We introduce the concept, features, and technologies of the EX-F1.

  2. Automated High-Speed Video Detection of Small-Scale Explosives Testing

    NASA Astrophysics Data System (ADS)

    Ford, Robert; Guymon, Clint

    2013-06-01

    Small-scale explosives sensitivity test data is used to evaluate hazards of processing, handling, transportation, and storage of energetic materials. Accurate test data is critical to implementation of engineering and administrative controls for personnel safety and asset protection. Operator mischaracterization of reactions during testing contributes to either excessive or inadequate safety protocols. Use of equipment and associated algorithms to aid the operator in reaction determination can significantly reduce operator error. Safety Management Services, Inc. has developed an algorithm to evaluate high-speed video images of sparks from an ESD (Electrostatic Discharge) machine to automatically determine whether or not a reaction has taken place. The algorithm with the high-speed camera is termed GoDetect (patent pending). An operator assisted version for friction and impact testing has also been developed where software is used to quickly process and store video of sensitivity testing. We have used this method for sensitivity testing with multiple pieces of equipment. We present the fundamentals of GoDetect and compare it to other methods used for reaction detection.
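
    For illustration only, a generic frame-statistics approach to go/no-go detection is sketched below; it is not the GoDetect algorithm (which is patent pending), and the brightness-threshold logic and parameters are assumptions.

        import numpy as np

        def reaction_detected(frames, spark_frame_idx, threshold=3.0):
            """Flag a possible reaction in a high-speed ESD video by comparing
            mean frame brightness after the spark with the pre-spark baseline.
            A sustained excursion well above baseline noise suggests ignition."""
            baseline = np.array([f.mean() for f in frames[:spark_frame_idx]])
            post = np.array([f.mean() for f in frames[spark_frame_idx + 1:]])
            return (post > baseline.mean() + threshold * baseline.std()).sum() >= 3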

  3. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all lightning location systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between the cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
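
    The flat-earth, two-bearing core of a visual triangulation can be sketched as follows; the azimuth-only geometry and two-camera case are simplifying assumptions, since the campaign's actual procedure involves per-camera calibration and additional observables.

        import numpy as np

        def triangulate(cam1_xy, az1_deg, cam2_xy, az2_deg):
            """Intersect two azimuth bearings (degrees clockwise from north) from two
            camera sites with known ground coordinates (metres) to estimate the
            (east, north) position of the observed flash."""
            def direction(az):
                a = np.radians(az)
                return np.array([np.sin(a), np.cos(a)])   # (east, north) unit vector
            p1, p2 = np.asarray(cam1_xy, float), np.asarray(cam2_xy, float)
            d1, d2 = direction(az1_deg), direction(az2_deg)
            # Solve p1 + t1*d1 = p2 + t2*d2 for the two range parameters.
            A = np.column_stack((d1, -d2))
            t1, _ = np.linalg.solve(A, p2 - p1)
            return p1 + t1 * d1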

  4. New measuring concepts using integrated online analysis of color and monochrome digital high-speed camera sequences

    NASA Astrophysics Data System (ADS)

    Renz, Harald

    1997-05-01

    High speed sequences allow a subjective assessment of very fast processes and serve as an important basis for the quantitative analysis of movements. Computer systems help to acquire, handle, display, and store digital image sequences as well as to perform measurement tasks automatically. High speed cameras have been used for several years for safety tests, material testing, and production optimization. To reach the very high speeds of 1000 or more images per second, mainly 16 mm film cameras have been used, which could provide excellent image resolution and the required time resolution. But up to now, most results have been judged only by viewing. For some special applications, such as safety tests using crash or high-g sled tests in the automobile industry, image analysis techniques have also been used to measure the motion characteristics of objects inside the images. High speed films, shot during the short impact, allow judgement of the dynamic scene. Additionally, they serve as an important basis for the quantitative analysis of the very fast movements. Thus exact values of the velocity and acceleration to which the dummies or vehicles are exposed can be derived. For analysis of the sequences, the positions of signalized points (mostly markers fixed by the test engineers before a test) have to be measured frame by frame. The trajectories show the temporal sequence of the test objects and are the basis for calibrated diagrams of distance, velocity, and acceleration, as illustrated in the sketch below. Today, 16 mm film cameras are increasingly being replaced by electronic high speed cameras. The development of high-speed recording systems is very far advanced, and the prices of these systems are becoming comparable to those of traditional film cameras. The resolution has also increased greatly. The new cameras are 'crashproof' and can be used for tasks similar to those of 16 mm film cameras, at similar sizes. High speed video cameras now offer an easy setup and direct access to
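
    A minimal sketch of how velocity and acceleration diagrams can be derived from a frame-by-frame marker track, assuming a calibrated 1-D position array; central differences stand in for the filtering and smoothing that production crash-analysis software normally applies.

        import numpy as np

        def kinematics_from_track(positions_m, fps):
            """Derive velocity and acceleration from a frame-by-frame marker track.
            `positions_m` is a 1-D array of calibrated positions in metres."""
            dt = 1.0 / fps
            velocity = np.gradient(positions_m, dt)        # m/s
            acceleration = np.gradient(velocity, dt)       # m/s^2
            return velocity, acceleration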

  5. Color high-speed video stroboscope system for inspection of human larynx

    NASA Astrophysics Data System (ADS)

    Stasicki, Boleslaw; Meier, G. E. A.

    2001-04-01

    Videostroboscopy of the larynx has become a powerful tool for the study of vocal physiology, assessment of fold abnormalities, motion impairments, and functional disorders, as well as for the early diagnosis of diseases like cancer and pathologies like nodules, carcinoma, polyps, and cysts. Since the vocal folds vibrate in the range of 100 Hz up to 1 kHz, the video stroboscope allows physicians to find otherwise undetectable problems. Color information is essential for the physician in the diagnosis of, e.g., early cancer stages. A previously presented general-purpose monochrome high-speed video stroboscope has also been tested for inspection of the human larynx. Good results encouraged the authors to develop a medical color version. In contrast to conventional stroboscopes, the system does not utilize pulsed light for object illumination. Instead, a special asynchronously shuttered video camera triggered by the oscillating object is used. The apparatus, including a specially developed digital phase shifter, provides stop-phase and slow-motion observation in real time with simultaneous recording of the periodically moving object. The desired position of the vocal folds, or their virtual slowed-down vibration speed, does not depend on voice pitch changes. Sequences of hundreds of high-resolution color frames can be stored on the hard disk in standard graphic formats. Afterwards they can be played back frame by frame or as a video clip, evaluated, exported, printed out, and transmitted via computer networks.

  6. High Speed Intensified Video Observations of TLEs in Support of PhOCAL

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.; Nelson, Thomas E.; Cummer, Steven A.; Lang, Timothy; Miller, Steven; Beavis, Nick; Yue, Jia; Samaras, Tim; Warner, Tom A.

    2013-01-01

    The third observing season of PhOCAL (Physical Origins of Coupling to the upper Atmosphere by Lightning) was conducted over the U.S. High Plains during the late spring and summer of 2013. The goal was to capture, using an intensified high-speed camera, a transient luminous event (TLE), especially a sprite, as well as its parent cloud-to-ground (SP+CG) lightning discharge, preferably within the domain of a 3-D lightning mapping array (LMA). The co-capture of a sprite and its SP+CG was achieved within useful range of an interferometer operating near Rapid City. Other high-speed sprite video sequences were captured above the West Texas LMA. On several occasions the large mesoscale convective complexes (MCSs) producing the TLE-class lightning were also generating vertically propagating convectively generated gravity waves (CGGWs) at the mesopause, which were easily visible using NIR-sensitive color cameras. These were captured concurrently with sprites. These observations follow on from a case on 15 April 2012 in which CGGWs were also imaged by the new Day/Night Band on the Suomi NPP satellite system. The relationship between the CGGWs and sprite initiation is being investigated. The past year was notable for a large number of elve+halo+sprite sequences generated by the same parent CG. On several occasions there appeared to be prominent banded modulations of the elves' luminosity imaged at >3000 ips. These stripes appear coincident with the banded CGGW structure, and presumably its density variations. Several elves and a sprite from negative CGs were also noted. New color imaging systems have been tested and found capable of capturing sprites. Two cases of sprites with an aurora as a backdrop were also recorded. High-speed imaging was also provided in support of the UPLIGHTS program near Rapid City, SD, and the USAFA SPRITES II airborne campaign over the Great Plains.

  7. Investigation of a Plasma Ball using a High Speed Camera

    NASA Astrophysics Data System (ADS)

    Laird, James; Zweben, Stewart; Raitses, Yevgeny; Zwicker, Andrew; Kaganovich, Igor

    2008-11-01

    The physics of how a plasma ball works is not clearly understood. A plasma ball is a commercial "toy" in which a center electrode is charged to a high voltage and lightning-like discharges fill the ball with many plasma filaments. The ball uses a high voltage applied to the center electrode (˜5 kV), which is covered with glass and capacitively coupled to the plasma filaments. This voltage oscillates at a frequency of ˜26 kHz. A Nebula plasma ball from Edmund Scientific was filmed with a Phantom v7.3 camera, which can operate at speeds up to 150,000 frames per second (fps) with a limit of >=2 μs exposure per frame. At 100,000 fps the filaments were only visible for ˜5 μs every ˜40 μs. When the plasma ball is first switched on, the filaments formed only after ˜800 μs and initially had a much larger diameter and more chaotic behavior than when the ball reached its final plasma filament state at ˜30 ms. Measurements are also being made of the final filament diameter, the speed of filament propagation, and the effect of thermal gradients on the filament density. An attempt will be made to explain these results from plasma theory, and movies of these filaments will be shown. Possible theoretical models include streamer-like formation, thermal condensation instability, and dielectric barrier discharge instability.

  8. High-Speed Color Video System For Data Acquisition At 200 Fields Per Second

    NASA Astrophysics Data System (ADS)

    Holzapfel, C.

    1982-02-01

    Nac Incorporated has recently introduced a new high speed color video system which employs a standard VHS color video cassette. Playback can be accomplished on either the HSV-200 or on a standard VHS video recorder/playback unit, such as manufactured by JVC or Panasonic.

  9. High-Speed Video Analysis in a Conceptual Physics Class

    ERIC Educational Resources Information Center

    Desbien, Dwain M.

    2011-01-01

    The use of probe ware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  10. A novel multichannel nonintensified ultra-high-speed camera using multiwavelength illumination

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Madhavan, Vis

    2006-08-01

    Multi-channel gated-intensified cameras are commonly used for capturing images at ultra-high frame rates. However, the image intensifier reduces the image resolution to such an extent that the images are often unsuitable for applications requiring high quality images, such as digital image correlation. We report on the development of a new type of non-intensified multi-channel camera system that permits recording of image sequences at ultra-high frame rates at the native resolution afforded by the imaging optics and the cameras used. This camera system is based upon the use of short duration light pulses of different wavelengths for illumination of the target and the use of wavelength selective elements in the imaging system to route each particular wavelength of light to a particular camera. A prototype of this camera system comprising four dual-frame cameras synchronized with four dual-cavity lasers producing laser pulses of four different wavelengths is described. The camera is built around a stereo microscope such that it can capture image sequences usable for 2D or 3D digital image correlation. The camera described herein is capable of capturing images at frame rates exceeding 100 MHz. The camera was used for capturing microscopic images of the chip-workpiece interface area during high speed machining. Digital image correlation was performed on the obtained images to map the shear strain rate in the primary-shear-zone during high speed machining.
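    The displacement mapping underlying digital image correlation can be sketched as follows (an illustration only, with assumed array shapes, not the authors' code): the integer-pixel shift of an image subset between two frames is the peak of their cross-correlation, computed here via FFT.

        import numpy as np

        def subset_shift(ref, cur):
            """Return (dy, dx) of `cur` relative to `ref`; both are 2-D arrays of equal shape."""
            ref = ref - ref.mean()
            cur = cur - cur.mean()
            corr = np.fft.irfft2(np.fft.rfft2(cur) * np.conj(np.fft.rfft2(ref)), s=ref.shape)
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            ny, nx = ref.shape
            if dy > ny // 2:
                dy -= ny
            if dx > nx // 2:
                dx -= nx
            return dy, dx

    In practice, sub-pixel refinement (for example, interpolation around the correlation peak) and a per-subset deformation model are added before strains or strain rates are computed.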

  11. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…

  12. Observation of Penetration ``Track'' Formation in Silica Aerogel by High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Okudaira, K.; Hasegawa, S.; Onose, N.; Yano, H.; Tabata, M.; Sugita, S.; Tsuchiyama, A.; Yamagishi, A.; Kawai, H.

    2012-05-01

    In this study, the formation of penetration tracks in aerogel was observed and recorded with a high-speed camera. The excavation process of a single track, a so-called carrot track made by a 500-micron alumina grain, was observed.

  13. The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop

    ERIC Educational Resources Information Center

    Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye

    2012-01-01

    We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…

  14. High-speed video recording system using multiple CCD imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Clements, Reginald M.

    1995-05-01

    This paper describes a fully solid state high speed video recording system. Its principle of operation is based on the use of several independent CCD imagers and an array of liquid crystal light valves that control which imager receives the light from the subject. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording: exposure timing, image digitization and storage, and sequential playback onto a standard video monitor. The system is capable of recording full screen black and white images with spatial resolution similar to that of standard television, at rates of about 10,000 images per second in pulsed illumination mode. We have designed and built two optical configurations for the imager multiplexing system. The first one involves permanently splitting the subject light into multiple channels and placing a liquid crystal shutter in front of each imager. A prototype with three CCD imagers and shutters based on this configuration has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. The second configuration is more light-efficient in that it routes the entire subject light to each individual imager in sequence by using the liquid crystal cells as selectable binary switches. Despite some operational limitations, this method offers a solution when the available light, if subdivided among all the imagers, would not allow a sufficiently short exposure time.

  15. High speed video analysis study of elastic and inelastic collisions

    NASA Astrophysics Data System (ADS)

    Baker, Andrew; Beckey, Jacob; Aravind, Vasudeva; Clarion Team

    We use high-frame-rate video capture to study inelastic and elastic collisions, examining the process of deformation and other energy transformations during the collision. Snapshots are acquired before and after the collision, and the dynamics of the collision are analyzed using the Tracker software. By observing the rapid changes (over a few milliseconds) and the slower changes (over a few seconds) in momentum and kinetic energy during the collision, we study the loss of momentum and kinetic energy over time. Using these data, it should be possible to design experiments that reduce the error involved, helping students build better and more robust models to understand the physical world. We thank the Clarion University undergraduate student grant for financial support of this project.
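    A minimal sketch of the kind of analysis described above (assumed data layout, not the Tracker output format): given tracked positions of two carts sampled at the camera frame rate, the momentum and kinetic energy before and after the collision follow from finite-difference velocities.

        import numpy as np

        def p_and_ke(x, m, fps):
            """x: 1-D positions (m); returns per-frame momentum (kg m/s) and kinetic energy (J)."""
            v = np.gradient(x) * fps          # central-difference velocity
            return m * v, 0.5 * m * v ** 2

        # hypothetical 1000 fps capture of a 0.5 kg cart hitting a 0.3 kg cart at t = 0.1 s
        fps = 1000.0
        t = np.arange(0, 0.2, 1.0 / fps)
        x1 = np.where(t < 0.1, 2.0 * t, 0.2 + 0.5 * (t - 0.1))
        x2 = np.where(t < 0.1, 0.2, 0.2 + 2.5 * (t - 0.1))
        (p1, ke1), (p2, ke2) = p_and_ke(x1, 0.5, fps), p_and_ke(x2, 0.3, fps)
        print("total momentum before/after:", (p1 + p2)[10], (p1 + p2)[-10])
        print("total KE before/after:", (ke1 + ke2)[10], (ke1 + ke2)[-10])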

  16. Perfect Optical Compensator With 1:1 Shutter Ratio Used For High Speed Camera

    NASA Astrophysics Data System (ADS)

    Zhihong, Rong

    1983-03-01

    An optical compensator used for a high-speed camera is described, together with the method of compensation, an analysis of the imaging quality, and experimental results. The compensator consists of pairs of parallel mirrors and can perform perfect compensation even at a 1:1 shutter ratio. Using this compensator, a high-speed camera can be operated with no shutter and can obtain the same image sharpness as an intermittent camera. The advantages of this compensator are as follows: while compensating, the aberration correction of the objective is not degraded; there is no displacement or defocusing between the scanning image and the film at the frame center during compensation, and increasing the exposure angle does not reduce the resolving power; and the compensator can also be used in a projector, in place of the intermittent mechanism, to achieve continuous (non-intermittent) projection without a shutter.

  17. Digital synchroballistic schlieren camera for high-speed photography of bullets and rocket sleds

    NASA Astrophysics Data System (ADS)

    Buckner, Benjamin D.; L'Esperance, Drew

    2013-08-01

    A high-speed digital streak camera designed for simultaneous high-resolution color photography and focusing schlieren imaging is described. The camera uses a computer-controlled galvanometer scanner to achieve synchroballistic imaging through a narrow slit. Full color 20 megapixel images of a rocket sled moving at 480 m/s and of projectiles fired at around 400 m/s were captured, with high-resolution schlieren imaging in the latter cases, using conventional photographic flash illumination. The streak camera can achieve a line rate for streak imaging of up to 2.4 million lines/s.

  18. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role that the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  19. Dynamics at the Holuhraun eruption based on high speed video data analysis

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2016-04-01

    The 2014/2015 Holuhraun eruption was a gas-rich fissure eruption with high fountains. The magma was transported by a horizontal dyke over a distance of 45 km. On the first day the fountains occurred over a distance of 1.5 km and focused at isolated vents during the following days. Based on video analysis of the fountains we obtained a detailed view of the velocities of the eruption, the propagation path of magma, communication between vents, and complexities in the magma paths. We collected videos of the Holuhraun eruption with 2 high-speed cameras and one DSLR camera from 31 August to 4 September 2014, for several hours each day. The fountains at adjacent vents visually seemed to be related on all days. Hence, we calculated the fountain height as a function of time from the video data. All fountains show a pulsating regime with apparent and sporadic alternations in height from metres to several tens of metres. Using a time-dependent cross-correlation approach developed within the FUTUREVOLC project, we are able to compare the pulses in height at adjacent vents. We find that in most cases there is a time lag between the pulses. From the calculated time lags between the pulses and the distance between the correlated vents, we calculate the apparent speed of the magma pulses. The frequencies of the fountains, and the eruption and rest times between the fountains themselves, are quite similar and suggest a connecting and controlling process in the feeder below. At the 2014/2015 Holuhraun eruption (Iceland) we find a significant time shift between the individual pulses of adjacent vents on all days. The mean velocity over all days is 30-40 km/hr, which can be interpreted as a magma flow velocity along the dike at depth. Comparison of the velocities derived from the video data analysis with the magma flow velocity in the dike assumed from seismic data shows very good agreement, implying that surface expressions of pulsating vents provide an insight into the
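    The lag-to-speed conversion described above can be sketched as follows (an illustration, not the FUTUREVOLC toolbox): cross-correlate the fountain-height time series of two adjacent vents, read off the lag of the correlation peak, and divide the vent spacing by that lag.

        import numpy as np

        def lag_and_speed(h1, h2, fps, spacing_m):
            """h1, h2: fountain-height series of two vents; returns (lag in s, apparent speed in m/s)."""
            h1 = h1 - h1.mean()
            h2 = h2 - h2.mean()
            corr = np.correlate(h2, h1, mode="full")
            lag_samples = np.argmax(corr) - (len(h1) - 1)   # > 0 means vent 2 lags vent 1
            lag_s = lag_samples / fps
            return lag_s, (spacing_m / lag_s if lag_s != 0 else float("inf"))

        # e.g. lag_and_speed(h_vent1, h_vent2, fps=50.0, spacing_m=80.0), with hypothetical inputs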

  20. Algorithm-based high-speed video analysis yields new insights into Strombolian eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Taddeucci, Jacopo; Moroni, Monica; Scarlato, Piergiorgio

    2014-05-01

    Strombolian eruptions are characterized by mild, frequent explosions that eject gas and ash- to bomb-sized pyroclasts into the atmosphere. Observation of the products of the explosion is crucial, both for direct hazard assessment and for understanding eruption dynamics. Conventional thermal and optical imaging allows a first characterization of several eruptive processes, but the use of high-speed cameras, with frame rates of 500 Hz or more, makes it possible to follow the particles over multiple frames and to reconstruct their trajectories. However, manual processing of the images is time consuming. Consequently, it allows neither routine monitoring nor averaged statistics, since only relatively few, selected particles (usually the fastest) can be taken into account. In addition, manual processing is quite inefficient for computing the total ejected mass, since it requires counting each individual particle. In this presentation, we discuss the advantages of using numerical methods for the tracking of the particles and the description of the explosion. A toolbox called "Pyroclast Tracking Velocimetry" is used to compute the size and the trajectory of each individual particle. A large variety of parameters can be derived and statistically compared: ejection velocity, ejection angle, deceleration, size, mass, etc. At the scale of the explosion, the total mass, the mean velocity of the particles, and the number and frequency of ejection pulses can be estimated. The study of high-speed videos from 2 vents at Yasur volcano (Vanuatu) and 4 at Stromboli volcano (Italy) reveals that these parameters are positively correlated. As a consequence, the intensity of an explosion can be quantitatively, and operator-independently, described by the total kinetic energy of the bombs, taking into account both the mass and the velocity of the particles. For each vent, a specific range of total kinetic energy can be defined, demonstrating the strong influence of the conduit in
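    The total-kinetic-energy measure mentioned above can be sketched as follows (illustrative only; the bulk density is an assumed value, and the toolbox itself is not reproduced here): with per-particle diameters and speeds from the tracking step, each bomb's mass is estimated from an assumed spherical shape and the kinetic energies are summed.

        import numpy as np

        RHO_BOMB = 2000.0   # kg/m^3, assumed bulk density of the bombs (illustrative)

        def total_kinetic_energy(diam_m, speed_m_s, rho=RHO_BOMB):
            """Sum of 0.5*m*v^2 over tracked particles, assuming spherical bombs."""
            mass = rho * (np.pi / 6.0) * np.asarray(diam_m) ** 3
            return float(np.sum(0.5 * mass * np.asarray(speed_m_s) ** 2))

        # hypothetical tracked sample of three bombs
        print(total_kinetic_energy([0.05, 0.10, 0.20], [60.0, 45.0, 30.0]), "J")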

  1. Monitoring the rotation status of wind turbine blades using high-speed camera system

    NASA Astrophysics Data System (ADS)

    Zhang, Dongsheng; Chen, Jubing; Wang, Qiang; Li, Kai

    2013-06-01

    The measurement of rotating objects is of great significance in engineering applications. In this study, a high-speed dual-camera system based on 3D digital image correlation has been developed in order to monitor the rotation status of wind turbine blades. The system acquires sequential images at a rate of 500 frames per second (fps). An improved Newton-Raphson algorithm is proposed which enables detection of movement, including large rotation and translation, with subpixel precision. Simulation experiments showed that the algorithm robustly identifies the movement if the rotation angle between adjacent images is less than 16 degrees. The subpixel precision is equivalent to that of the normal NR algorithm, i.e., 0.01 pixels in displacement. As laboratory research, the high-speed camera system was used to measure the movement of a wind turbine model driven by an electric fan. In the experiment, the image acquisition rate was set at 387 fps and the cameras were calibrated according to Zhang's method. The blade was coated with randomly distributed speckles, and 7 locations along the radial direction of the blade were selected. The displacement components of these 7 locations were measured with the proposed method. It is concluded that the proposed DIC algorithm is suitable for large-rotation detection and that the high-speed dual-camera system is a promising, economical method for the health diagnosis of wind turbine blades.

  2. Video camera use at nuclear power plants

    SciTech Connect

    Estabrook, M.L.; Langan, M.O.; Owen, D.E. )

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operating principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs.

  3. ULTRASPEC: an electron multiplication CCD camera for very low light level high speed astronomical spectrometry

    NASA Astrophysics Data System (ADS)

    Ives, Derek; Bezawada, Nagaraja; Dhillon, Vik; Marsh, Tom

    2008-07-01

    We present the design, characteristics and astronomical results for ULTRASPEC, a high speed Electron Multiplication CCD (EMCCD) camera using an E2VCCD201 (1K frame transfer device), developed to prove the performance of this new optical detector technology in astronomical spectrometry, particularly in the high speed, low light level regime. We present both modelled and real data for these detectors with particular regard to avalanche gain and clock induced charge (CIC). We present first light results from the camera as used on the EFOSC-2 instrument at the ESO 3.6 metre telescope in La Silla. We also present the design for a proposed new 4Kx2K frame transfer EMCCD.
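    For context, the regime ULTRASPEC targets can be illustrated with the usual EMCCD signal-to-noise model (a generic illustration, not the instrument's calibration): the multiplication register suppresses the effective read noise by the avalanche gain G but adds an excess-noise factor F of about sqrt(2), and clock-induced charge (CIC) sets an additional floor.

        import numpy as np

        def emccd_snr(signal_e, sky_e, cic_e, read_e, gain):
            """Per-pixel, per-frame SNR; all inputs in electrons."""
            F2 = 2.0                                   # excess-noise factor squared at high gain
            noise = np.sqrt(F2 * (signal_e + sky_e + cic_e) + (read_e / gain) ** 2)
            return signal_e / noise

        # e.g. a faint spectral bin: 2 e- signal, 0.5 e- sky, 0.05 e- CIC,
        # 10 e- read noise, avalanche gain 1000 (all values assumed)
        print(emccd_snr(2.0, 0.5, 0.05, 10.0, 1000.0))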

  4. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 micro sec, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
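    The binning and precision steps described above can be sketched as follows (an illustration with assumed array shapes and scales, not the authors' processing code): each frame is digitally binned row-wise, the column centroid of the tagged line is located in both exposures, and the precision is the standard deviation of the resulting single-shot velocities.

        import numpy as np

        def binned_centroid_velocity(frame1, frame2, dt_s, m_per_px, bin_rows=8):
            """Velocity of the tagged line between two exposures separated by dt_s."""
            def centroid(img):
                ny = (img.shape[0] // bin_rows) * bin_rows
                binned = img[:ny].reshape(-1, bin_rows, img.shape[1]).sum(axis=1)
                profile = binned.sum(axis=0)                 # column-wise signal
                cols = np.arange(profile.size)
                return np.sum(cols * profile) / np.sum(profile)
            return (centroid(frame2) - centroid(frame1)) * m_per_px / dt_s

        # precision = standard deviation over many single-shot measurements, e.g.
        # velocities = [binned_centroid_velocity(a, b, 65e-6, 40e-6) for a, b in shots]
        # print(np.std(velocities))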

  6. Invited Article: Quantitative imaging of explosions with high-speed cameras.

    PubMed

    McNesby, Kevin L; Homan, Barrie E; Benjamin, Richard A; Boyle, Vincent M; Densmore, John M; Biss, Matthew M

    2016-05-01

    The techniques presented in this paper allow for mapping of temperature, pressure, chemical species, and energy deposition during and following detonations of explosives, using high speed cameras as the main diagnostic tool. This work provides measurement in the explosive near to far-field (0-500 charge diameters) of surface temperatures, peak air-shock pressures, some chemical species signatures, shock energy deposition, and air shock formation. PMID:27250366

  7. A high speed camera with auto adjustable ROI for product's outline dimension measurement

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Wei, Ping; Ke, Jun; Gao, Jingjing

    2014-11-01

    Currently, most domestic factories still inspect machine arbors manually to decide whether they meet industry standards. This method is costly and inefficient, and it is easy to misjudge qualified arbors or to miss unqualified ones, which seriously affects the factories' efficiency and credibility. In this paper, we design a high-speed camera system with an automatically adjustable ROI for measuring the outline dimensions of machine arbors. The entire system includes an illumination part, a camera part, a mechanical structure part, and an FPGA-based signal processing part. The system will help factories realize automatic arbor measurement, improve their efficiency, and reduce their cost.

  8. High-speed camera analysis for nanoparticles produced by using a pulsed wire-discharge method

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hwan; Kim, Dae Sung; Ryu, Bong Ki; Suematsu, Hisayuki; Tanaka, Kenta

    2016-07-01

    We investigated the performance of a high-speed camera and the nanoparticle size distribution to quantify the mechanism of nanoparticle formation in a pulsed wire discharge (PWD) experiment. The Sn-58Bi alloy wire, 0.5 mm in diameter and 32 mm long, was prepared in the PWD chamber, and the evaporation and explosion process was observed with a high-speed camera. In order to vary the conditions and analyze the mechanisms of nanoparticle synthesis in the PWD, we changed the pressure of the N2 gas in the chamber from 25 to 75 kPa. To synthesize nanoscale particles, we fixed the charging voltage at 6 kV, and the high-speed camera captured pictures at 22,500 frames per second. The experimental results show that the electrical explosion process at different N2 gas pressures can be characterized by the explosion's duration and intensity. The experiments at the lowest pressure exhibited a longer explosion duration and a greater intensity. Also, at low pressure, very small nanoparticles with good dispersion were produced.

  9. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    NASA Technical Reports Server (NTRS)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
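    The frequency-domain part of such image processing can be sketched as follows (illustrative only; array names and the ROI layout are assumptions): average the brightness of a specified target area in each frame, then take the dominant peak of its spectrum as the cavitation instability frequency, which can be compared against the dynamic pressure spectra.

        import numpy as np

        def instability_frequency(frames, fps, roi):
            """frames: (n_frames, ny, nx) stack; roi: (y0, y1, x0, x1) target area."""
            y0, y1, x0, x1 = roi
            sig = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
            sig = sig - sig.mean()
            spec = np.abs(np.fft.rfft(sig))
            freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
            return freqs[np.argmax(spec[1:]) + 1]      # skip the DC bin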

  10. Development of a high-speed CT imaging system using EMCCD camera

    NASA Astrophysics Data System (ADS)

    Thacker, Samta C.; Yang, Kai; Packard, Nathan; Gaysinskiy, Valeriy; Burkett, George; Miller, Stuart; Boone, John M.; Nagarkar, Vivek

    2009-02-01

    The limitations of current CCD-based microCT X-ray imaging systems arise from two important factors. First, readout speeds are curtailed in order to minimize system read noise, which increases significantly with increasing readout rates. Second, the afterglow associated with commercial scintillator films can introduce image lag, leading to substantial artifacts in reconstructed images, especially when the detector is operated at several hundred frames/second (fps). For high speed imaging systems, high-speed readout electronics and fast scintillator films are required. This paper presents an approach to developing a high-speed CT detector based on a novel, back-thinned electron-multiplying CCD (EMCCD) coupled to various bright, high resolution, low afterglow films. The EMCCD camera, when operated in its binned mode, is capable of acquiring data at up to 300 fps with reduced imaging area. CsI:Tl,Eu and ZnSe:Te films, recently fabricated at RMD, apart from being bright, showed very good afterglow properties, favorable for high-speed imaging. Since ZnSe:Te films were brighter than CsI:Tl,Eu films, for preliminary experiments a ZnSe:Te film was coupled to an EMCCD camera at UC Davis Medical Center. A high-throughput tungsten anode X-ray generator was used, as the X-ray fluence from a mini- or micro-focus source would be insufficient to achieve high-speed imaging. A euthanized mouse held in a glass tube was rotated 360 degrees in less than 3 seconds, while radiographic images were recorded at various readout rates (up to 300 fps); images were reconstructed using a conventional Feldkamp cone-beam reconstruction algorithm. We have found that this system allows volumetric CT imaging of small animals in approximately two seconds at ~110 to 190 μm resolution, compared to several minutes at 160 μm resolution needed for the best current systems.

  11. Measuring droplet fall speed with a high-speed camera: indoor accuracy and potential outdoor applications

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Ku; Hsieh, Pei-Rong; Yuter, Sandra E.; Cheng, Lin-Wen; Tsai, Chia-Lun; Lin, Che-Yu; Chen, Ying

    2016-04-01

    Acquisition of accurate raindrop fall speed measurements outdoors in natural rain by means of moderate-cost and easy-to-use devices represents a long-standing and challenging issue in the meteorological community. Feasibility experiments were conducted to evaluate the indoor accuracy of fall speed measurements made with a high-speed camera and to evaluate its capability for outdoor applications. An indoor experiment operating in calm conditions showed that the high-speed imaging technique can provide fall speed measurements with a mean error of 4.1-9.7 % compared to Gunn and Kinzer's empirical fall-speed-size relationship for typical sizes of rain and drizzle drops. Results obtained using the same apparatus outside in summer afternoon showers indicated larger positive and negative velocity deviations compared to the indoor measurements. These observed deviations suggest that ambient flow and turbulence play a role in modifying drop fall speeds which can be quantified with future outdoor high-speed camera measurements. Because the fall speed measurements, as presented in this article, are analyzed on the basis of tracking individual, specific raindrops, sampling uncertainties commonly found in the widely adopted optical disdrometers can be significantly mitigated.
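    The basic fall-speed computation implied above is simple enough to sketch (assumed variable names and scale, for illustration only): track the vertical pixel position of one drop across frames and convert the per-frame displacement to a speed using the frame rate and image scale.

        import numpy as np

        def fall_speed(y_px, fps, mm_per_px):
            """y_px: per-frame vertical positions of one tracked drop (pixels, increasing downward)."""
            dy_m = np.diff(y_px) * mm_per_px / 1000.0   # metres per frame interval
            return float(np.mean(dy_m) * fps)           # m/s

        # e.g. a drop falling about 4 m/s filmed at 1200 fps with a 0.2 mm/pixel scale (assumed)
        print(fall_speed([100.0, 116.7, 133.3, 150.0], 1200, 0.2))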

  12. Development of a high-speed H-alpha camera system for the observation of rapid fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.; Chen, P. C.

    1988-01-01

    A solid-state digital camera was developed for obtaining H alpha images of solar flares with 0.1 s time resolution. Beginning in the summer of 1988, this system will be operated in conjunction with SMM's hard X-ray burst spectrometer (HXRBS). Important electron time-of-flight effects that are crucial for determining the flare energy release processes should be detectable with these combined H alpha and hard X-ray observations. Charge-injection device (CID) cameras provide 128 x 128 pixel images simultaneously in the H alpha blue wing, line center, and red wing, or another wavelength of interest. The data recording system employs a microprocessor-controlled electronic interface between each camera and a digital processor board that encodes the data into a serial bitstream for continuous recording by a standard video cassette recorder. Only a small fraction of the data will be permanently archived, through a direct memory access interface onto a VAX-750 computer. In addition to correlations with hard X-ray data, observations from the high-speed H alpha camera will also be correlated with optical and microwave data and with data from future MAX 1991 campaigns. Whether the recorded optical flashes are simultaneous with X-ray peaks to within 0.1 s, are delayed by tenths of seconds, or are even undetectable, the results will have implications for the validity of both thermal and nonthermal models of hard X-ray production.

  13. The development of a high-speed 100 fps CCD camera

    SciTech Connect

    Hoffberg, M.; Laird, R.; Lenkzsus, F. Liu, Chuande; Rodricks, B.; Gelbart, A.

    1996-09-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512x512 pixel CCD as its sensor, which is read out through two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized to 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data-acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to PC disk storage. The uncooled CCD can be used either with lenses for visible-light imaging or with a phosphor screen for x-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic face plate for high-resolution, high-speed x-ray imaging. The camera is controlled through a custom event-driven, user-friendly Windows package. The pixel clock speed can be changed from 1 MHz to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper will describe the electronics, software, and characterizations that have been performed using both visible and x-ray photons.

  14. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images in the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction about 10 times faster than traditional template matching, without requiring any target panel to be installed on the structure. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals. PMID:26406525
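    A sketch in the spirit of the method described above (OpenCV-based and illustrative only, not the authors' code): normalized cross-correlation is restricted to a local search window around the previous match, and the correlation peak is refined to sub-pixel precision with a parabolic fit.

        import cv2
        import numpy as np

        def track_once(frame, template, prev_xy, search=20):
            """Return the refined (x, y) top-left position of `template` in `frame` near `prev_xy`."""
            h, w = template.shape
            x0 = max(prev_xy[0] - search, 0)
            y0 = max(prev_xy[1] - search, 0)
            window = frame[y0:y0 + h + 2 * search, x0:x0 + w + 2 * search]
            res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (mx, my) = cv2.minMaxLoc(res)

            def subpix(r, i, j, axis):
                # 1-D parabolic fit through the peak and its two neighbours
                if axis == 0:
                    left, mid, right = r[i - 1, j], r[i, j], r[i + 1, j]
                else:
                    left, mid, right = r[i, j - 1], r[i, j], r[i, j + 1]
                return 0.5 * (left - right) / (left - 2 * mid + right)

            dy = subpix(res, my, mx, 0) if 0 < my < res.shape[0] - 1 else 0.0
            dx = subpix(res, my, mx, 1) if 0 < mx < res.shape[1] - 1 else 0.0
            return x0 + mx + dx, y0 + my + dy

    Restricting the correlation to a small search window, rather than the full frame, is what gives the order-of-magnitude speed-up mentioned above.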

  15. High-speed video observations of natural cloud-to-ground lightning leaders - A statistical analysis

    NASA Astrophysics Data System (ADS)

    Campos, Leandro Z. S.; Saba, Marcelo M. F.; Warner, Tom A.; Pinto, Osmar; Krider, E. Philip; Orville, Richard E.

    2014-01-01

    The aim of this investigation is to analyze the phenomenology of positive and negative (stepped and dart) leaders observed in natural lightning from digital high-speed video recordings. For that purpose we used four different high-speed cameras operating at frame rates ranging from 1000 to 11,800 frames per second, in addition to data from lightning locating systems (BrasilDat and NLDN). All the recordings were GPS time-stamped in order to avoid ambiguities in the analysis, allowing us to estimate the peak current of, and the distance to, each flash that was detected by one of the lightning locating systems. The data collection was done at different sites in southern and southeastern Brazil, southern Arizona, and South Dakota, USA. A total of 62 negative stepped leaders, 76 negative dart leaders and 29 positive leaders were recorded and analyzed. From these data it was possible to calculate the two-dimensional speed of each observed leader, allowing us to obtain its statistical distribution and evaluate whether or not it is related to other characteristics of the associated flash. In the analyzed dataset, the speeds of positive leaders and negative dart leaders follow a lognormal distribution at the 0.05 level (according to the Shapiro-Wilk test). We have also analyzed, through two different methodologies, how the two-dimensional leader speeds change as they approach ground. The speed of positive leaders showed a clear tendency to increase, while negative dart leaders tend to become slower as they approach ground. Negative stepped leaders, on the other hand, can either accelerate as they get closer to ground or show irregular behavior (with no clear tendency) throughout their entire development. For all three leader types no correlation was found between the return stroke peak current and the average speed of the leader responsible for its initiation. We did find, however, that dart leaders preceded by longer interstroke intervals cannot present

  16. Inexpensive range camera operating at video speed.

    PubMed

    Kramer, J; Seitz, P; Baltes, H

    1993-05-01

    An optoelectronic device has been developed and built that acquires and displays the range data of an object surface in space in video real time. The recovery of depth is performed with active triangulation. A galvanometer scanner system sweeps a sheet of light across the object at a video field rate of 50 Hz. High-speed signal processing is achieved through the use of a special optical sensor and hardware implementation of the simple electronic-processing steps. Fifty range maps are generated per second and converted into a European standard video signal where the depth is encoded in gray levels or color. The image resolution currently is 128 x 500 pixels with a depth accuracy of 1.5% of the depth range. The present setup uses a 500-mW diode laser for the generation of the light sheet. A 45-mm imaging lens covers a measurement volume of 93 mm x 61 mm x 63 mm at a medium distance of 250 mm from the camera, but this can easily be adapted to other dimensions. PMID:20820391

  17. A novel ultra-high speed camera for digital image processing applications

    NASA Astrophysics Data System (ADS)

    Hijazi, A.; Madhavan, V.

    2008-08-01

    Multi-channel gated-intensified cameras are commonly used for capturing images at ultra-high frame rates. The use of image intensifiers reduces the image resolution and increases the error in applications requiring high-quality images, such as digital image correlation. We report the development of a new type of non-intensified multi-channel camera system that permits recording of image sequences at ultra-high frame rates at the native resolution afforded by the imaging optics and the cameras used. This camera system is based upon the concept of using a sequence of short-duration light pulses of different wavelengths for illumination and using wavelength selective elements in the imaging system to route each particular wavelength of light to a particular camera. As such, the duration of the light pulses controls the exposure time and the timing of the light pulses controls the interframe time. A prototype camera system built according to this concept comprises four dual-frame cameras synchronized with four dual-cavity pulsed lasers producing 5 ns pulses in four different wavelengths. The prototype is capable of recording four-frame full-resolution image sequences at frame rates up to 200 MHz and eight-frame image sequences at frame rates up to 8 MHz. This system is built around a stereo microscope to capture stereoscopic image sequences usable for 3D digital image correlation. The camera system is used for imaging the chip-workpiece interface area during high speed machining, and the images are used to map the strain rate in the primary shear zone.

  18. In-Situ Observation of Horizontal Centrifugal Casting using a High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Esaka, Hisao; Kawai, Kohsuke; Kaneko, Hiroshi; Shinozuka, Kei

    2012-07-01

    In order to understand the solidification process of horizontal centrifugal casting, experimental equipment for in-situ observation using a transparent organic substance has been constructed. A succinonitrile-1 mass% water alloy was filled into a round glass cell, and the glass cell was completely sealed. To observe the movement of equiaxed grains more clearly and to understand the effect of the movement of the free surface, a high-speed camera has been installed on the equipment. The most advantageous point of this equipment is that the camera rotates with the mold, so that one can observe the same location of the glass cell. Because the recording rate could be increased up to 250 frames per second, the quality of the movie was dramatically improved, which made it easier and more precise to follow a given equiaxed grain. The amplitude of oscillation of an equiaxed grain (At) decreased as solidification proceeded.

  19. Low cost alternative of high speed visible light camera for tokamak experiments

    SciTech Connect

    Odstrcil, T.; Grover, O.; Svoboda, V.; Odstrcil, M.; Duran, I.; Mlynar, J.

    2012-10-15

    We present the design, analysis, and performance evaluation of a new, low-cost, high-speed visible-light camera diagnostic system for tokamak experiments. The system is based on the Casio EX-F1 camera, with an overall price of approximately one thousand USD. The achieved temporal resolution is up to 40 kHz. This new diagnostic was successfully implemented and tested at the university tokamak GOLEM (R = 0.4 m, a = 0.085 m, B_T < 0.5 T, I_p < 4 kA). One possible application of this new diagnostic at GOLEM, tomographic reconstruction for estimation of plasma position and emissivity, is discussed in detail.

  20. A novel compact high speed x-ray streak camera (invited)

    SciTech Connect

    Hares, J. D.; Dymoke-Bradshaw, A. K. L.

    2008-10-15

    Conventional in-line high speed streak cameras have fundamental issues when their performance is extended below a picosecond. The transit time spread caused by both the spread in the photoelectron (PE) "birth" energy and space charge effects causes significant electron pulse broadening along the axis of the streak camera and limits the time resolution. Also it is difficult to generate a sufficiently large sweep speed. This paper describes a new instrument in which the extraction electrostatic field at the photocathode increases with time, converting time to PE energy. A uniform magnetic field is used to measure the PE energy, and thus time, and also focuses in one dimension. Design calculations are presented for the factors limiting the time resolution. With our design, subpicosecond resolution with high dynamic range is expected.

  1. Estimation of Rotational Velocity of Baseball Using High-Speed Camera Movies

    NASA Astrophysics Data System (ADS)

    Inoue, Takuya; Uematsu, Yuko; Saito, Hideo

    Movies can be used to analyze a player's performance and improve his/her skills. In the case of baseball, the pitching is recorded using a high-speed camera, and the recorded images are used to improve the pitching skills of the players. In this paper, we present a method for estimating the rotational velocity of a baseball on the basis of movies recorded by high-speed cameras. Unlike previous methods, we consider the original seam pattern of the ball seen in the input movie and identify the corresponding image from a database of images by adopting the parametric eigenspace method. These database images are CG images. The ball's posture can be determined on the basis of the rotational parameters. In the proposed method, the symmetric property of the ball is also taken into consideration, and time continuity is used to determine the ball's posture. In the experiments, we use the proposed method to estimate the rotational velocity of a baseball from real movies and from movies consisting of CG images of the baseball. The results of both experiments show that our method can estimate the ball's rotation accurately.

  2. Photography of the commutation spark using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Hanazawa, Tamio; Egashira, Torao; Tanaka, Yasuhiro; Egoshi, Jun

    1997-12-01

    In the single-phase AC commutator motor (known as a universal motor), which is widely used in cleaners, electrical machines, etc., commutation sparks cause problems such as brush wear and electrical noise. We have therefore attempted to use a high-speed camera to elucidate the commutation-spark mechanism visually. The high-speed camera that we used is capable of photographing at 5,000 - 20,000,000 frames/s. The trigger can be selected on the operation unit or provided by an external triggering signal. In this paper, we propose an external trigger method that involves opening a hole several millimeters across in the motor and using argon laser light, so that the commutator segments can be photographed in a known position; we then conducted the experiment. This method enabled us to photograph the motor's commutator segments from any position, and we were able to confirm spark generation at every other commutator segment. Furthermore, after confirming the spark-generation position on the commutator segments, we increased the photographing speed to obtain more detailed photography of the moment of spark generation; the results are reported here.

  3. Experimental evaluation of spot dancing of laser beam in atmospheric propagation using high-speed camera

    NASA Astrophysics Data System (ADS)

    Nakamura, Moriya; Akiba, Makoto; Kuri, Toshiaki; Ohtani, Naoki

    2003-04-01

    We investigated the frequency spectra and two-dimensional (2-D) distributions of the beam-centroid fluctuation created by spot dancing, which are needed to optimize the design of the tracking system, using a novel spot-dancing measurement method that suppresses the effect of building and/or transmitter vibration. In this method, two laser beams are propagated apart from each other and observed simultaneously using high-speed cameras. The position of each beam centroid is obtained using an image processing system. The effect of transmitter vibration is suppressed by taking the difference between the 2-D coordinates of the beam-centroid positions. The frequency spectra are calculated using the fast Fourier transform. The beam spots of two HeNe lasers propagated over 100 m (indoor) and 750 m (open-air) were observed using a high-speed camera at 10,000 frames/s. Frequency spectra of the beam-centroid variance up to 5 kHz could be observed. We also measured the variations of spot dancing on two days when the sunshine rates were 100% and 0%.
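    The processing chain described above can be sketched as follows (illustrative array names and shapes only): compute the centroid of each beam spot per frame, subtract the two centroid tracks to cancel common transmitter and building vibration, and take the spectrum of the difference.

        import numpy as np

        def centroid(img):
            ys, xs = np.indices(img.shape)
            s = img.sum()
            return np.array([(xs * img).sum() / s, (ys * img).sum() / s])

        def differential_spectrum(frames_a, frames_b, fps):
            """frames_a, frames_b: (n, ny, nx) image stacks of the two beam spots."""
            diff = np.array([centroid(a) - centroid(b) for a, b in zip(frames_a, frames_b)])
            diff = diff - diff.mean(axis=0)
            freqs = np.fft.rfftfreq(diff.shape[0], d=1.0 / fps)
            return freqs, np.abs(np.fft.rfft(diff[:, 0])), np.abs(np.fft.rfft(diff[:, 1]))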

  4. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
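    The triangulation step at the heart of such a camera-laser-projector method can be sketched as follows (a generic illustration under assumed calibration data, not the authors' pipeline): each imaged laser point defines a ray from the camera centre, and intersecting that ray with the calibrated plane of the corresponding laser beam yields the 3-D surface point.

        import numpy as np

        def laser_point_3d(pixel_xy, K, plane_n, plane_d):
            """pixel_xy: (u, v); K: 3x3 intrinsics; laser plane n.X = d in camera coordinates."""
            uv1 = np.array([pixel_xy[0], pixel_xy[1], 1.0])
            ray = np.linalg.solve(K, uv1)            # back-projected ray direction
            t = plane_d / np.dot(plane_n, ray)       # ray origin is the camera centre (0, 0, 0)
            return t * ray                           # 3-D point in camera coordinates

        # assumed calibration values, for illustration only
        K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 360.0], [0.0, 0.0, 1.0]])
        n, d = np.array([0.0, 0.5, 0.866]), 40.0     # a laser plane tilted from the optical axis (mm)
        print(laser_point_3d((700.0, 400.0), K, n, d))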

  5. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.

  6. High-speed video analysis system using multiple shuttered charge-coupled device imagers and digital storage

    NASA Astrophysics Data System (ADS)

    Racca, Roberto G.; Stephenson, Owen; Clements, Reginald M.

    1992-06-01

    A fully solid state high-speed video analysis system is presented. It is based on the use of several independent charge-coupled device (CCD) imagers, each shuttered by a liquid crystal light valve. The imagers are exposed in rapid succession and are then read out sequentially at standard video rate into digital memory, generating a time-resolved sequence with as many frames as there are imagers. This design allows the use of inexpensive, consumer-grade camera modules and electronics. A microprocessor-based controller, designed to accept up to ten imagers, handles all phases of the recording from exposure timing to image capture and storage to playback on a standard video monitor. A prototype with three CCD imagers and shutters has been built. It has allowed successful three-image video recordings of phenomena such as the action of an air rifle pellet shattering a piece of glass, using a high-intensity pulsed light emitting diode as the light source. For slower phenomena, recordings in continuous light are also possible by using the shutters themselves to control the exposure time. The system records full-screen black and white images with spatial resolution approaching that of standard television, at rates up to 5000 images per second.

  7. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed; details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  8. High-speed holographic correlation system for video identification on the internet

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, for search purposes, and for removing illegal material from the Internet. By combining a high-speed correlation engine and web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which makes it possible to easily replace the digital authorization server in FReCs with optical correlation.

  9. High-speed camera with real time processing for frequency domain imaging

    PubMed Central

    Shia, Victor; Watt, David; Faris, Gregory W.

    2011-01-01

    We describe a high-speed camera system for frequency domain imaging suitable for applications such as in vivo diffuse optical imaging and fluorescence lifetime imaging. 14-bit images are acquired at 2 gigapixels per second and analyzed with real-time pipeline processing using field programmable gate arrays (FPGAs). Performance of the camera system has been tested both for RF-modulated laser imaging in combination with a gain-modulated image intensifier and a simpler system based upon an LED light source. System amplitude and phase noise are measured and compared against theoretical expressions in the shot noise limit presented for different frequency domain configurations. We show the camera itself is capable of shot noise limited performance for amplitude and phase in as little as 3 ms, and when used in combination with the intensifier the noise levels are nearly shot noise limited. The best phase noise in a single pixel is 0.04 degrees for a 1 s integration time. PMID:21750770
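    The per-pixel computation in frequency-domain imaging can be sketched as follows (a generic homodyne illustration, not the FPGA pipeline): with N frames sampled uniformly over one modulation period, the amplitude and phase follow from the discrete in-phase and quadrature sums.

        import numpy as np

        def demodulate(frames):
            """frames: (N, ny, nx) stack covering exactly one modulation period."""
            n = frames.shape[0]
            phases = 2 * np.pi * np.arange(n) / n
            i_sum = np.tensordot(np.cos(phases), frames, axes=(0, 0))
            q_sum = np.tensordot(np.sin(phases), frames, axes=(0, 0))
            amplitude = (2.0 / n) * np.hypot(i_sum, q_sum)
            phase = np.arctan2(q_sum, i_sum)
            dc = frames.mean(axis=0)
            return dc, amplitude, phase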

  10. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Jacobson, B. O.; Hamrock, B. J.

    1982-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation.

  11. A rapid response 64-channel photomultiplier tube camera for high-speed flow velocimetry

    NASA Astrophysics Data System (ADS)

    Ecker, Tobias; Lowe, K. Todd; Ng, Wing F.

    2015-02-01

    In this technical design note, the development of a rapid response photomultiplier tube camera, leveraging field-programmable gate arrays (FPGAs) for high-speed flow velocimetry at up to 10 MHz, is described. Technically relevant flows, for example, supersonic inlets and exhaust jets, have time scales on the order of microseconds, and their experimental study requires resolution of these timescales for fundamental insight. The inherent rapid response time attributes of a 64-channel photomultiplier array were coupled with two-stage amplifiers on each anode, and the signals were acquired using an FPGA-based system. The FPGA-based approach allows high data acquisition rates with many channels as well as on-the-fly preprocessing techniques. Results are presented for optical velocimetry in supersonic free jet flows, demonstrating the value of the technique in the chosen application example for determining supersonic shear layer velocity correlation maps.

  12. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.

  13. Study of cavitation bubble dynamics during Ho:YAG laser lithotripsy by high-speed camera

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yu, Honggang; Devincentis, Dennis

    2016-02-01

    Although laser lithotripsy is now the preferred treatment option for urolithiasis, the mechanism of laser pulse induced calculus damage is still not fully understood. This is because the process of laser pulse induced calculus damage involves quite a few physical and chemical processes and their time-scales are very short (down to the sub-microsecond level). For laser lithotripsy, the laser pulse induced impact by energy flow can be summarized as: Photon energy in the laser pulse --> photon absorption generated heat in the water liquid and vapor (superheated water or plasma effect) --> shock wave (bow shock, acoustic wave) --> cavitation bubble dynamics (oscillation, center of bubble movement, superheated water at collapse, sonoluminescence) --> calculus damage and motion (calculus heat up, spallation/melt of stone, breaking of mechanical/chemical bonds, debris ejection, and retropulsion of the remaining calculus body). Cavitation bubble dynamics is the center piece of the physical processes that links the whole energy flow chain from laser pulse to calculus damage. In this study, cavitation bubble dynamics was investigated by a high-speed camera and a needle hydrophone. A commercialized, pulsed Ho:YAG laser at 2.1 μm, StoneLight™ 30, with pulse energy from 0.5 J up to 3.0 J, and pulse width from 150 μs up to 800 μs, was used as the laser pulse source. The fiber used in the investigation is a SureFlex™ fiber, Model S-LLF365, with a 365 μm core diameter. A high-speed camera with a frame rate of up to 1 million fps was used in this study. The results revealed the cavitation bubble dynamics (oscillation and center of bubble movement) by laser pulse at different energy levels and pulse widths. More detailed investigation of bubble dynamics by different types of laser, and of the relationship between cavitation bubble dynamics and calculus damage (fragmentation/dusting), will be conducted as a future study.

  14. The Eye, Film, And Video In High-Speed Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hyzer, William G.

    1987-09-01

    The unaided human eye with its inherent limitations serves us well in the examination of most large-scale, slow-moving, natural and man-made phenomena, but constraints imposed by inertial factors in the visual mechanism severely limit our ability to observe fast-moving and short-duration events. The introduction of high-speed photography (c. 1851) and videography (c. 1970) served to stretch the temporal limits of human perception by several orders of magnitude so critical analysis could be performed on a wide range of rapidly occurring events of scientific, technological, industrial, and educational interest. The preferential selection of eye, film, or video imagery in fulfilling particular motion analysis requirements is determined largely by the comparative attributes and limitations of these methods. The choice of either film or video does not necessarily eliminate the eye, because it usually continues as a vital link in the analytical chain. The important characteristics of the eye, film, and video imagery in high-speed motion analysis are discussed with particular reference to fields of application which include biomechanics, ballistics, machine design, mechanics of materials, sports analysis, medicine, production engineering, and industrial trouble-shooting.

  15. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  16. Study of jet fluctuations in DC plasma torch using high speed camera

    NASA Astrophysics Data System (ADS)

    Tiwari, Nirupama; Sahasrabudhe, S. N.; Joshi, N. K.; Das, A. K.

    2010-02-01

    The power supplies used for plasma torches are usually SCR controlled and have a large ripple factor. This is due to the fact that the currents in the torch are of the order of hundreds of amperes, which prohibits effective filtering of the ripple. The voltage and current vary with the ripple in the power supply and cause the plasma jet to fluctuate. To record these fluctuations, the jet coming out of a D.C. plasma torch operating at atmospheric pressure was imaged using a high-speed camera at a rate of 3000 frames per second. Light emitted from a well defined zone in the plume was collected using an optical fibre and a photomultiplier tube (PMT). Current, voltage and PMT signals were recorded simultaneously using a digital storage oscilloscope (DSO). The fast camera recorded the images for 25 ms and the starting pulse from the camera was used to trigger the DSO for recording the voltage, current and optical signals. Each image of the plume recorded by the fast camera was correlated with the magnitude of the instantaneous voltage, current and optical signal. It was observed that the luminosity and length of the plume vary with the product of instantaneous voltage and current, i.e., the electrical power fed to the plasma torch. The experimental runs were taken with different gas flow rates and electrical powers. The images were analyzed using image processing software and constant-intensity contours of the images were determined. Further analysis of the images can provide a great deal of information about the dynamics of the jet.
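
    The frame-by-frame comparison described above amounts to resampling the instantaneous electrical power V(t)·I(t) onto the camera's frame timestamps and correlating it with a frame-integrated luminosity. A minimal numpy sketch of that analysis on synthetic records follows; the ripple frequency, voltage and current levels, and array names are illustrative assumptions, not the experimental values.

```python
import numpy as np

# Sketch of the correlation analysis described above: compare frame-integrated
# plume luminosity from the high-speed camera with the instantaneous electrical
# power V(t)*I(t) recorded on the DSO. The synthetic data below (ripple
# frequency, voltage and current levels) are illustrative only.

def correlate_luminosity_with_power(frames, frame_times, t, voltage, current):
    """frames: (N, H, W) images; t, voltage, current: oscilloscope records."""
    luminosity = frames.reshape(frames.shape[0], -1).sum(axis=1)
    power = voltage * current
    power_at_frames = np.interp(frame_times, t, power)   # resample onto frame times
    return np.corrcoef(luminosity, power_at_frames)[0, 1]  # Pearson correlation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fps, n_frames = 3000, 75                              # 25 ms at 3000 fps, as in the abstract
    frame_times = np.arange(n_frames) / fps
    t = np.linspace(0.0, 0.025, 5000)
    ripple = 1.0 + 0.2 * np.sin(2 * np.pi * 300 * t)      # hypothetical 300 Hz ripple
    voltage, current = 60.0 * ripple, 400.0 * ripple
    lum_trace = np.interp(frame_times, t, voltage * current)
    frames = lum_trace[:, None, None] + rng.normal(0, 50, (n_frames, 8, 8))
    print("correlation:", correlate_luminosity_with_power(frames, frame_times, t, voltage, current))
```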

  17. Use of High-Speed X ray and Video to Analyze Distal Radius Fracture Pathomechanics.

    PubMed

    Gutowski, Christina; Darvish, Kurosh; Liss, Frederic E; Ilyas, Asif M; Jones, Christopher M

    2015-10-01

    The purpose of this study is to investigate the failure sequence of the distal radius during a simulated fall onto an outstretched hand using cadaver forearms and high-speed X ray and video systems. This apparatus records the beginning and propagation of bony failure, ultimately resulting in distal radius or forearm fracture. The effects of 3 different wrist guard designs are investigated using this system. Serving as a proof-of-concept analysis, this study supports this imaging technique to be used in larger studies of orthopedic trauma and protective devices and specifically for distal radius fractures. PMID:26410645

  18. Motion analysis of mechanical heart valve prosthesis utilizing high-speed video

    NASA Astrophysics Data System (ADS)

    Adlparvar, Payam; Guo, George; Kingsbury, Chris

    1993-01-01

    The Edwards-Duromedics (ED) mechanical heart valve prosthesis is of a bileaflet design, incorporating unique design features that distinguish its performance with respect to other mechanical valves of similar type. Leaflet motion of mechanical heart valves, particularly during closure, is related to valve durability, valve sounds and the efficiency of the cardiac output. Modifications to the ED valve have resulted in significant improvements with respect to leaflet motion. In this study a high-speed video system was used to monitor the leaflet motion of the valve, and to compare the performance of the Modified Specification to that of the Original Specification using a St. Jude Medical as a control valve.

  19. Time-Correlated High-Speed Video and Lightning Mapping Array Results For Triggered Lightning Flashes

    NASA Astrophysics Data System (ADS)

    Eastvedt, E. M.; Eack, K.; Edens, H. E.; Aulich, G. D.; Hunyady, S.; Winn, W. P.; Murray, C.

    2009-12-01

    Several lightning flashes triggered by the rocket-and-wire technique at Langmuir Laboratory's Kiva facility on South Baldy (approximately 3300 meters above sea level) were captured on high-speed video during the summers of 2008 and 2009. These triggered flashes were also observed with Langmuir Laboratory's Lightning Mapping Array (LMA), a 3-D VHF time-of-arrival system. We analyzed nine flashes (obtained in four different storms) for which the electric field at ground was positive (foul-weather). Each was initiated by an upward positive leader that propagated into the cloud. In all cases observed, the leader exhibited upward branching, and most of the flashes had multiple return strokes.

  20. Temperature measurement of mineral melt by means of a high-speed camera.

    PubMed

    Bizjan, Benjamin; Širok, Brane; Drnovšek, Janko; Pušnik, Igor

    2015-09-10

    This paper presents a temperature evaluation method by means of high-speed, visible light digital camera visualization and its application to the mineral wool production process. The proposed method adequately resolves the temperature-related requirements in mineral wool production and significantly improves the spatial and temporal resolution of measured temperature fields. Additionally, it is very cost effective in comparison with other non-contact temperature field measurement methods, such as infrared thermometry. Using the proposed method for temperatures between 800°C and 1500°C, the available temperature measurement range is approximately 300 K with a single temperature calibration point and without the need for camera setting adjustments. In the case of a stationary blackbody, the proposed method is able to produce deviations of less than 5 K from the reference (thermocouple-measured) temperature in a measurement range within 100 K from the calibration temperature. The method was also tested by visualization of rotating melt film in the rock wool production process. The resulting temperature fields are characterized by a very good temporal and spatial resolution (18,700 frames per second at 128 × 328 pixels and 8000 frames per second at 416 × 298 pixels). PMID:26368973
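
    With a single calibration point and a camera whose pixel signal is roughly proportional to the spectral radiance at an effective wavelength, the temperature of a hot greybody can be estimated from an intensity ratio using Wien's approximation. The sketch below illustrates that inversion; the effective wavelength, calibration values, and the simple proportionality assumption are ours, not the full calibration procedure of the paper.

```python
import numpy as np

# Single-calibration-point temperature estimate in the spirit of the method
# above: assume the pixel signal is proportional to the greybody spectral
# radiance at one effective wavelength and invert Wien's approximation,
# S/S_cal = exp(-C2/(lam*T)) / exp(-C2/(lam*T_cal)).
# The effective wavelength and calibration values are illustrative only.

C2 = 1.4388e-2  # second radiation constant, m*K

def temperature_from_signal(signal, signal_cal, t_cal_kelvin, wavelength_m):
    inv_t = 1.0 / t_cal_kelvin - (wavelength_m / C2) * np.log(signal / signal_cal)
    return 1.0 / inv_t

if __name__ == "__main__":
    lam = 650e-9        # hypothetical effective wavelength of the camera's red channel
    t_cal = 1273.15     # calibration point at 1000 degC
    s_cal = 1.0         # signal recorded at the calibration point (arbitrary units)
    for s in (0.5, 1.0, 2.0, 4.0):
        t = temperature_from_signal(s, s_cal, t_cal, lam)
        print(f"S/S_cal = {s:3.1f} -> T = {t - 273.15:6.1f} degC")
```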

  1. Design methodology for high-speed video processing system based on signal integrity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Zhang, Hao

    2009-07-01

    Because of the high performance requirements of video processing systems and the shortcomings of conventional circuit design methods, a design methodology based on signal integrity (SI) theory is proposed for a high-speed video processing system built around TI's TMS320DM642 digital signal processor. With this methodology, the PCB stack-up and construction of the system as well as the transmission-line characteristic impedances are first set and calculated with the impedance control tool Si8000. Crucial signals, such as the SDRAM data lines, are then simulated and analyzed with IBIS models so that reasonable layout and routing rules can be established. Finally, the system's high-density PCB design is completed on the Cadence SPB 15.7 platform. The design result shows that this methodology effectively restrains signal reflection, crosstalk, rail collapse noise and electromagnetic interference (EMI), significantly improving the stability of the system and shortening development cycles.
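
    The impedance-control step above is normally done with a field solver such as Si8000, but a quick sanity check of a surface-microstrip geometry can be made with the familiar IPC-2141-style closed-form approximation shown below. The formula is only valid over a limited range of geometries, and the FR-4 dimensions in the example are hypothetical, not the stack-up used in the paper.

```python
import math

# Closed-form sanity check of microstrip characteristic impedance, in the
# spirit of the impedance-control step described above. This is the common
# IPC-2141-style approximation for surface microstrip, not the Si8000 field
# solver used by the authors, and it is only accurate for roughly
# 0.1 < w/h < 2.0.

def microstrip_z0(w_mm, h_mm, t_mm, er):
    """w: trace width, h: dielectric height, t: copper thickness, er: relative permittivity."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

if __name__ == "__main__":
    # Hypothetical FR-4 geometry aiming at ~50 ohm single-ended traces.
    z0 = microstrip_z0(w_mm=0.25, h_mm=0.15, t_mm=0.035, er=4.3)
    print(f"Z0 = {z0:.1f} ohm")
```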

  2. Operational experience with a high speed video data acquisition system in Fermilab experiment E-687

    SciTech Connect

    Baumbaugh, A.E.; Knickerbocker, K.L.; Baumbaugh, B.; Ruchti, R.

    1987-10-21

    Operation of a high-speed, triggerable Video Data Acquisition System (VDAS), including a hardware data compactor and a 16 megabyte first-in-first-out buffer memory (FIFO), will be discussed. Active target imaging techniques for high energy physics are described and preliminary experimental data are reported. The hardware architecture for the imaging system and experiment will be discussed, as well as other applications for the imaging system. The data rate of the compactor is over 30 megabytes/sec and the FIFO has been run at 100 megabytes/sec. The system can be operated at standard video rates or at any rate up to 30 million pixels/second. 7 refs., 3 figs.

  3. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
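
    At the heart of the multi-camera reconstruction is a triangulation step: given the projection matrices of two synchronized cameras and the pixel coordinates of the same clast in both views, its 3D position follows from a linear (DLT-style) least-squares intersection of the two viewing rays. The sketch below shows that core step only; the camera geometry is hypothetical and the authors' modified algorithms for high particle velocities and distant tie points are not reproduced.

```python
import numpy as np

# Linear (DLT-style) two-view triangulation: recover a 3D point from its pixel
# coordinates in two synchronized, calibrated cameras. Generic sketch with a
# hypothetical camera geometry; it omits the error handling the authors added
# for very fast particles and distant tie points.

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel coordinates."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                     # homogeneous -> Euclidean

if __name__ == "__main__":
    # Two hypothetical cameras about 10 degrees apart viewing a point ~350 m away.
    K = np.array([[2000.0, 0.0, 640.0], [0.0, 2000.0, 480.0], [0.0, 0.0, 1.0]])
    ang = np.deg2rad(10.0)
    R1, t1 = np.eye(3), np.zeros(3)
    R2 = np.array([[np.cos(ang), 0.0, np.sin(ang)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(ang), 0.0, np.cos(ang)]])
    t2 = np.array([-60.0, 0.0, 0.0])
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    X_true = np.array([0.0, 0.0, 350.0, 1.0])
    uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, uv1, uv2))    # approximately [0, 0, 350]
```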

  4. ARINC 818 express for high-speed avionics video and power over coax

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Alexander, Jon

    2012-06-01

    CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3 Gbps (up to 6 Gbps) communication, includes a 21 Mbps return path allowing for bidirectional communication, and provides up to 13 W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus", is a protocol standard developed specifically for high-speed, mission-critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer from CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2 Gbps, but beyond 2 Gbps it has been implemented exclusively over fiber optic links. In many rugged applications a copper interface is still desired; implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6 Gbps copper interfaces for ARINC 818. Results of the successful technology demonstration, dubbed ARINC 818 Express, are presented, showing 3 Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.

  5. Multi-Camera Reconstruction of Fine Scale High Speed Auroral Dynamics

    NASA Astrophysics Data System (ADS)

    Hirsch, M.; Semeter, J. L.; Zettergren, M. D.; Dahlgren, H.; Goenka, C.; Akbari, H.

    2014-12-01

    The fine spatial structure of dispersive aurora is known to have ground-observable scales of less than 100 meters. The lifetime of prompt emissions is much less than 1 millisecond, and high-speed cameras have observed auroral forms with millisecond scale morphology. Satellite observations have corroborated these spatial and temporal findings. Satellite observation platforms give a very valuable yet passing glance at the auroral region and the precipitation driving the aurora. To gain further insight into the fine structure of accelerated particles driven into the ionosphere, ground-based optical instruments staring at the same region of sky can capture the evolution of processes evolving on time scales from milliseconds to many hours, with continuous sample rates of 100Hz or more. Legacy auroral tomography systems have used baselines of hundreds of kilometers, capturing a "side view" of the field-aligned auroral structure. We show that short baseline (less than 10 km), high speed optical observations fill a measurement gap between legacy long baseline optical observations and incoherent scatter radar. The ill-conditioned inverse problem typical of auroral tomography, accentuated by short baseline optical ground stations is tackled with contemporary data inversion algorithms. We leverage the disruptive electron multiplying charge coupled device (EMCCD) imaging technology and solve the inverse problem via eigenfunctions obtained from a first-principles 1-D electron penetration ionospheric model. We present the latest analysis of observed auroral events from the Poker Flat Research Range near Fairbanks, Alaska. We discuss the system-level design and performance verification measures needed to ensure consistent performance for nightly multi-terabyte data acquisition synchronized between stations to better than 1 millisecond.
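
    The ill-conditioned inversion mentioned above is commonly stabilized by regularization. As a generic stand-in for the eigenfunction-based approach the authors use, the sketch below solves a small Tikhonov-regularized least-squares problem for a smeared forward model; the forward matrix and source profile are synthetic illustrations, not the ionospheric model of the paper.

```python
import numpy as np

# Generic Tikhonov-regularized inversion, min ||A x - b||^2 + lam^2 ||x||^2,
# as a stand-in illustration for the ill-conditioned tomographic inversion
# described above. The forward model and source profile are synthetic.

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + (lam ** 2) * np.eye(n), A.T @ b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_pix, n_src = 40, 30
    centers = np.linspace(0.0, 1.0, n_pix)[:, None]
    bins = np.linspace(0.0, 1.0, n_src)[None, :]
    A = np.exp(-((centers - bins) ** 2) / (2 * 0.15 ** 2))   # smeared, nearly collinear rows
    x_true = np.exp(-((np.linspace(0.0, 1.0, n_src) - 0.4) ** 2) / (2 * 0.05 ** 2))
    b = A @ x_true + rng.normal(0.0, 0.01, n_pix)
    x_hat = tikhonov(A, b, lam=0.1)
    print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```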

  6. Initial laboratory evaluation of color video cameras

    SciTech Connect

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND-91-2579). It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  7. Initial laboratory evaluation of color video cameras

    NASA Astrophysics Data System (ADS)

    Terry, P. L.

    1991-12-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled 'Initial Laboratory Evaluation of Color Video Cameras' (SAND-91-2579). It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  8. Synchronization of high speed framing camera and intense electron-beam accelerator

    NASA Astrophysics Data System (ADS)

    Cheng, Xin-Bing; Liu, Jin-Liang; Hong, Zhi-Qiang; Qian, Bao-Liang

    2012-06-01

    A new trigger program is proposed to realize the synchronization of a high speed framing camera (HSFC) and an intense electron-beam accelerator (IEBA). The trigger program, which includes acquisition of the light signal radiated from the main switch of the IEBA and a signal processing circuit, provides a trigger signal with a rise time of 17 ns and an amplitude of about 5 V. First, the light signal was collected by an avalanche photodiode (APD) module, and the delay time between the output voltage of the APD and the load voltage of the IEBA was measured to be about 35 ns. Subsequently, the output voltage of the APD was processed further by the signal processing circuit to obtain the trigger signal. Finally, by combining the trigger program with an IEBA, the trigger program operated stably, and a delay time of 30 ns between the trigger signal of the HSFC and the output voltage of the IEBA was obtained. Meanwhile, when surface flashover occurred at the high density polyethylene sample, the delay time between the trigger signal of the HSFC and the flashover current was up to 150 ns, which satisfied the need for synchronization of the HSFC and IEBA. The experimental results proved that the trigger program could compensate for the trigger-signal processing time and the inherent delay time of the HSFC.

  9. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Jacobson, B. O.

    1983-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation. Previously announced in STAR as N82-20543

  10. Synchronization of high speed framing camera and intense electron-beam accelerator

    SciTech Connect

    Cheng Xinbing; Liu Jinliang; Hong Zhiqiang; Qian Baoliang

    2012-06-15

    A new trigger program is proposed to realize the synchronization of a high speed framing camera (HSFC) and an intense electron-beam accelerator (IEBA). The trigger program, which includes acquisition of the light signal radiated from the main switch of the IEBA and a signal processing circuit, provides a trigger signal with a rise time of 17 ns and an amplitude of about 5 V. First, the light signal was collected by an avalanche photodiode (APD) module, and the delay time between the output voltage of the APD and the load voltage of the IEBA was measured to be about 35 ns. Subsequently, the output voltage of the APD was processed further by the signal processing circuit to obtain the trigger signal. Finally, by combining the trigger program with an IEBA, the trigger program operated stably, and a delay time of 30 ns between the trigger signal of the HSFC and the output voltage of the IEBA was obtained. Meanwhile, when surface flashover occurred at the high density polyethylene sample, the delay time between the trigger signal of the HSFC and the flashover current was up to 150 ns, which satisfied the need for synchronization of the HSFC and IEBA. The experimental results proved that the trigger program could compensate for the trigger-signal processing time and the inherent delay time of the HSFC.

  11. Measurement of inkjet first-drop behavior using a high-speed camera.

    PubMed

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time is well known to be possibly different from that which occurs in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of a first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was determined to be an integer multiple of the jetting frequency, and in this manner, we can directly compare the droplet locations of each drop in order to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity. PMID:27036813

  12. Measurement of inkjet first-drop behavior using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time is well known to be possibly different from that which occurs in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of a first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was determined to be an integer multiple of the jetting frequency, and in this manner, we can directly compare the droplet locations of each drop in order to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity.
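
    Two ingredients of the procedure above lend themselves to a short sketch: choosing the acquisition frame rate as an integer multiple of the jetting frequency, so that every drop is imaged at the same delay after its ejection pulse, and then comparing the position of the first few drops with the steady-state average. The numbers in the example below (jetting frequency, drop positions) are hypothetical.

```python
import numpy as np

# Sketch of the measurement idea above: pick a camera frame rate that is an
# integer multiple of the jetting frequency so each drop is imaged at the same
# phase, then quantify how far the first few drops deviate from the steady
# state. All numerical values are hypothetical.

def choose_frame_rate(jet_freq_hz, frames_per_drop):
    """Frame rate that samples every drop at identical delays after ejection."""
    return frames_per_drop * jet_freq_hz

def first_drop_deviation(drop_positions_um, steady_from=20):
    """Deviation of each drop's position from the steady-state mean (um)."""
    steady = np.mean(drop_positions_um[steady_from:])
    return drop_positions_um - steady

if __name__ == "__main__":
    print("frame rate:", choose_frame_rate(jet_freq_hz=1000, frames_per_drop=10), "fps")
    rng = np.random.default_rng(2)
    positions = 500.0 + rng.normal(0.0, 2.0, 40)   # drop position at a fixed delay, um
    positions[0] -= 35.0                            # a deviating first drop after a pause
    positions[1] -= 12.0
    print("first three deviations (um):", np.round(first_drop_deviation(positions)[:3], 1))
```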

  13. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  14. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
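
    Both corrections described in these reports come down to remapping image coordinates with a low-order model whose coefficients are estimated from calibration imagery (a measured grid for the electronic distortion, straight plumb lines for the lens distortion). The sketch below applies a generic radial lens-distortion correction of the kind a plumb-line calibration yields; the coefficients and principal point are hypothetical, not values from these papers.

```python
import numpy as np

# Generic radial lens-distortion correction of the kind estimated by a
# plumb-line calibration: image points are mapped back toward their ideal
# (undistorted) positions with low-order radial coefficients. The principal
# point and coefficients below are hypothetical.

def undistort_points(uv, center, k1, k2=0.0):
    """uv: (N, 2) pixel coordinates; returns approximately undistorted coordinates."""
    xy = uv - center
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    # Approximate inverse of the radial model u_d = u_u * (1 + k1*r^2 + k2*r^4),
    # evaluated at the distorted radius.
    return center + xy / (1.0 + k1 * r2 + k2 * r2 ** 2)

if __name__ == "__main__":
    center = np.array([320.0, 240.0])
    pts = np.array([[100.0, 80.0], [320.0, 240.0], [600.0, 400.0]])
    print(undistort_points(pts, center, k1=1e-7))
```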

  15. An Impact Velocity Device Design for Blood Spatter Pattern Generation with Considerations for High-Speed Video Analysis.

    PubMed

    Stotesbury, Theresa; Illes, Mike; Vreugdenhil, Andrew J

    2016-03-01

    A mechanical device that uses gravitational and spring compression forces to create spatter patterns of known impact velocities is presented and discussed. The custom-made device uses either two or four springs (k1 = 267.8 N/m, k2 = 535.5 N/m) in parallel to create seventeen reproducible impact velocities between 2.1 and 4.0 m/s. The impactor is held at several known spring extensions using an electromagnet. Trigger inputs to the high-speed video camera allow the user to control the magnet's release while capturing video footage simultaneously. A polycarbonate base is used to allow for simultaneous monitoring of the side and bottom views of the impact event. Twenty-four patterns were created across the impact velocity range and analyzed using HemoSpat. Area of origin estimations fell within an acceptable range (ΔXav = -5.5 ± 1.9 cm, ΔYav = -2.6 ± 2.8 cm, ΔZav = +5.5 ± 3.8 cm), supporting distribution analysis for the use in research or bloodstain pattern training. This work provides a framework for those interested in developing a robust impact device. PMID:27404625
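
    Neglecting friction, the impact velocity such a device produces follows from an energy balance between the gravitational drop, the stored spring energy, and the impactor's kinetic energy: (1/2) m v² = m g h + (1/2) k_eff x². The sketch below evaluates that relation; the spring constants are the values quoted in the abstract, while the impactor mass, drop height, spring extension, and the parallel-spring combinations used are hypothetical.

```python
import math

# Energy-balance sketch of the impact velocity of the device described above:
# (1/2) m v^2 = m g h + (1/2) k_eff x^2, friction neglected. The spring
# constants are from the abstract; the mass, drop height, extension, and the
# way the springs are combined here are hypothetical.

G = 9.81  # m/s^2

def impact_velocity(mass_kg, drop_height_m, spring_extension_m, k_eff):
    return math.sqrt(2.0 * G * drop_height_m + k_eff * spring_extension_m ** 2 / mass_kg)

if __name__ == "__main__":
    k1, k2 = 267.8, 535.5                 # N/m, quoted in the abstract
    combos = (("two springs (2*k1)", 2 * k1), ("four springs (2*k1 + 2*k2)", 2 * k1 + 2 * k2))
    for label, k_eff in combos:
        v = impact_velocity(mass_kg=1.5, drop_height_m=0.30, spring_extension_m=0.10, k_eff=k_eff)
        print(f"{label}: k_eff = {k_eff:6.1f} N/m -> v = {v:4.2f} m/s")
```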

  16. Practical use of high-speed cameras for research and development within the automotive industry: yesterday and today

    NASA Astrophysics Data System (ADS)

    Steinmetz, Klaus

    1995-05-01

    Within the automotive industry, especially for the development and improvement of safety systems, we find many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam', and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high-speed cameras.

  17. High speed video analysis of rockfall fence system evaluation. Final report

    SciTech Connect

    Fry, D.A.; Lucero, J.P.

    1998-07-01

    Rockfall fence systems are used to protect motorists from rocks, dislodged from slopes near roadways, which would potentially roll onto the road at high speeds carrying significant energy. There is an unfortunate list of such rocks on unprotected roads that have caused fatalities and other damage. Los Alamos National Laboratory (LANL) personnel from the Engineering Science and Applications Division, Measurement Technology Group (ESA-MT), participated in a series of rockfall fence system tests at a test range in Rifle, Colorado during March 1998. The tests were for the evaluation and certification of four rockfall fence system designs of Chama Valley Manufacturing (CVM), a Small Business, located in Chama, New Mexico. Also participating in the tests were the Colorado Department of Transportation (CDOT) who provided the test range and some heavy equipment support and High Tech Construction who installed the fence systems. LANL provided two high speed video systems and operators to record each individual rockfall on each fence system. From the recordings LANL then measured the linear and rotational velocities at impact for each rockfall. Using the LANL velocity results, CVM then could calculate the impact energy of each rockfall and therefore certify each design up to the maximum energy that each fence system could absorb without failure. LANL participated as an independent, impartial velocity measurement entity only and did not contribute to the fence systems design or installation. CVM has published a more detailed final report covering all aspects of the project.

  18. High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques.

    PubMed

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras(1). The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in(1-5)). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame(6,7). Binary defocusing DFP methods can achieve even greater speeds(8). Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis(9), facial animation(10), cardiac mechanics studies(11), and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674
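
    The core computation behind DFP is compact: three fringe images with phase shifts of -2π/3, 0 and +2π/3 give a wrapped phase map through an arctangent formula, which is then unwrapped and converted to depth by a separate calibration. The sketch below shows that textbook three-step calculation on synthetic fringes; it is not the authors' GPU-accelerated pipeline, and the fringe parameters are arbitrary.

```python
import numpy as np

# Textbook three-step phase-shifting calculation at the heart of DFP: three
# fringe images with -120, 0, +120 degree phase shifts yield a wrapped phase
# map, phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3), which is then unwrapped
# and mapped to depth by calibration (not shown). Synthetic fringes only.

def wrapped_phase(i1, i2, i3):
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

if __name__ == "__main__":
    # Synthetic flat-surface fringes to verify the formula.
    x = np.linspace(0.0, 4 * np.pi, 512)
    phi_true = np.tile(x, (64, 1))
    shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
    i1, i2, i3 = (0.5 + 0.4 * np.cos(phi_true + d) for d in shifts)
    phi = np.unwrap(wrapped_phase(i1, i2, i3), axis=1)
    print("max phase error:", np.max(np.abs(phi - phi_true)))
```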

  19. Lifetime and structures of TLEs captured by high-speed camera on board aircraft

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sanmiya, Y.; Sato, M.; Kudo, T.; Inoue, T.

    2012-12-01

    The temporal development of a sprite streamer is a manifestation of the local electric field and conductivity. Therefore, in order to understand the mechanisms of sprites, which show a large variety of temporal and spatial structures, detailed analysis of both fine and macro-structures with high time resolution is the key approach. However, due to the long distance from the optical equipment to the phenomena and to contamination by aerosols, it is not easy to get clear images of TLEs from the ground. In the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of the NHK Cosmic Shore project, was carried out with two jet airplanes under collaboration between NHK, Japan Broadcasting Corporation, and universities. On 8 of 16 standby nights, the jets took off from the airport near Denver, Colorado, and an airborne high-speed camera captured over 60 TLE events at a frame rate of 8000-10,000 frames/sec. Some of them show several tens of streamers in one sprite event, which repeatedly split at the down-going ends of streamers or beads. The velocities of the bottom ends and the variations of their brightness were traced carefully. It was found that the top velocity is maintained only for the brightest beads, while others slow down just after splitting. Also, the whole luminosity of one sprite event has a short duration with rapid downward motion if the charge moment change of the parent lightning is large. The relationship between diffuse glows, such as elves and sprite halos, and the subsequent discrete structure of sprite streamers is also examined. In most cases the halo and elves seem to show inhomogeneous structures before being accompanied by streamers, which develop into bright spots or streamers with acceleration of the velocity. These characteristics of the velocity and lifetime of TLEs provide key information on their generation mechanism.

  20. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  1. High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques

    PubMed Central

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras1. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in1-5). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame6,7. Binary defocusing DFP methods can achieve even greater speeds8. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis9, facial animation10, cardiac mechanics studies11, and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674

  2. Neural network method for characterizing video cameras

    NASA Astrophysics Data System (ADS)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, with the error back-propagation learning rule for training, is used as a nonlinear transformer to model the camera, realizing a mapping from the CIELAB color space to the RGB color space. With a SONY video camera, a D65 illuminant, a Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256³ RGB space.
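
    A minimal modern equivalent of the mapping described above can be sketched with an off-the-shelf multilayer perceptron regressor. In the example below the "measured" chart data are synthetic stand-ins generated by an invented device model, and the network size is arbitrary; it is not the authors' architecture, training implementation, or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of a feedforward CIELAB -> RGB mapping of the kind described above,
# trained by backpropagation. The "chart measurements" are synthetic stand-ins
# produced by an invented device model; network size and data are illustrative.

def synthetic_pairs(n, rng):
    """Fake device RGB values and corresponding 'measured' Lab-like values."""
    rgb = rng.uniform(0.0, 1.0, size=(n, 3))
    lab = np.column_stack([
        100.0 * (0.299 * rgb[:, 0] + 0.587 * rgb[:, 1] + 0.114 * rgb[:, 2]) ** 0.9,
        120.0 * (rgb[:, 0] - rgb[:, 1]),
        120.0 * (rgb[:, 1] - rgb[:, 2]),
    ])
    return lab, rgb

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    lab_train, rgb_train = synthetic_pairs(410, rng)   # 410 training charts, as in the abstract
    lab_test, rgb_test = synthetic_pairs(36, rng)      # 36 testing charts
    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    net.fit(lab_train, rgb_train)
    err = np.abs(net.predict(lab_test) - rgb_test)
    print("mean absolute RGB error on test charts:", err.mean())
```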

  3. Eulerian frequency analysis of structural vibrations from high-speed video

    NASA Astrophysics Data System (ADS)

    Venanzoni, Andrea; De Ryck, Laurent; Cuenca, Jacques

    2016-06-01

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale - or level - can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content
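
    In its simplest Eulerian form, the global frequency-content analysis above reduces to taking the FFT of each pixel's intensity time history and mapping the dominant frequency across the frame. The sketch below does exactly that on a synthetic clip; it omits the Laplacian-pyramid decomposition and the angular spatial filtering described in the paper, and the frame rate and test frequencies are arbitrary.

```python
import numpy as np

# Minimal Eulerian per-pixel frequency analysis: FFT each pixel's intensity
# time history and map the dominant frequency over the frame. This sketch
# omits the Laplacian-pyramid and angular-filtering steps of the method above.

def dominant_frequency_map(video, fps):
    """video: (T, H, W) gray-scale frames; returns an (H, W) map in Hz."""
    t = video.shape[0]
    spectra = np.abs(np.fft.rfft(video - video.mean(axis=0), axis=0))
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    return freqs[np.argmax(spectra[1:], axis=0) + 1]   # skip the DC bin

if __name__ == "__main__":
    fps, t, h, w = 2000, 512, 16, 16
    time = np.arange(t) / fps
    video = np.zeros((t, h, w))
    video[:, :, :8] = np.sin(2 * np.pi * 120.0 * time)[:, None, None]   # left half vibrates near 120 Hz
    video[:, :, 8:] = np.sin(2 * np.pi * 310.0 * time)[:, None, None]   # right half near 310 Hz
    fmap = dominant_frequency_map(video, fps)
    print(fmap[0, 0], fmap[0, -1])   # approximately 120 Hz and 310 Hz
```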

  4. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, which is a rapid sequence of photographs that allow an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography 16, 35, and 70 mm film and framing rates between 64-12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  5. High speed imaging - An important industrial tool

    NASA Astrophysics Data System (ADS)

    Moore, Alton; Pinelli, Thomas E.

    1986-05-01

    High-speed photography, which is a rapid sequence of photographs that allow an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography 16, 35, and 70 mm film and framing rates between 64-12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  6. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  7. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  8. Determining aerodynamic coefficients from high speed video of a free-flying model in a shock tunnel

    NASA Astrophysics Data System (ADS)

    Neely, Andrew J.; West, Ivan; Hruschka, Robert; Park, Gisu; Mudford, Neil R.

    2008-11-01

    This paper describes the application of the free flight technique to determine the aerodynamic coefficients of a model for the flow conditions produced in a shock tunnel. Sting-based force measurement techniques either lack the required temporal response or are restricted to large complex models. Additionally the free flight technique removes the flow interference produced by the sting that is present for these other techniques. Shock tunnel test flows present two major challenges to the practical implementation of the free flight technique. These are the millisecond-order duration of the test flows and the spatial and temporal nonuniformity of these flows. These challenges are overcome by the combination of an ultra-high speed digital video camera to record the trajectory, with spatial and temporal mapping of the test flow conditions. Use of a lightweight model ensures sufficient motion during the test time. The technique is demonstrated using the simple case of drag measurement on a spherical model, free flown in a Mach 10 shock tunnel condition.
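
    Once the model's streamwise displacement has been tracked frame by frame, a drag coefficient follows from fitting a constant deceleration to x(t) and applying C_D = 2 m a / (ρ u² A). The sketch below illustrates that reduction on a synthetic trajectory; the model mass, flow density, and relative velocity are hypothetical, not the Mach 10 shock-tunnel condition of the paper.

```python
import numpy as np

# Sketch of drag extraction from a tracked free-flight trajectory: fit a
# constant acceleration to the streamwise displacement x(t) and convert it via
# C_D = 2*m*a / (rho * u^2 * A). The mass, flow density, and relative velocity
# below are hypothetical, not the shock-tunnel condition of the paper.

def drag_coefficient(times_s, x_m, mass_kg, rho, u_rel, diameter_m):
    c2, _, _ = np.polyfit(times_s, x_m, 2)   # x(t) = x0 + v0*t + 0.5*a*t^2
    accel = 2.0 * c2
    area = np.pi * (diameter_m / 2.0) ** 2
    return 2.0 * mass_kg * accel / (rho * u_rel ** 2 * area), accel

if __name__ == "__main__":
    # Synthetic trajectory over a ~1 ms test time, sampled at 100 kHz.
    t = np.arange(0.0, 1e-3, 1e-5)
    a_true = 900.0                            # m/s^2, assumed drag deceleration
    x = 0.5 * a_true * t ** 2
    cd, a_fit = drag_coefficient(t, x, mass_kg=2e-3, rho=0.01, u_rel=1400.0, diameter_m=0.02)
    print(f"fitted acceleration = {a_fit:.1f} m/s^2, C_D = {cd:.2f}")
```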

  9. High-Speed Video-Oculography for Measuring Three-Dimensional Rotation Vectors of Eye Movements in Mice

    PubMed Central

    Takeda, Noriaki; Uno, Atsuhiko; Inohara, Hidenori; Shimada, Shoichi

    2016-01-01

    Background The mouse is the most commonly used animal model in biomedical research because of recent advances in molecular genetic techniques. Studies related to eye movement in mice are common in fields such as ophthalmology relating to vision, neuro-otology relating to the vestibulo-ocular reflex (VOR), neurology relating to the cerebellum’s role in movement, and psychology relating to attention. Recording eye movements in mice, however, is technically difficult. Methods We developed a new algorithm for analyzing the three-dimensional (3D) rotation vector of eye movement in mice using high-speed video-oculography (VOG). The algorithm made it possible to analyze the gain and phase of VOR using the eye’s angular velocity around the axis of eye rotation. Results When mice were rotated at 0.5 Hz and 2.5 Hz around the earth’s vertical axis with their heads in a 30° nose-down position, the vertical components of their left eye movements were in phase with the horizontal components. The VOR gain was 0.42 at 0.5 Hz and 0.74 at 2.5 Hz, and the phase lead of the eye movement against the turntable was 16.1° at 0.5 Hz and 4.88° at 2.5 Hz. Conclusions To the best of our knowledge, this is the first report of this algorithm being used to calculate a 3D rotation vector of eye movement in mice using high-speed VOG. We developed a technique for analyzing the 3D rotation vector of eye movements in mice with a high-speed infrared CCD camera. We concluded that the technique is suitable for analyzing eye movements in mice. We also include a C++ source code that can calculate the 3D rotation vectors of the eye position from two-dimensional coordinates of the pupil and the iris freckle in the image to this article. PMID:27023859

  10. High-speed video observations of the fine structure of a natural negative stepped leader at close distance

    NASA Astrophysics Data System (ADS)

    Qi, Qi; Lu, Weitao; Ma, Ying; Chen, Luwen; Zhang, Yijun; Rakov, Vladimir A.

    2016-09-01

    We present new high-speed video observations of a natural downward negative lightning flash that occurred at a close distance of 350 m. The stepped leader of this flash was imaged by three high-speed video cameras operating at framing rates of 1000, 10,000 and 50,000 frames per second, respectively. Synchronized electromagnetic field records were also obtained. Nine pronounced field pulses which we attributed to individual leader steps were recorded. The time intervals between the step pulses ranged from 13.9 to 23.9 μs, with a mean value of 17.4 μs. Further, for the first time, smaller pulses were observed between the pronounced step pulses in the magnetic field derivative records. Time intervals between the smaller pulses (indicative of intermittent processes between steps) ranged from 0.9 to 5.5 μs, with a mean of 2.2 μs and a standard deviation of 0.82 μs. A total of 23 luminous segments, commonly attributed to space stems/leaders, were captured. Their two-dimensional lengths varied from 1 to 13 m, with a mean of 5 m. The distances between the luminous segments and the existing leader channels ranged from 1 to 8 m, with a mean value of 4 m. Three possible scenarios of the evolution of space stems/leaders located beside the leader channel have been inferred: (A) the space stem/leader fails to make connection to the leader channel; (B) the space stem/leader connects to the existing leader channel, but may die off and be followed, tens of microseconds later, by a low luminosity streamer; (C) the space stem/leader connects to the existing channel and launches an upward-propagating luminosity wave. Weakly luminous filamentary structures, which we interpreted as corona streamers, were observed emanating from the leader tip. The stepped leader branches extended downward with speeds ranging from 4.1 × 10⁵ to 14.6 × 10⁵ m s⁻¹.

  11. High speed video shooting with continuous-wave laser illumination in laboratory modeling of wind - wave interaction

    NASA Astrophysics Data System (ADS)

    Kandaurov, Alexander; Troitskaya, Yuliya; Caulliez, Guillemette; Sergeev, Daniil; Vdovin, Maxim

    2014-05-01

    Three examples of the use of high-speed video filming in the investigation of wind-wave interaction in laboratory conditions are described. Experiments were carried out at the Wind-wave stratified flume of IAP RAS (length 10 m, cross section of air channel 0.4 x 0.4 m, wind velocity up to 24 m/s) and at the Large Air-Sea Interaction Facility (LASIF) - MIO/Luminy (length 40 m, cross section of air channel 3.2 x 1.6 m, wind velocity up to 10 m/s). A combination of PIV measurements, optical measurements of the water surface form and wave gages was used for detailed investigation of the characteristics of the wind flow over the water surface. The modified PIV method is based on the use of continuous-wave (CW) laser illumination of the airflow seeded by particles and high-speed video. During the experiments on the Wind-wave stratified flume of IAP RAS, a green (532 nm) CW laser with 1.5 W output power was used as the source for the light sheet. A high-speed digital camera, Videosprint (VS-Fast), was used for taking visualized air flow images at a frame rate of 2000 Hz. The velocity field of the air flow was retrieved by PIV image processing with an adaptive cross-correlation method on a curvilinear grid following the surface wave profile. The mean wind velocity profiles were retrieved using conditional in-phase averaging as in [1]. In the experiments on the LASIF a more powerful argon laser (4 W, CW) was used, as well as a high-speed camera with higher sensitivity and resolution: Optronics Camrecord CR3000x2, frame rate 3571 Hz, frame size 259×1696 px. In both series of experiments spherical 0.02 mm polyamide particles with an inertial time of 7 ms were used for seeding the airflow. A new particle seeding system based on utilization of air pressure is capable of injecting 2 g of particles per second for 1.3 - 2.4 s without flow disturbance. Used in LASIF, this system provided high particle density on PIV images. In combination with the high-resolution camera it allowed us to obtain momentum fluxes directly from
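
    The core of the PIV processing mentioned above is a cross-correlation of interrogation windows between consecutive frames, with the correlation peak giving the mean particle displacement. The sketch below shows that step with an FFT-based circular correlation on a synthetic window pair; it omits the adaptive, wave-following curvilinear grid and conditional averaging used in the experiments.

```python
import numpy as np

# FFT-based cross-correlation of a PIV interrogation window between two
# consecutive frames; the correlation peak gives the mean particle
# displacement. Generic sketch on synthetic data; the adaptive curvilinear
# grid following the wave profile is not reproduced here.

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a, as (dy, dx)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT indices to signed shifts (indices past the midpoint are negative).
    return tuple(int(p if p <= s // 2 else p - s) for p, s in zip(peak, corr.shape))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    frame = rng.random((64, 64))
    shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))   # known displacement
    print(window_displacement(frame, shifted))             # -> (3, -5)
```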

  12. The Mechanical Properties of Early Drosophila Embryos Measured by High-Speed Video Microrheology

    PubMed Central

    Wessel, Alok D.; Gumalla, Maheshwar; Grosshans, Jörg; Schmidt, Christoph F.

    2015-01-01

    In early development, Drosophila melanogaster embryos form a syncytium, i.e., multiplying nuclei are not yet separated by cell membranes, but are interconnected by cytoskeletal polymer networks consisting of actin and microtubules. Between division cycles 9 and 13, nuclei and cytoskeleton form a two-dimensional cortical layer. To probe the mechanical properties and dynamics of this self-organizing pre-tissue, we measured shear moduli in the embryo by high-speed video microrheology. We recorded position fluctuations of injected micron-sized fluorescent beads with kHz sampling frequencies and characterized the viscoelasticity of the embryo in different locations. Thermal fluctuations dominated over nonequilibrium activity for frequencies between 0.3 and 1000 Hz. Between the nuclear layer and the yolk, the cytoplasm was homogeneous and viscously dominated, with a viscosity three orders of magnitude higher than that of water. Within the nuclear layer we found an increase of the elastic and viscous moduli consistent with an increased microtubule density. Drug-interference experiments showed that microtubules contribute to the measured viscoelasticity inside the embryo whereas actin only plays a minor role in the regions outside of the actin caps that are closely associated with the nuclei. Measurements at different stages of the nuclear division cycle showed little variation. PMID:25902430
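
    The shear moduli reported above are obtained from bead position fluctuations; as a simplified, hedged sketch (not the authors' analysis, which uses the full frequency-dependent power spectrum), the snippet below estimates a purely viscous modulus from the mean-squared displacement of a 2-D bead track via the Stokes-Einstein relation. Bead radius, temperature and sampling rate are assumed values.

```python
import numpy as np

kB = 1.380649e-23     # Boltzmann constant, J/K
T = 298.0             # temperature, K (assumed)
a = 0.5e-6            # bead radius, m (assumed 1 µm diameter tracer)
fs = 1000.0           # sampling rate, Hz (kHz range, as in the abstract)

def viscosity_from_track(x, y, lag=10):
    """Viscosity of a purely viscous medium from a 2-D bead track:
    MSD(tau) = 4 D tau in two dimensions and D = kB T / (6 pi eta a)."""
    tau = lag / fs
    dx, dy = x[lag:] - x[:-lag], y[lag:] - y[:-lag]
    msd = np.mean(dx**2 + dy**2)            # m^2
    D = msd / (4.0 * tau)                   # m^2/s
    return kB * T / (6.0 * np.pi * D * a)   # Pa s

# Synthetic check: Brownian track in a water-like fluid (eta = 1 mPa s)
eta_true = 1.0e-3
D_true = kB * T / (6 * np.pi * eta_true * a)
rng = np.random.default_rng(1)
steps = rng.normal(scale=np.sqrt(2 * D_true / fs), size=(2, 100_000))
x, y = np.cumsum(steps, axis=1)
print("estimated viscosity ~", viscosity_from_track(x, y), "Pa s")   # ~1e-3
```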

  13. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy

    NASA Astrophysics Data System (ADS)

    Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.

    2011-02-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second.
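
    One common way to turn such recordings into a propagation velocity (a hedged sketch, not necessarily the authors' algorithm) is to cross-correlate the diameter traces measured at two sites along the vessel and divide the site separation by the lag of the correlation peak. The site separation below is an assumed value; the frame rate matches the 300 fps quoted.

```python
import numpy as np

fs = 300.0        # frames per second, as in the study
dx = 0.5e-3       # separation of the two measurement sites, m (assumed)

def propagation_velocity(trace_a, trace_b):
    """Speed of a contraction wave travelling from site A to site B, from the
    lag of the peak of the cross-correlation of the two diameter traces."""
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)      # frames by which B trails A
    return dx / (lag / fs)                    # m/s

# Synthetic check: a contraction pulse reaching site B 15 frames after site A
t = np.arange(600)
at_a = np.exp(-0.5 * ((t - 200) / 10.0) ** 2)
at_b = np.exp(-0.5 * ((t - 215) / 10.0) ** 2)
print(propagation_velocity(at_a, at_b), "m/s")   # ~0.01 m/s, i.e. ~10 mm/s
```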

  14. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy.

    PubMed

    Akl, Tony J; Nepiyushchikh, Zhanna V; Gashev, Anatoliy A; Zawieja, David C; Coté, Gerard L

    2011-02-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second. PMID:21361700

  15. Studying the internal ballistics of a combustion-driven potato cannon using high-speed video

    NASA Astrophysics Data System (ADS)

    Courtney, E. D. S.; Courtney, M. W.

    2013-07-01

    A potato cannon was designed to accommodate several different experimental propellants and have a transparent barrel so the movement of the projectile could be recorded on high-speed video (at 2000 frames per second). Five experimental propellants were tested: propane (C3H8), acetylene (C2H2), ethanol (C2H6O), methanol (CH4O) and butane (C4H10). The quantity of each experimental propellant was calculated to approximate a stoichiometric mixture, while also considering the upper and lower flammability limits, which in turn were affected by the volume of the combustion chamber. Cylindrical projectiles were cut from raw potatoes so that there was an airtight fit, and each weighed 50 (± 0.5) g. For each trial, position as a function of time was determined via frame-by-frame analysis. Five trials were made for each experimental propellant and the results analyzed to compute velocity and acceleration as functions of time. Additional quantities, including the force on the potato and the pressure applied to the potato, were also computed. For each experimental propellant, average velocity versus barrel position curves were plotted. The most effective experimental propellant was defined as that which accelerated the potato to the highest muzzle velocity. The experimental propellant acetylene performed the best on average (138.1 m s⁻¹), followed by methanol (48.2 m s⁻¹), butane (34.6 m s⁻¹), ethanol (33.3 m s⁻¹) and propane (27.9 m s⁻¹), respectively.
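
    As a minimal sketch of the frame-by-frame analysis described (with invented position samples rather than the paper's data), velocity and acceleration follow from numerical differentiation of the tracked positions, and force and pressure on the projectile follow from Newton's second law; the bore diameter is an assumed value.

```python
import numpy as np

fps = 2000.0                  # camera frame rate used in the study
mass = 0.050                  # projectile mass, kg (50 g)
bore_diameter = 0.05          # barrel bore, m (assumed, for illustration only)
area = np.pi * (bore_diameter / 2) ** 2

# Illustrative projectile positions (m) read off successive frames
x = np.array([0.000, 0.001, 0.004, 0.010, 0.020, 0.035, 0.056, 0.084])
t = np.arange(len(x)) / fps

v = np.gradient(x, t)         # velocity, m/s (central differences)
a = np.gradient(v, t)         # acceleration, m/s^2
force = mass * a              # net force on the potato, N
pressure = force / area       # net pressure on the projectile base, Pa

for ti, vi, pi in zip(t, v, pressure):
    print(f"t = {ti*1e3:5.2f} ms   v = {vi:6.1f} m/s   p = {pi/1e3:7.1f} kPa")
```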

  16. High-speed communications enabling real-time video for battlefield commanders using tracked FSO

    NASA Astrophysics Data System (ADS)

    Al-Akkoumi, Mouhammad K.; Huck, Robert C.; Sluss, James J., Jr.

    2007-04-01

    Free Space Optics (FSO) technology is currently in use to solve the last-mile problem in telecommunication systems by offering higher bandwidth than wired or wireless connections when optical fiber is not available. Incorporating mobility into FSO technology can contribute to growth in its utility. Tracking and alignment are two big challenges for mobile FSO communications. In this paper, we present a theoretical approach for mobile FSO networks between Unmanned Aerial Vehicles (UAVs), manned aerial vehicles, and ground vehicles. We introduce tracking algorithms for achieving Line of Sight (LOS) connectivity and present analytical results. Two scenarios are studied in this paper: 1 - An unmanned aerial surveillance vehicle, the Global Hawk, with a stationary ground vehicle, an M1 Abrams Main Battle Tank, and 2 - a manned aerial surveillance vehicle, the E-3A Airborne Warning and Control System (AWACS), with an unmanned combat aerial vehicle, the Joint Unmanned Combat Air System (J-UCAS). After initial vehicle locations have been coordinated, the tracking algorithm will steer the gimbals to maintain connectivity between the two vehicles and allow high-speed communications to occur. Using this algorithm, data, voice, and video can be sent via the FSO connection from one vehicle to the other vehicle.
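
    The abstract does not give the tracking equations, but the core of any line-of-sight steering step is computing the azimuth and elevation from one terminal to the other once both positions are known. The hedged sketch below assumes a flat-earth local east-north-up (ENU) frame and illustrative coordinates; it is not the authors' algorithm.

```python
import math

def pointing_angles(own_enu, target_enu):
    """Azimuth (deg from north, clockwise) and elevation (deg above horizon)
    from one platform to another; positions in a local ENU frame, metres."""
    de = target_enu[0] - own_enu[0]       # east offset
    dn = target_enu[1] - own_enu[1]       # north offset
    du = target_enu[2] - own_enu[2]       # up offset
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0
    elevation = math.degrees(math.atan2(du, math.hypot(de, dn)))
    return azimuth, elevation

# Example: ground vehicle at the origin, aircraft 10 km north, 3 km east, 15 km up
az, el = pointing_angles((0.0, 0.0, 0.0), (3000.0, 10000.0, 15000.0))
print(f"steer gimbal to azimuth {az:.1f} deg, elevation {el:.1f} deg")
```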

  17. High-speed video evidence of a dart leader with bidirectional development

    NASA Astrophysics Data System (ADS)

    Jiang, Rubin; Wu, Zhijun; Qie, Xiushu; Wang, Dongfang; Liu, Mingyuan

    2014-07-01

    An upward negative cloud-to-ground lightning flash initiated from a high structure was detected by a high-speed camera operated at 10,000 fps, together with the coordinated measurement of electric field changes. Bidirectional propagation of a dart leader developing through the preconditioned channel was observed for the first time by optical means. The leader initially propagated downward through the upper channel with decreasing luminosity and speed and terminated at an altitude of about 2200 m. Subsequently, it resumed its development with both upward and downward channel extensions. The 2-D partial speed of the leader's upward propagation with positive polarity ranged between 3.2 × 10⁶ m/s and 1.1 × 10⁷ m/s with an average value of 6.4 × 10⁶ m/s, while the speeds of the downward propagation with negative polarity ranged between 1.0 × 10⁶ and 3.2 × 10⁶ m/s with an average value of 2.2 × 10⁶ m/s. The downward propagation of the bidirectional leader eventually reached the ground and induced a subsequent return stroke.

  18. Using a high-speed movie camera to evaluate slice dropping in clinical image interpretation with stack mode viewers.

    PubMed

    Yakami, Masahiro; Yamamoto, Akira; Yanagisawa, Morio; Sekiguchi, Hiroyuki; Kubo, Takeshi; Togashi, Kaori

    2013-06-01

    The purpose of this study is to verify objectively the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communication in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4 % of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate. PMID:23053908
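
    The bookkeeping implied by this method is straightforward; the sketch below (with an invented decoded sequence, not the study's data) counts the omitted slice numbers and applies the half-refresh-rate rule the authors state for safe paging speeds.

```python
# Slice numbers decoded from the slow-motion playback of one paging pass
# (hypothetical example -- in the study they were read off the recorded video)
shown = [1, 2, 3, 5, 6, 8, 9, 10, 12, 13]
expected = set(range(1, 14))          # slices 1..13 were paged through

dropped = sorted(expected - set(shown))
drop_rate = 100.0 * len(dropped) / len(expected)
print(f"dropped slices: {dropped} ({drop_rate:.1f}%)")

# Rule of thumb from the study: keep paging below half the display refresh rate
refresh_hz = 60.0
paging_hz = 35.0
if paging_hz > refresh_hz / 2:
    print("paging speed exceeds refresh_rate / 2 -- slice dropping is possible")
```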

  19. Temperature evaluation of a hyper-rapid plasma jet by the method of high-speed video recording

    NASA Astrophysics Data System (ADS)

    Rif, A. E.; Cherevko, V. V.; Ivashutenko, A. S.; Martyushev, N. V.; Nikonova, N. Ye

    2016-04-01

    This paper presents a procedure for the comparative evaluation of plasma temperature using high-speed video recording of fast processes. It has been established that the maximum plasma temperature reaches values exceeding 30 000 K for the hypervelocity electric-discharge plasma generated by a coaxial magnetoplasma accelerator, with the image analysis performed using the ImageJ software.

  20. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at both high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023

  1. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at both high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023

  2. Video Analysis with a Web Camera

    NASA Astrophysics Data System (ADS)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as Videopoint and Tracker, by Doug Brown, which is freely downloadable, could also be used. I purchased Logitech's QuickCam Pro 4000 web camera for $99 after Rick Sorensen at Vernier Software and Technology recommended it for computers using a Windows platform. Once I had mounted the web camera on a mobile computer with Velcro and installed the software, I was ready to capture motion video and analyze it.

  3. High-speed camera observation of multi-component droplet coagulation in an ultrasonic standing wave field

    NASA Astrophysics Data System (ADS)

    Reißenweber, Marina; Krempel, Sandro; Lindner, Gerhard

    2013-12-01

    With an acoustic levitator small particles can be aggregated near the nodes of a standing pressure field. Furthermore it is possible to atomize liquids on a vibrating surface. We used a combination of both mechanisms and atomized several liquids simultaneously, consecutively and emulsified in the ultrasonic field. Using a high-speed camera we observed the coagulation of the spray droplets into single large levitated droplets resolved in space and time. In case of subsequent atomization of two components the spray droplets of the second component were deposited on the surface of the previously coagulated droplet of the first component without mixing.

  4. Optimizing the input and output transmission lines that gate the microchannel plate in a high-speed framing camera

    NASA Astrophysics Data System (ADS)

    Lugten, John B.; Brown, Charles G.; Piston, Kenneth W.; Beeman, Bart V.; Allen, Fred V.; Boyle, Dustin T.; Brown, Christopher G.; Cruz, Jason G.; Kittle, Douglas R.; Lumbard, Alexander A.; Torres, Peter; Hargrove, Dana R.; Benedetti, Laura R.; Bell, Perry M.

    2015-08-01

    We present new designs for the launch and receiver boards used in a high speed x-ray framing camera at the National Ignition Facility. The new launch board uses a Klopfenstein taper to match the 50 ohm input impedance to the ~10 ohm microchannel plate. The new receiver board incorporates design changes resulting in an output monitor pulse shape that more accurately represents the pulse shape at the input and across the microchannel plate; this is valuable for assessing and monitoring the electrical performance of the assembled framing camera head. The launch and receiver boards maximize power coupling to the microchannel plate, minimize cross talk between channels, and minimize reflections. We discuss some of the design tradeoffs we explored, and present modeling results and measured performance. We also present our methods for dealing with the non-ideal behavior of coupling capacitors and terminating resistors. We compare the performance of these new designs to that of some earlier designs.

  5. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination.

    PubMed

    Bodini, I; Sansoni, G; Lancini, M; Pasinetti, S; Docchio, F

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials for improved lifetime and performance, which are able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials in a laboratory test bench. Measurements of the 3D macro-topography and of the angular position of the specimen are performed simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very short camera exposure time makes it possible to obtain blur-free images with excellent definition. The system will be described with the aid of end-cycle specimens, as well as of in-test specimens. PMID:27587125

  6. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination

    NASA Astrophysics Data System (ADS)

    Bodini, I.; Sansoni, G.; Lancini, M.; Pasinetti, S.; Docchio, F.

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials for improved lifetime and performance, which are able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials in a laboratory test bench. Measurements of the 3D macro-topography and of the angular position of the specimen are performed simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very short camera exposure time makes it possible to obtain blur-free images with excellent definition. The system will be described with the aid of end-cycle specimens, as well as of in-test specimens.

  7. High-speed 1280x1024 camera with 12-Gbyte SDRAM memory

    NASA Astrophysics Data System (ADS)

    Postnikov, Konstantin O.; Yakovlev, Alexey V.

    2001-04-01

    A 600 frame/s camera based on a 1.3-Megapixel CMOS sensor (PBMV13) with a wide digital data output bus (10 parallel outputs of 10-bit words) was developed using high-capacity SDRAM memory. This architecture allows 10 seconds of continuous recording of digital data from the sensor at 600 frames per second into a memory box with up to twelve 1-Gbyte SDRAM modules. Acquired data are transmitted through a fibre optic channel, connected to the camera via an FPDP interface, to a PC-type computer at a speed of 100 Mbyte per second over fibre cable lengths of up to 10 km. All camera settings, such as shutter time, frame rate, image size, and presets for changing integration time and frame rate, can be controlled by software. Camera specifications: shutter time - from 3.3 µs up to a full frame in 1.6 µs steps at 600 fps, and then in 1-frame steps down to 16 ms; frame rate - from 60 fps to 600 fps; image size - 1280x1024, 1280x512, 1280x256, or 1280x128, changeable on the fly via a preset two-step table; memory capacity - depends on frame size (6000 frames at 1280x1024 or 48000 frames at 1280x128 resolution). The software can work with monochrome or color versions of the MV13 sensor.
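
    The quoted recording times follow from simple arithmetic on frame size, pixel depth, memory size and frame rate. The sketch below assumes packed 10-bit pixels, decimal gigabytes and no formatting overhead, so it gives an upper bound slightly above the 6000-frame (10 s) figure quoted.

```python
def record_seconds(mem_bytes, width, height, bits_per_pixel, fps):
    """Continuous recording time that fits in memory, ignoring overheads."""
    bytes_per_frame = width * height * bits_per_pixel / 8.0
    return mem_bytes / bytes_per_frame / fps

# 12 GB of SDRAM, 1280x1024 pixels, 10-bit data, 600 frames per second
print(f"{record_seconds(12e9, 1280, 1024, 10, 600):.1f} s")   # ~12 s upper bound
```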

  8. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, making for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen. Digital-to-analog converters provide digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user, but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube’s phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time. During this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electron’s path in horizontal and vertical (x axis and y axis, respectively) directions. Hence, each frame’s electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates. To prevent the phosphor from being exposed between frames, the image tube

  9. An unmanned watching system using video cameras

    SciTech Connect

    Kaneda, K.; Nakamae, E.; Takahashi, E.; Yazawa, K.

    1990-04-01

    Techniques for detecting intruders at a remote location, such as a power plant or substation, or in an unmanned building at night, are significant in the field of unmanned watching systems. This article describes an unmanned watching system to detect trespassers in real time, applicable both indoors and outdoors, based on image processing. The main part of the proposed system consists of a video camera, an image processor and a microprocessor. Images are input from the video camera to the image processor every 1/60 second, and objects which enter the image are detected by measuring changes of intensity level in selected sensor areas. This article discusses the system configuration and the detection method. Experimental results under a range of environmental conditions are given.
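
    A hedged sketch of the detection principle described (region coordinates, threshold and frames are invented for illustration): the mean intensity of each predefined sensor area is compared between consecutive frames, and a change beyond a threshold flags a possible intruder.

```python
import numpy as np

# Sensor areas given as (row_slice, col_slice) in the image (illustrative)
SENSOR_AREAS = [(slice(100, 140), slice(200, 260)),
                (slice(300, 360), slice(50, 120))]
THRESHOLD = 12.0   # mean gray-level change treated as an intrusion (assumed)

def check_intrusion(prev_frame, curr_frame):
    """Return indices of sensor areas whose mean intensity changed too much."""
    hits = []
    for i, (rows, cols) in enumerate(SENSOR_AREAS):
        delta = abs(curr_frame[rows, cols].astype(float).mean()
                    - prev_frame[rows, cols].astype(float).mean())
        if delta > THRESHOLD:
            hits.append(i)
    return hits

# Synthetic 480x640 frames sampled every 1/60 s; an "intruder" brightens area 0
prev = np.full((480, 640), 60, dtype=np.uint8)
curr = prev.copy()
curr[100:140, 200:260] += 40
print("triggered sensor areas:", check_intrusion(prev, curr))   # -> [0]
```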

  10. A study on ice crystal formation behavior at intracellular freezing of plant cells using a high-speed camera.

    PubMed

    Ninagawa, Takako; Eguchi, Akemi; Kawamura, Yukio; Konishi, Tadashi; Narumi, Akira

    2016-08-01

    Intracellular ice crystal formation (IIF) causes several problems to cryopreservation, and it is the key to developing improved cryopreservation techniques that can ensure the long-term preservation of living tissues. Therefore, the ability to capture clear intracellular freezing images is important for understanding both the occurrence and the IIF behavior. The authors developed a new cryomicroscopic system that was equipped with a high-speed camera for this study and successfully used this to capture clearer images of the IIF process in the epidermal tissues of strawberry geranium (Saxifraga stolonifera Curtis) leaves. This system was then used to examine patterns in the location and formation of intracellular ice crystals and to evaluate the degree of cell deformation because of ice crystals inside the cell and the growing rate and grain size of intracellular ice crystals at various cooling rates. The results showed that an increase in cooling rate influenced the formation pattern of intracellular ice crystals but had less of an effect on their location. Moreover, it reduced the degree of supercooling at the onset of intracellular freezing and the degree of cell deformation; the characteristic grain size of intracellular ice crystals was also reduced, but the growing rate of intracellular ice crystals was increased. Thus, the high-speed camera images could expose these changes in IIF behaviors with an increase in the cooling rate, and these are believed to have been caused by an increase in the degree of supercooling. PMID:27343136

  11. Machine Vision Techniques For High Speed Videography

    NASA Astrophysics Data System (ADS)

    Hunter, David B.

    1984-11-01

    The priority associated with U.S. efforts to increase productivity has led to, among other things, the development of Machine Vision systems for use in manufacturing automation requirements. Many such systems combine solid state television cameras and data processing equipment to facilitate high speed, on-line inspection and real time dimensional measurement of parts and assemblies. These parts are often randomly oriented and spaced on a conveyor belt under continuous motion. Television imagery of high speed events has historically been achieved by use of pulsed (strobe) illumination or high speed shutter techniques synchronized with a camera's vertical blanking to separate write and read cycle operation. Lack of synchronization between part position and camera scanning in most on-line applications precludes use of this vertical interval illumination technique. Alternatively, many Machine Vision cameras incorporate special techniques for asynchronous, stop-motion imaging. Such cameras are capable of imaging parts asynchronously at rates approaching 60 hertz while remaining compatible with standard video recording units. Techniques for asynchronous, stop-motion imaging have not been incorporated in cameras used for High Speed Videography. Imaging of these events has alternatively been obtained through the utilization of special, high frame rate cameras to minimize motion during the frame interval. High frame rate cameras must undoubtedly be utilized for recording of high speed events occurring at high repetition rates. However, such cameras require very specialized, and often expensive, video recording equipment. It seems, therefore, that Machine Vision cameras with capability for asynchronous, stop-motion imaging represent a viable approach for cost effective video recording of high speed events occurring at repetition rates up to 60 hertz.

  12. Investigations of some aspects of the spray process in a single wire arc plasma spray system using high speed camera.

    PubMed

    Tiwari, N; Sahasrabudhe, S N; Tak, A K; Barve, D N; Das, A K

    2012-02-01

    A high speed camera has been used to record and analyze the evolution as well as particle behavior in a single wire arc plasma spray torch. Commercially available systems (spray watch, DPV 2000, etc.) focus onto a small area in the spray jet. They are not designed for tracking a single particle from the torch to the substrate. Using high speed camera, individual particles were tracked and their velocities were measured at various distances from the spray torch. Particle velocity information at different distances from the nozzle of the torch is very important to decide correct substrate position for the good quality of coating. The analysis of the images has revealed the details of the process of arc attachment to wire, melting of the wire, and detachment of the molten mass from the tip. Images of the wire and the arc have been recorded for different wire feed rates, gas flow rates, and torch powers, to determine compatible wire feed rates. High speed imaging of particle trajectories has been used for particle velocity determination using time of flight method. It was observed that the ripple in the power supply of the torch leads to large variation of instantaneous power fed to the torch. This affects the velocity of the spray particles generated at different times within one cycle of the ripple. It is shown that the velocity of a spray particle depends on the instantaneous torch power at the time of its generation. This correlation was established by experimental evidence in this paper. Once the particles leave the plasma jet, their forward speeds were found to be more or less invariant beyond 40 mm up to 500 mm from the nozzle exit. PMID:22380128
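
    A minimal sketch of the time-of-flight velocity estimate mentioned above (all numbers are illustrative, not taken from the paper): the particle centroid is located in successive frames, and the velocity follows from the pixel displacement, the image scale and the frame interval.

```python
fps = 20000.0           # camera frame rate, Hz (assumed)
mm_per_pixel = 0.25     # image scale (assumed)

# Centroid x-positions (pixels) of one tracked particle in consecutive frames
x_pix = [102.0, 118.5, 135.2, 151.6, 168.1]

dt = (len(x_pix) - 1) / fps                      # elapsed time, s
dx_mm = (x_pix[-1] - x_pix[0]) * mm_per_pixel    # distance travelled, mm
print(f"particle velocity ~ {dx_mm / 1000.0 / dt:.1f} m/s")
```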

  13. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  14. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
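
    A hedged sketch of the two steps the study relies on (signal parameters are illustrative, not the paper's): the region-summed gray-level signal is Fourier transformed to find vibration peaks, and the apparent frequency that a tone above the Nyquist limit would fold down to at a given frame rate is predicted so that such non-physical modes can be excluded.

```python
import numpy as np

def aliased_frequency(f_true, fs):
    """Apparent frequency of a tone f_true when sampled at rate fs (folding)."""
    f = f_true % fs
    return min(f, fs - f)

fs = 240.0                        # camera frame rate, Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of region-summed gray-level samples
signal = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 310 * t)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print("observed peaks:", peaks)                                  # ~50 and ~70 Hz
print("a real 310 Hz mode aliases to", aliased_frequency(310, fs), "Hz")  # 70 Hz
```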

  15. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026

  16. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, and have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm, and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
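
    For reference, indirect time-of-flight recovers depth from the phase shift of the modulated illumination; a minimal sketch of the usual four-bucket estimate is given below, using the 25 MHz modulation quoted and an otherwise invented example. It illustrates the principle only, not the sensor's actual processing chain.

```python
import math

C = 299792458.0    # speed of light, m/s

def itof_distance(c0, c1, c2, c3, f_mod):
    """Distance from four correlation samples taken at 0, 90, 180 and 270 deg
    phase offsets: phase = atan2(c1 - c3, c0 - c2), d = c * phase / (4 pi f)."""
    phase = math.atan2(c1 - c3, c0 - c2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

f_mod = 25e6                                    # 25 MHz modulation frequency
# Synthetic four-bucket samples for a target at 3.0 m (plus a constant offset)
true_phase = 4 * math.pi * f_mod * 3.0 / C
buckets = [100 + 50 * math.cos(true_phase - k * math.pi / 2) for k in range(4)]
print(f"{itof_distance(*buckets, f_mod):.2f} m")   # ~3.00 m
```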

  17. Ultra-high speed and low latency broadband digital video transport

    NASA Astrophysics Data System (ADS)

    Stufflebeam, Joseph L.; Remley, Dennis M.; Sullivan, Anthony; Gurrola, Hector

    2004-07-01

    Various approaches for transporting digital video over Ethernet and SONET networks are presented. Commercial analog and digital frame grabbers are utilized, as well as software running under Microsoft Windows 2000/XP. No other specialized hardware is required. A network configuration using independent VLANs for video channels provides efficient transport for high bandwidth data. A framework is described for implementing both uncompressed and compressed streaming with standard and non-standard video. NTSC video is handled as well as other formats that include high resolution CMOS, high bit-depth infrared, and high frame rate parallel digital. End-to-end latencies of less than 200 msec are achieved.
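
    A back-of-the-envelope sketch of the bandwidth planning such a framework implies (formats and link speed are examples, not the paper's configurations): the uncompressed rate is width × height × bit depth × frame rate, which determines whether a stream fits a given link or VLAN.

```python
def uncompressed_mbps(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video bandwidth in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

streams = {
    "NTSC-like 640x480, 16 bpp, 30 fps":   (640, 480, 16, 30),
    "Infrared 320x256, 14 bpp, 60 fps":    (320, 256, 14, 60),
    "CMOS 1280x1024, 8 bpp, 100 fps":      (1280, 1024, 8, 100),
}
link_mbps = 1000.0   # one gigabit link (illustrative)
for name, fmt in streams.items():
    rate = uncompressed_mbps(*fmt)
    print(f"{name}: {rate:7.1f} Mb/s   fits 1 GbE: {rate < link_mbps}")
```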

  18. High speed wide field CMOS camera for Transneptunian Automatic Occultation Survey

    NASA Astrophysics Data System (ADS)

    Wang, Shiang-Yu; Geary, John C.; Amato, Stephen M.; Hu, Yen-Sang; Ling, Hung-Hsu; Huang, Pin-Jie; Furesz, Gabor; Chen, Hsin-Yo; Chang, Yin-Chang; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy

    2014-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a three robotic telescope project to detect the stellar occultation events generated by Trans-Neptunian Objects (TNOs). The TAOS II project aims to monitor about 10000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7 degree diameter field of view (FoV) of the 1.3 m telescope with 10 mosaic 4.5k×2k CMOS sensors. The new CMOS sensor has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. The sensor provides two parallel and eight serial decoders, so the regions of interest can be addressed and read out separately through different output channels efficiently. The pixel scale is about 0.6"/pix with the 16 μm pixels. The sensors, mounted on a single Invar plate, are cooled to an operating temperature of about 200 K by a cryogenic cooler. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The deformation of the cold plate is less than 10 μm to ensure the sensor surface is always within ±40 μm of the focus range. The control electronics consist of an analog part and a Xilinx FPGA-based digital circuit. For each field star, an 8×8 pixel box will be read out. The pixel rate for each channel is about 1 Mpix/s and the total pixel rate for each camera is about 80 Mpix/s. The FPGA module will calculate the total flux and also the centroid coordinates for every field star in each exposure.

  19. An Investigation On The Problems Of The Intermittent High-Speed Camera Of 360 Frames/S

    NASA Astrophysics Data System (ADS)

    Zhihong, Rong

    1989-06-01

    This paper discusses several problems concerning the JX-300 intermittent synchronous high-speed camera developed by the Institute of Optics and Electronics (IOE), Academia Sinica, in 1985. It is shown that when the framing rate is no more than 120 frames/s, relatively high reliability is obtained, owing to the low acceleration of the moving elements, weak intermittent pulldown forces, low-frequency vibration, etc. When the framing rate increases to over 200 frames/s, the photographic resolving power, as well as the film running reliability, decreases due to the dramatic increase in vibration and pulldown force, similar to what occurs in stationary photography. The situation worsens when the framing rate approaches 300 frames/s. Therefore, careful selection of a claw mechanism capable of framing rates above 300 frames/s, together with a series of technical measures, is particularly important for a camera to reliably obtain a sharp image; otherwise an intermittent camera can hardly reach a framing rate of 300 frames/s. Even if this framing rate is attained, the image quality is degraded and the mechanism wears rapidly from the high vibration.

  20. Characterization of calculus migration during Ho:YAG laser lithotripsy by high speed camera using suspended pendulum method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian James; Rajabhandharaks, Danop; Xuan, Jason Rongwei; Chia, Ray W. J.; Hasenberg, Tom

    2014-03-01

    Calculus migration is a common problem during ureteroscopic laser lithotripsy procedures to treat urolithiasis. A conventional experimental method to characterize calculus migration utilized a hosting container (e.g. a "V" groove or a test tube). These methods, however, demonstrated large variation and poor detectability, possibly attributable to friction between the calculus and the container on which the calculus was situated. In this study, calculus migration was investigated using a pendulum model suspended under water to eliminate the aforementioned friction. A high speed camera was used to study the movement of the calculus, which covered zero order (displacement), 1st order (speed) and 2nd order (acceleration). A commercialized, pulsed Ho:YAG laser at 2.1 μm, a 365-μm core fiber, and a calculus phantom (Plaster of Paris, 10×10×10 mm cube) were utilized to mimic the laser lithotripsy procedure. The phantom was hung on a stainless steel bar and irradiated by the laser at 0.5, 1.0 and 1.5 J energy per pulse at 10 Hz for 1 second (i.e., 5, 10, and 15 W). Movement of the phantom was recorded by a high-speed camera with a frame rate of 10,000 FPS. Maximum displacement was 1.25 ± 0.10, 3.01 ± 0.52, and 4.37 ± 0.58 mm for 0.5, 1, and 1.5 J energy per pulse, respectively. Using the same laser power, the conventional method showed <0.5 mm total displacement. When reducing the phantom size to 5×5×5 mm (1/8 in volume), the displacement was very inconsistent. The results suggested that using the pendulum model to eliminate the friction improved the sensitivity and repeatability of the experiment. Detailed investigation of calculus movement and other causes of experimental variation will be conducted in a future study.

  1. Optical engineering application of modeled photosynthetically active radiation (PAR) for high-speed digital camera dynamic range optimization

    NASA Astrophysics Data System (ADS)

    Alves, James; Gueymard, Christian A.

    2009-08-01

    As efforts to create accurate yet computationally efficient estimation models for clear-sky photosynthetically active solar radiation (PAR) have succeeded, the range of practical engineering applications where these models can be successfully applied has increased. This paper describes a novel application of the REST2 radiative model (developed by the second author) in optical engineering. The PAR predictions in this application are used to predict the possible range of instantaneous irradiances that could impinge on the image plane of a stationary video camera designed to image license plates on moving vehicles. The overall spectral response of the camera (including lens and optical filters) is similar to the 400-700 nm PAR range, thereby making PAR irradiance (rather than luminance) predictions most suitable for this application. The accuracy of the REST2 irradiance predictions for horizontal surfaces, coupled with another radiative model to obtain irradiances on vertical surfaces, and to standard optical image formation models, enable setting the dynamic range controls of the camera to ensure that the license plate images are legible (unsaturated with adequate contrast) regardless of the time of day, sky condition, or vehicle speed. A brief description of how these radiative models are utilized as part of the camera control algorithm is provided. Several comparisons of the irradiance predictions derived from the radiative model versus actual PAR measurements under varying sky conditions with three Licor sensors (one horizontal and two vertical) have been made and showed good agreement. Various camera-to-plate geometries and compass headings have been considered in these comparisons. Time-lapse sequences of license plate images taken with the camera under various sky conditions over a 30-day period are also analyzed. They demonstrate the success of the approach at creating legible plate images under highly variable lighting, which is the main goal of this

  2. Thermal/structural/optical integrated design for optical window of a high-speed aerial optical camera

    NASA Astrophysics Data System (ADS)

    Zhang, Gaopeng; Yang, Hongtao; Mei, Chao; Shi, Kui; Wu, Dengshan; Qiao, Mingrui

    2015-10-01

    In order to obtain high-quality images from an aero-optical remote sensor, it is important to analyze its thermal-optical performance under conditions of high speed and high altitude. For a key imaging assembly such as the optical window in particular, temperature variations and temperature gradients can result in defocus and aberrations in the optical system, leading to poor image quality. In order to improve the optical performance of the optical window of a high-speed aerial camera, a thermal/structural/optical integrated design method is developed. First, the flight environment of the optical window is analyzed. Based on the theory of aerodynamics and heat transfer, the convective heat transfer coefficient is calculated. The temperature distribution of the optical window is simulated with finite element analysis software, and the maximum temperature difference between the inside and outside of the optical window is obtained. The deformation of the optical window under the boundary condition of this maximum temperature difference is then calculated. The optical window surface deformation is fitted with Zernike polynomials as the interface; the calculated Zernike fitting coefficients are imported into and analyzed with the CodeV optical software. Finally, the transfer function diagrams of the optical system under the temperature field are comparatively analyzed. The analysis shows that the optical path difference caused by thermal deformation of the optical window is 149.6 nm, which is below PV ≤ 1/4 λ. The simulation result meets the requirements of the optical design very well. The above study can be used as an important reference for other optical window designs.

  3. The issue of precision in the measurement of soil splash by a single drop using a high speed camera

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Bieganowski, Andrzej; Polakowski, Cezary; Sochan, Agata

    2014-05-01

    Soil, being the top layer of the Earth's crust and a component of many ecosystems, undergoes continuous degradation. One of the forms of this degradation is water erosion. Erosion is a physical degradation process affecting the soil surface. This process affects not only the environment, but also the productivity and profitability of agriculture. Therefore, understanding the mechanisms of erosion and preventing it is important for agriculture and economy. Erosion has been the subject of many studies among various research teams around the world. The splash is the first stage of water erosion. The splash erosion can be characterised as two subprocesses: detachment of a particle from the soil surface and the transport of the particle in different directions. The aim of this study was to evaluate the reproducibility of the soil splash phenomenon that occurs as a result of the fall of a single drop. Using high-speed cameras, we measured the reproducibility of recorded splash parameters; these included the number and surface of detached particles and the width of the crown formed as a result of the splash. Measurements were carried out on soil samples with different textures taken from the topsoil of two soil profiles in south eastern Poland. After collection, these samples were dried at room temperature, sieved through a 2 mm sieve, and then humidified to three different humidity conditions. Drops of water with a diameter of 4.2 mm freely fell from a height of 1.5 m. Measurements were recorded using a high-speed camera (Vision Research MIRO M310) and the data were recording at 2000 frames per second. The number and surface of detached particles and the resulting width of the crown during the splash were analysed. The measurements demonstrated that: - Soil splash caused by the first drop striking the surface was significantly different from the splash caused by the impact of subsequent drops. This difference was due to the fact that less moisture was present at the time

  4. Large Area Divertor Temperature Measurements Using A High-speed Camera With Near-infrared FiIters in NSTX

    SciTech Connect

    Lyons, B C; Zweben, S J; Gray, T K; Hosea, J; Kaita, R; Kugel, H W; Maqueda, R J; McLean, A G; Roquemore, A L; Soukhanovskii, V A

    2011-04-05

    Fast cameras already installed on the National Spherical Torus Experiment (NSTX) have been equipped with near-infrared (NIR) filters in order to measure the surface temperature in the lower divertor region. Such a system provides a unique combination of high speed (> 50 kHz) and wide field-of-view (> 50% of the divertor). Benchtop calibrations demonstrated the system's ability to measure thermal emission down to 330 °C. There is also, however, significant plasma light background in NSTX. Without improvements in background reduction, the current system is incapable of measuring signals below the background equivalent temperature (600 - 700 °C). Thermal signatures have been detected in cases of extreme divertor heating. It is observed that the divertor can reach temperatures around 800 °C when high harmonic fast wave (HHFW) heating is used. These temperature profiles were fit using a simple heat diffusion code, providing a measurement of the heat flux to the divertor. Comparisons to other infrared thermography systems on NSTX are made.

  5. Scheimpflug camera in the quantitative assessment of reproducibility of high-speed corneal deformation during intraocular pressure measurement.

    PubMed

    Koprowski, Robert; Ambrósio, Renato; Reisdorf, Sven

    2015-11-01

    The paper presents an original method for analyzing corneal deformation images from the ultra-high-speed Scheimpflug camera (Corvis ST tonometer). Particular attention was paid to deformation frequencies exceeding 100 Hz and their reproducibility in healthy subjects examined repeatedly. A total of 4200 images with a resolution of 200 × 576 pixels were recorded. The data were derived from 3 consecutive measurements on 10 volunteers with normal corneas. A new image analysis algorithm, written in Matlab using the Image Processing toolbox, adaptive image filtering, morphological analysis methods and the fast Fourier transform, was proposed. The following results were obtained: (1) reproducibility of the eyeball reaction in healthy subjects with a precision of 10%; (2) corneal vibrations with a frequency of 369 ± 65 Hz; (3) a vibration amplitude of 7.86 ± 1.28 µm; (4) a phase shift of about 150° between two parts of the cornea of the same subject. The results also include the image sequence analysis for one subject and deformations with a corneal frequency response above 100 Hz. PMID:25623926

  6. A compact single-camera system for high-speed, simultaneous 3-D velocity and temperature measurements.

    SciTech Connect

    Lu, Louise; Sick, Volker; Frank, Jonathan H.

    2013-09-01

    The University of Michigan and Sandia National Laboratories collaborated on the initial development of a compact single-camera approach for simultaneously measuring 3-D gas-phase velocity and temperature fields at high frame rates. A compact diagnostic tool is desired to enable investigations of flows with limited optical access, such as near-wall flows in an internal combustion engine. These in-cylinder flows play a crucial role in improving engine performance. Thermographic phosphors were proposed as flow and temperature tracers to extend the capabilities of a novel, compact 3D velocimetry diagnostic to include high-speed thermometry. Ratiometric measurements were performed using two spectral bands of laser-induced phosphorescence emission from BaMg2Al10O17:Eu (BAM) phosphors in a heated air flow to determine the optimal optical configuration for accurate temperature measurements. The originally planned multi-year research project ended prematurely after the first year due to the Sandia-sponsored student leaving the research group at the University of Michigan.

  7. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumping using a low-cost high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with a HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed a perfect correlation agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the differences (shared variance) obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide similarly valid and reliable measurements of flight time and vertical jump height as more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athlete's vertical jumps. PMID:23689339
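
    The conversion the method rests on is elementary: with symmetric rise and fall during the airborne phase, jump height is h = g t² / 8, where t is the flight time. The sketch below uses the 240 fps rate of the camera employed; the frame numbers are invented for illustration.

```python
G = 9.81       # gravitational acceleration, m/s^2
FPS = 240.0    # frame rate of the Casio FH-25 recordings used in the study

def jump_height_from_frames(takeoff_frame, landing_frame, fps=FPS):
    """Vertical jump height from flight time counted in video frames:
    h = g * t^2 / 8 for a symmetric rise and fall."""
    flight_time = (landing_frame - takeoff_frame) / fps
    return G * flight_time ** 2 / 8.0

# Example: take-off at frame 312, landing at frame 441 (illustrative numbers)
t = (441 - 312) / FPS
print(f"flight time = {t:.3f} s, height = {jump_height_from_frames(312, 441):.3f} m")
```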

  8. Initial laboratory evaluation of color video cameras: Phase 2

    SciTech Connect

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  9. Time-synchronized high-speed video images, electric fields, and currents of rocket-and-wire triggered lightning

    NASA Astrophysics Data System (ADS)

    Biagi, C. J.; Hill, J. D.; Jordan, D. M.; Uman, M. A.; Rakov, V. A.

    2009-12-01

    We present novel observations of 20 classically-triggered lightning flashes from the 2009 summer season at the International Center for Lightning Research and Testing (ICLRT) in north-central Florida. We focus on: (1) upward positive leaders (UPL), (2) current decreases and current reflections associated with the destruction of the triggering wire, and (3) dart-stepped leader propagation involving space stems or space leaders ahead of the leader tip. High-speed video data were acquired 440 m from the triggered lightning using a Phantom v7.1 operating at frame rates of up to 10 kfps (90 µs frame time) with a field of view from ground to an altitude of 325 m and a Photron SA1.1 operating at frame rates of up to 300 kfps (3.3 µs frame time) that viewed from ground to an altitude of 120 m. These data were acquired along with time-synchronized measurements of electric field (dc to 3 MHz) and channel-base current (dc to 8 MHz). The sustained UPLs developed when the rockets were between altitudes of 100 m and 200 m, and accelerated from about 10⁴ to 10⁵ m s⁻¹ from the top of the triggering wire to an altitude of 325 m. In each successive 10 kfps high-speed video image, the newly formed UPL channels were brighter than the previously established channel and the new channel segments were longer. The UPLs in two flashes were imaged at a frame rate of 300 kfps from the top of the wire to about 10 m above the wire (110 m to 120 m above ground). In these images the UPL developed in a stepped manner with luminosity waves traveling from the channel tip back toward the wire during a time of 2 to 3 frames (6.6 µs to 9.9 µs). The new channel segments were on average 1 m in length and the average interstep interval was 23 µs. During 13 of the 20 initial continuous currents, an abrupt current decrease and the beginning of the wire illumination (due to its melting) occurred simultaneously to within 1 high-speed video frame (between 3.3 µs and 10 µs). For two of the triggered

  10. Observation of diesel spray by pseudo-high-speed photography

    NASA Astrophysics Data System (ADS)

    Umezu, Seiji; Oka, Mohachiro

    2001-04-01

    Pseudo-high-speed photography has been developed to observe intermittent, periodic, high-speed phenomena such as diesel spray. The main device of this technique is an Automatic Variable Retarder (AVR), which gradually delays the timing between diesel injection and the strobe spark using a micrometer. This technique enables us to observe diesel spray development just as in images taken by a high-speed video camera. This paper describes the principle of pseudo-high-speed photography, experimental results of its application to diesel spray, and an analysis of the diesel atomization mechanism.
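
    To make the sampling idea concrete, the sketch below generates the kind of strobe-delay sweep an AVR would apply across successive injection cycles; the frame count and delay step are arbitrary illustrative values, not settings from the paper.

    ```python
    def strobe_delays(n_frames: int, step_us: float, start_us: float = 0.0):
        """Delay (µs) between the injection trigger and the strobe flash for each successive
        injection cycle; sweeping this delay reconstructs the spray development as if it had
        been filmed by a high-speed camera, one frame per injection."""
        return [start_us + i * step_us for i in range(n_frames)]

    # Hypothetical sweep: 100 frames spaced 10 µs apart give an effective 100,000 fps
    # reconstruction of a repetitive spray event.
    print(strobe_delays(5, 10.0))  # [0.0, 10.0, 20.0, 30.0, 40.0]
    ```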

  11. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    PubMed Central

    Leitritz, Martin Alexander; Ziemssen, Focke; Bartz-Schmidt, Karl Ulrich; Voykov, Bogomil

    2014-01-01

    Purpose To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods A high-speed camera taking 300 frames per second observed movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were −0.32 mm (range −0.69 to 0.024) and 0.175 mm (range −0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurements of iris-claw intraocular lenses and angle-supported intraocular lenses movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility during kinetic stress during eye movements. Long-term assessment by high-speed analysis with higher case numbers has to clarify the relationship between progressing motility and endothelial cell damage. PMID:25071365

  12. Implicit Memory in Monkeys: Development of a Delay Eyeblink Conditioning System with Parallel Electromyographic and High-Speed Video Measurements

    PubMed Central

    Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka

    2015-01-01

    Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) incidence of more than 60% of trials during the 5-day acquisition phase and an extinguished CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system incorporating two different measuring principles enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates. PMID:26068663

  13. Implicit Memory in Monkeys: Development of a Delay Eyeblink Conditioning System with Parallel Electromyographic and High-Speed Video Measurements.

    PubMed

    Kishimoto, Yasushi; Yamamoto, Shigeyuki; Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka

    2015-01-01

    Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) incidence of more than 60% of trials during the 5-day acquisition phase and an extinguished CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system incorporating two different measuring principles enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates. PMID:26068663

  14. Fused Six-Camera Video of STS-134 Launch

    NASA Video Gallery

    Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

  15. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  16. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  17. Station Cameras Capture New Videos of Hurricane Katia

    NASA Video Gallery

    Aboard the International Space Station, external cameras captured new video of Hurricane Katia as it moved northwest across the western Atlantic north of Puerto Rico at 10:35 a.m. EDT on September ...

  18. High-speed video gait analysis reveals early and characteristic locomotor phenotypes in mouse models of neurodegenerative movement disorders.

    PubMed

    Preisig, Daniel F; Kulic, Luka; Krüger, Maik; Wirth, Fabian; McAfoose, Jordan; Späni, Claudia; Gantenbein, Pascal; Derungs, Rebecca; Nitsch, Roger M; Welt, Tobias

    2016-09-15

    Neurodegenerative diseases of the central nervous system frequently affect the locomotor system resulting in impaired movement and gait. In this study we performed a whole-body high-speed video gait analysis in three different mouse lines of neurodegenerative movement disorders to investigate the motor phenotype. Based on precise computerized motion tracking of all relevant joints and the tail, a custom-developed algorithm generated individual and comprehensive locomotor profiles consisting of 164 spatial and temporal parameters. Gait changes observed in the three models corresponded closely to the classical clinical symptoms described in these disorders: Muscle atrophy due to motor neuron loss in SOD1 G93A transgenic mice led to gait characterized by changes in hind-limb movement and positioning. In contrast, locomotion in huntingtin N171-82Q mice modeling Huntington's disease with basal ganglia damage was defined by hyperkinetic limb movements and rigidity of the trunk. Harlequin mutant mice modeling cerebellar degeneration showed gait instability and extensive changes in limb positioning. Moreover, model specific gait parameters were identified and were shown to be more sensitive than conventional motor tests. Altogether, this technique provides new opportunities to decipher underlying disease mechanisms and test novel therapeutic approaches. PMID:27233823

  19. Synchronised electrical monitoring and high speed video of bubble growth associated with individual discharges during plasma electrolytic oxidation

    NASA Astrophysics Data System (ADS)

    Troughton, S. C.; Nominé, A.; Nominé, A. V.; Henrion, G.; Clyne, T. W.

    2015-12-01

    Synchronised electrical current and high speed video information are presented from individual discharges on Al substrates during PEO processing. Exposure time was 8 μs and linear spatial resolution 9 μm. Image sequences were captured for periods of 2 s, during which the sample surface was illuminated with short duration flashes (revealing bubbles formed where the discharge reached the surface of the coating). Correlations were thus established between discharge current, light emission from the discharge channel and (externally-illuminated) dimensions of the bubble as it expanded and contracted. Bubbles reached radii of 500 μm, within periods of 100 μs, with peak growth velocity about 10 m/s. It is deduced that bubble growth occurs as a consequence of the progressive volatilisation of water (electrolyte), without substantial increases in either pressure or temperature within the bubble. Current continues to flow through the discharge as the bubble expands, and this growth (and the related increase in electrical resistance) is thought to be responsible for the current being cut off (soon after the point of maximum radius). A semi-quantitative audit is presented of the transformations between different forms of energy that take place during the lifetime of a discharge.

  20. BEHAVIORAL INTERACTIONS OF THE BLACK IMPORTED FIRE ANT (SOLENOPSIS RICHTERI FOREL) AND ITS PARASITOID FLY (PSEUDACTEON CURVATUS BORGMEIER) AS REVEALED BY HIGH-SPEED VIDEO.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    High-speed video recordings were used to study the interactions between the phorid fly (Pseudacteon curvatus), and the black imported fire ant (Solenopsis richteri) in the field. Phorid flies are extremely fast agile fliers that can hover and fly in all directions. Wingbeat frequency recorded with...

  1. A new ex vivo beating heart model to investigate the application of heart valve performance tools with a high-speed camera.

    PubMed

    Kondruweit, Markus; Friedl, Sven; Heim, Christian; Wittenberg, Thomas; Weyand, Michael; Harig, Frank

    2014-01-01

    High-speed camera examination of heart valves is an established technique for evaluating heart valve prostheses. The aim of this study was to examine whether new tools for high-speed camera analysis of heart valve behavior can be transferred to a porcine ex vivo beating heart model under near-physiological conditions. After explantation of the piglet heart, the main coronary arteries were cannulated and the heart was reperfused with the previously collected donor blood. When the heart started beating in sinus rhythm again, the motion of the aortic and mitral valves was recorded using a digital high-speed camera system (recording rate 2,000 frames/sec). The image sequences of the mitral valve were analyzed, and digital kymograms were calculated at different angles for exact analysis of the different closure phases. The image sequence of the aortic valve was analyzed, and several active-contour (snake) segmentations were performed to determine the effective orifice area over time. Both processing tools were successfully applied to examine heart valves in this ex vivo beating heart model. We were able to determine the exact opening and closure times of the mitral valve, as well as the projected effective orifice area of the aortic valve over time. High-speed camera investigation of heart valve behavior in an ex vivo beating heart model is feasible and worthwhile, particularly given processing features such as kymography for exact analysis. These analytical techniques might help to optimize reconstructive surgery of the mitral valve and the development of heart valve prostheses in the future. PMID:24270227
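
    A digital kymogram of the kind mentioned above can be thought of as the same scan line extracted from every frame and stacked over time. The Python sketch below shows that construction only; the frame rate, scan-line row, and placeholder frames are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def digital_kymogram(frames, row, col_range=None):
        """Build a kymogram from an image sequence: extract the same scan line (a pixel row
        across the valve) from every frame and stack the lines over time, so opening and
        closing phases appear as bands."""
        frames = np.asarray(frames)  # shape: (n_frames, height, width)
        if col_range is None:
            return frames[:, row, :]
        return frames[:, row, col_range[0]:col_range[1]]

    # Hypothetical use: a 2,000 fps recording with a scan line through the leaflets at row 240.
    video = np.random.randint(0, 255, size=(400, 480, 640), dtype=np.uint8)  # placeholder frames
    kymo = digital_kymogram(video, row=240)
    print(kymo.shape)  # (400, 640): one line per frame; 400 frames = 0.2 s at 2,000 fps
    ```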

  2. A comparison of DIC and grid measurements for processing spalling tests with the VFM and an 80-kpixel ultra-high speed camera

    NASA Astrophysics Data System (ADS)

    Saletti, D.; Forquin, P.

    2016-05-01

    Over the last decades, the spalling technique has been used more and more to characterize the tensile strength of geomaterials at high strain rates. In 2012, a new processing technique was proposed by Pierron and Forquin [1] to measure the stress level and apparent Young's modulus in a concrete sample by means of an ultra-high-speed camera, a grid bonded onto the sample, and the Virtual Fields Method. However, the possible benefit of using the DIC (Digital Image Correlation) technique instead of the grid method has not been investigated. In the present work, spalling experiments were performed on two aluminum alloy samples with an HPV1 (Shimadzu) ultra-high-speed camera providing a 1 Mfps maximum recording frequency and about 80 kpixel spatial resolution. A grid with 1 mm pitch was bonded onto the first sample, whereas a speckle pattern covered the second sample for DIC measurements. Both methods were evaluated in terms of displacement and acceleration measurements by comparing the experimental data to laser interferometer measurements. In addition, the stress and strain levels in a given cross-section were compared to the experimental data provided by a strain gage glued on each sample. The measurements allow a discussion of the benefit of each technique (grid and DIC) for obtaining the stress-strain relationship when using an 80-kpixel ultra-high-speed camera.

  3. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  4. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  5. High-speed video of competing and cut-off leaders prior to "upward illumination-type" lightning ground strokes

    NASA Astrophysics Data System (ADS)

    Stolzenburg, Maribeth; Marshall, Thomas; Karunarathne, Sumedhe; Karunarathna, Nadeeka; Warner, Tom; Orville, Richard

    2013-04-01

    This study presents evidence to test a hypothesis regarding the physical mechanism resulting in very weak "upward illumination" (UI) type ground strokes occurring within a few milliseconds after a normal return stroke (RS) of a negative lightning flash. As described in previous work [Stolzenburg et al., JGR D15203, 2012], these short duration (< 1 ms) strokes form a new ground connection, without apparent connection to the main RS, over their relatively short (< 3 km) visible upward return path. From a dataset of 170 video flashes acquired in 2011 (captured at 50000 frames per second), we find 20 good UI examples in 18 flashes at 2.5-32.3 km distance from the camera. Average separation values are 1.26 ms and 1.9 km between the ground connections of the UI and main RS. Based on electric field change data for the flashes, the estimated peak current of the UI strokes averages -5.0 kA, about one-third the average value for the preceding RS. In 15 cases the video data show a distinct stepped leader to the UI which develops concurrently with the stepped leader to the main RS. Estimated altitude of the UI leader tip just before the main RS occurs ranges from 0 to 610 m, and in 7 cases steps are visible in the UI leader after the main RS. In most of the examples the RS and UI appear as separate channels for their entire visible portion, but in 5 cases there is a junction indicating the UI leader is a cut-off branch from the main leader. A generalized schematic of the seven main luminosity stages in a typical UI, along with video examples showing each of these stages and electric field change data, will be presented.

  6. Video Cameras in the Ondrejov Flare Spectrograph Results and Prospects

    NASA Astrophysics Data System (ADS)

    Kotrc, P.

    Since 1991 video cameras have been widely used both in image and in spectral data acquisition at the Ondrejov Multichannel Flare Spectrograph. In addition to classical photographic data registration, this kind of detector brought new possibilities, especially for observations of dynamic solar phenomena, and placed new requirements on digitization, archiving, and data processing techniques. The unique complex video system, consisting of four video cameras and auxiliary equipment, was mostly developed, implemented, and used at the Ondrejov observatory. The main advantages and limitations of the system are briefly described from the points of view of its scientific philosophy, intents, and outputs. Some results obtained, experience gained, and future prospects are discussed.

  7. Experimental Comparison of the High-Speed Imaging Performance of an EM-CCD and sCMOS Camera in a Dynamic Live-Cell Imaging Test Case

    PubMed Central

    Beier, Hope T.; Ibey, Bennett L.

    2014-01-01

    The study of living cells may require advanced imaging techniques to track weak and rapidly changing signals. Fundamental to this need is the recent advancement in camera technology. Two camera types, specifically sCMOS and EM-CCD, promise both high signal-to-noise and high speed (>100 fps), leaving researchers with a critical decision when determining the best technology for their application. In this article, we compare two cameras using a live-cell imaging test case in which small changes in cellular fluorescence must be rapidly detected with high spatial resolution. The EM-CCD maintained an advantage of being able to acquire discernible images with a lower number of photons due to its EM-enhancement. However, if high-resolution images at speeds approaching or exceeding 1000 fps are desired, the flexibility of the full-frame imaging capabilities of sCMOS is superior. PMID:24404178

  8. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    PubMed

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique of calibrating camera motions in basketball videos. Our method particularly transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique. PMID:27504515

  9. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically up to about 1 million frames s⁻¹ for a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve these performances; an interesting one is the so-called in situ storage image sensor architecture where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain movable devices as occurs, for instance, in the rotating mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction) since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such kinds of cameras in full-field deformation measurement and identify the best operative conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique here. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.

  10. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    NASA Astrophysics Data System (ADS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the high number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds of up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data are received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to a personal computer.

  11. Study of fiber-tip damage mechanism during Ho:YAG laser lithotripsy by high-speed camera and the Schlieren method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Getzan, Grant; Xuan, Jason R.; Yu, Honggang

    2015-02-01

    Fiber-tip degradation, damage, or burn back is a common problem during the ureteroscopic laser lithotripsy procedure to treat urolithiasis. Fiber-tip burn back results in reduced transmission of laser energy, which greatly reduces the efficiency of stone comminution. In some cases, the fiber-tip degradation is so severe that the damaged fiber-tip will absorb most of the laser energy, which can cause the tip portion to be overheated and melt the cladding or jacket layers of the fiber. Though it is known that the higher the energy density (the ratio of the laser pulse energy to the cross-sectional area of the fiber core), the faster the fiber-tip degradation, the damage mechanism of the fiber-tip is still unclear. In this study, fiber-tip degradation was investigated by visualization of shockwave, cavitation/bubble dynamics, and calculus debris ejection with a high-speed camera and the Schlieren method. A commercialized, pulsed Ho:YAG laser at 2.12 µm, 273/365/550-µm core fibers, and calculus phantoms (Plaster of Paris, 10x10x10 mm cube) were utilized to mimic the laser lithotripsy procedure. Laser-energy-induced shockwaves, cavitation/bubble dynamics, and stone debris ejection were recorded by a high-speed camera with a frame rate of 10,000 to 930,000 fps. The results suggested that using a high-speed camera and the Schlieren method to visualize the shockwave provided valuable information about time-dependent acoustic energy propagation and its interaction with cavitation and calculus. A detailed investigation of acoustic energy beam shaping by fiber-tip modification, and of the interaction between shockwave, cavitation/bubble dynamics, and calculus debris ejection, will be conducted as a future study.

  12. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.

  13. Synchronizing Light Pulses With Video Camera

    NASA Technical Reports Server (NTRS)

    Kalshoven, James E., Jr.; Tierney, Michael; Dabney, Philip

    1993-01-01

    Interface circuit triggers laser or other external source of light to flash in proper frame and field (at proper time) for video recording and playback in "pause" mode. Also increases speed of electronic shutter (if any) during affected frame to reduce visibility of background illumination relative to that of laser illumination.

  14. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…
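
    The free-fall analysis described above reduces to fitting a parabola to the tracked positions and reading off g. A minimal Python sketch, using synthetic frame-by-frame data at an assumed 30 fps:

    ```python
    import numpy as np

    def estimate_g(times_s, heights_m):
        """Fit y(t) = y0 + v0*t - 0.5*g*t^2 to tracked drop positions and return g."""
        a, b, c = np.polyfit(np.asarray(times_s), np.asarray(heights_m), deg=2)
        return -2.0 * a

    # Hypothetical frame-by-frame data: a ball dropped from 2 m, with its position read off
    # every frame of a 30 fps video (a higher frame rate simply gives more samples).
    t = np.arange(0, 0.6, 1.0 / 30.0)
    y = 2.0 - 0.5 * 9.81 * t ** 2
    print(round(estimate_g(t, y), 2))  # 9.81
    ```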

  15. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  16. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The solutions range from space shuttle docking, planetary entry, descent and landing, surveillance, autonomous and manned ground vehicle navigation and 3D imaging through particle obscurants.

  17. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  18. CameraCast: flexible access to remote video sensors

    NASA Astrophysics Data System (ADS)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  19. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images in software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow more complex algorithms to be performed, both in hardware and software.
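
    The software step being moved into hardware is plain frame differencing. A minimal Python sketch of that operation on hypothetical 256 x 256 frames (threshold and target values invented for illustration):

    ```python
    import numpy as np

    def frame_difference(frame_a, frame_b, threshold=10):
        """Subtract two consecutive frames and keep only pixels that changed by more than
        `threshold` counts; this is the software step the project moved into hardware."""
        diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
        return (diff > threshold).astype(np.uint8)

    # Hypothetical 256 x 256 8-bit frames in which a bright target appears.
    prev = np.zeros((256, 256), dtype=np.uint8)
    curr = prev.copy()
    curr[100:110, 120:130] = 200
    print(frame_difference(prev, curr).sum())  # 100 changed pixels
    ```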

  20. High-Speed Video for Investigating Splash Erosion Behaviour: Obtaining Initial Velocity and Angle of Ejections by Tracking Trajectories.

    NASA Astrophysics Data System (ADS)

    Ahn, S.; Doerr, S.; Douglas, P.; Bryant, R.; Hamlett, C.; McHale, G.; Newton, M.; Shirtcliffe, N.

    2012-04-01

    High-speed videography has proven very useful in some splash erosion studies. One methodological problem that arises in its application is the difficulty of tracking a large number of particles in slow motion, especially when the use of automatic tracking software is limited. Because of this problem, some studies simply assume a fixed ejection angle for all particles rather than actually tracking each one. In this contribution, different combinations of variables (e.g., landing position, landing time, or departure position) were compared in order to determine an efficient and sufficiently precise method for trajectory tracking when a large number of particles are being ejected.
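
    Once a particle has been tracked over at least two frames, its initial speed and ejection angle follow from simple ballistics. The Python sketch below shows one way to back out those quantities, neglecting drag; the frame interval and positions are invented illustrative values, not data from the study.

    ```python
    import math

    def launch_velocity_and_angle(x0, y0, x1, y1, dt, g=9.81):
        """Initial speed (m/s) and ejection angle (degrees above horizontal) of a droplet
        from two tracked positions one frame apart. Drag is neglected; positions are in
        metres (y up) and dt in seconds."""
        vx = (x1 - x0) / dt
        vy = (y1 - y0) / dt + 0.5 * g * dt  # correct the mean vertical velocity back to t0
        return math.hypot(vx, vy), math.degrees(math.atan2(vy, vx))

    # Hypothetical track at 2,000 fps (dt = 0.5 ms): the droplet moves 1.0 mm right and 1.5 mm up.
    print(launch_velocity_and_angle(0.0, 0.0, 0.001, 0.0015, 0.0005))  # ~3.6 m/s at ~56 degrees
    ```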

  1. Relationship between structures of sprite streamers and inhomogeneity of preceding halos captured by high-speed camera during a combined aircraft and ground-based campaign

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sato, M.; Kudo, T.; Shima, Y.; Kobayashi, N.; Inoue, T.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Yair, Y.; Lyons, W. A.; Cummer, S. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The relationship between diffuse glows, such as elves and sprite halos, and the subsequent discrete structure of sprite streamers is considered one of the keys to resolving the mechanism behind the large variation in sprite structures. However, it is not easy to image both the diffuse and discrete structures simultaneously at high frame rates, since this requires high sensitivity, high spatial resolution, and a high signal-to-noise ratio. To capture the true spatial structure of TLEs without the influence of atmospheric absorption, spacecraft would be the best solution. However, since imaging observations from space are mostly made for TLEs that appear near the horizon, the range from the spacecraft to the TLEs becomes large, up to a few thousand km, resulting in low spatial resolution. An aircraft, by contrast, can approach a thunderstorm to within a few hundred km or less and can carry heavy high-speed cameras with large data memories. During the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of the NHK Cosmic Shore project, was carried out with two jet airplanes in collaboration between NHK (Japan Broadcasting Corporation) and universities. On 8 of the 16 standby nights, the jets took off from the airport near Denver, Colorado, and an airborne high-speed camera captured over 40 TLE events at a frame rate of 8,300 frames per second. Here we present the time development of sprite streamers and both the large-scale and fine structures of the preceding halos showing inhomogeneity, suggesting a mechanism that causes the large variation in sprite types, such as crown-like sprites.

  2. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An ET Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40% field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank on the side opposite the orbiter, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  3. Fast roadway detection using car cabin video camera

    NASA Astrophysics Data System (ADS)

    Krokhina, Daria; Blinov, Veniamin; Gladilin, Sergey; Tarhanov, Ivan; Postnikov, Vassili

    2015-12-01

    We describe a fast method for road detection in images from a vehicle cabin camera. A straight section of roadway is detected using the Fast Hough Transform and dynamic programming. We assume that the location of the horizon line in the image and the road pattern are known. The developed method is fast enough to detect the roadway in each frame of the video stream in real time and may be further accelerated by the use of tracking.

  4. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
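
    Once line-to-model correspondences are established, the frame-to-court mapping reduces to a homography, which can be solved from four point correspondences (e.g., court-line intersections). The sketch below shows that final step only, using OpenCV; the pixel coordinates and the volleyball-sized court model (9 m × 18 m) are assumptions for illustration, not the paper's pipeline.

    ```python
    import numpy as np
    import cv2

    # Hypothetical correspondences: four court-line intersections located in the image (pixels)
    # and their positions in an assumed volleyball-sized court model (9 m x 18 m). In the paper's
    # pipeline these would come from the Hough-based line detection and line-to-model matching.
    image_pts = np.float32([[102, 410], [530, 398], [580, 120], [140, 112]])
    court_pts = np.float32([[0.0, 0.0], [9.0, 0.0], [9.0, 18.0], [0.0, 18.0]])

    H = cv2.getPerspectiveTransform(image_pts, court_pts)  # image -> court homography

    def to_court_coords(H, x, y):
        """Map an image pixel to real-world court coordinates."""
        p = H @ np.array([x, y, 1.0])
        return float(p[0] / p[2]), float(p[1] / p[2])

    print(to_court_coords(H, 320, 260))  # a player position expressed in court metres
    ```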

  5. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
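
    A minimal sketch of the final classification stage: a decision tree over per-clip compressed-domain features. The feature names and values below are invented placeholders, not data from the paper; only the overall structure (features in, class label out) follows the description above.

    ```python
    from sklearn.tree import DecisionTreeClassifier

    # Per-clip feature vectors: [replay_shots, text_area_fraction, mean_camera_pan, motion_variance]
    X_train = [
        [3, 0.04, 6.2, 1.8],  # sports
        [4, 0.06, 7.0, 2.1],  # sports
        [0, 0.15, 0.8, 0.3],  # news
        [0, 0.01, 1.5, 0.9],  # movie
    ]
    y_train = ["sports", "sports", "news", "movie"]

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print(clf.predict([[2, 0.05, 5.5, 1.7]]))  # ['sports'] with this toy training set
    ```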

  6. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

    Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BV RI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.

  7. Determination of pulsed-source cloud size/rise information using high-speed, low-speed, and digitized-video photography techniques

    NASA Astrophysics Data System (ADS)

    Magiawala, Kiran R.; Schatzle, Paul R.; Petach, Michael B.; Figueroa, Miguel A.; Peabody, Alden S., II

    1993-01-01

    This paper discusses a laboratory method based on generating a buoyant thermal cloud by explosively bursting an aluminum foil with a rapid electric discharge. The required electric energy is stored in a bank of capacitors and is discharged into the foil through a trigger circuit on external command. The aluminum first vaporizes and becomes an aluminum gas plasma at high temperature (approximately 8000 K), which then mixes with the surrounding air and ignites. The cloud containing these hot combustion products rises in an unstratified anechoic environment. As the cloud rises, it entrains air from the surroundings due to turbulent mixing and grows. To characterize this cloud rise, three different photographic techniques are used: high-speed photography (6000 fps), low-speed photography (200 fps), and video photography (30 fps). These techniques cover various time scales in the foil-firing schedule, from early times (up to 10 msec) to late times (up to 4 sec). Images obtained by the video photography technique have been processed into a digital format. In digitizing the videotape data, an optical video disk player/recorder was used together with PC-based frame grabber hardware. A simple software routine was developed to obtain cloud size/rise data based on an edge detection technique.
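
    The cloud size/rise extraction described above can be approximated by thresholding each digitized frame and reading off the bounding box of the bright region. A minimal Python sketch under those assumptions (threshold and frame contents invented for illustration):

    ```python
    import numpy as np

    def cloud_extent(frame, threshold):
        """Crude edge/threshold detection on a digitized frame: return the bounding box
        (rows, cols) of pixels brighter than `threshold`, from which cloud rise (height)
        and size (width) can be read off frame by frame."""
        mask = np.asarray(frame) > threshold
        rows = np.flatnonzero(mask.any(axis=1))
        cols = np.flatnonzero(mask.any(axis=0))
        if rows.size == 0:
            return None
        return (int(rows[0]), int(rows[-1])), (int(cols[0]), int(cols[-1]))

    # Hypothetical 8-bit frame containing a bright cloud region.
    frame = np.zeros((240, 320), dtype=np.uint8)
    frame[40:90, 100:180] = 180
    print(cloud_extent(frame, threshold=128))  # ((40, 89), (100, 179))
    ```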

  8. Comparison of high speed imaging technique to laser vibrometry for detection of vibration information from objects

    NASA Astrophysics Data System (ADS)

    Paunescu, Gabriela; Lutzmann, Peter; Göhler, Benjamin; Wegner, Daniel

    2015-10-01

    The development of camera technology in recent years has made high speed imaging a reliable method in vibration and dynamic measurements. The passive recovery of vibration information from high speed video recordings was reported in several recent papers. A highly developed technique, involving decomposition of the input video into spatial subframes to compute local motion signals, allowed an accurate sound reconstruction. A simpler technique based on image matching for vibration measurement was also reported as efficient in extracting audio information from a silent high speed video. In this paper we investigate and discuss the sensitivity and the limitations of the high speed imaging technique for vibration detection in comparison to the well-established Doppler vibrometry technique. Experiments on the extension of the high speed imaging method to longer range applications are presented.

  9. A low-bandwidth graphical user interface for high-speed triage of potential items of interest in video imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Martin, Kevin; Chen, Yang

    2013-06-01

    In this paper, we introduce a user interface called the "Threat Chip Display" (TCD) for rapid human-in-the-loop analysis and detection of "threats" in high-bandwidth imagery and video from a list of "Items of Interest" (IOI), which includes objects, targets and events that the human is interested in detecting and identifying. Typically some front-end algorithm (e.g., computer vision, cognitive algorithm, EEG RSVP based detection, radar detection) has been applied to the video and has pre-processed and identified a potential list of IOI. The goal of the TCD is to facilitate rapid analysis and triaging of this list of IOI to detect and confirm actual threats. The layout of the TCD is designed for ease of use, fast triage of IOI, and a low bandwidth requirement. Additionally, a very low mental demand allows the system to be run for extended periods of time.

  10. A multiscale product approach for an automatic classification of voice disorders from endoscopic high-speed videos.

    PubMed

    Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Joerg

    2013-01-01

    Direct observation of vocal fold vibration is indispensable for a clinical diagnosis of voice disorders. Among current imaging techniques, high-speed videoendoscopy constitutes a state-of-the-art method, capturing several thousand frames per second of the vocal folds during phonation. Recently, a method was presented for extracting descriptive features from phonovibrograms, two-dimensional images containing the spatio-temporal pattern of vocal fold dynamics. The derived features are closely related to a clinically established protocol for the functional assessment of pathologic voices. The discriminative power of these features for different pathologic findings and configurations had not yet been assessed. In the current study, a cohort of 220 subjects is considered for two- and multi-class problems of healthy and pathologic findings. The proposed feature set was compared to conventional feature reduction routines and found to clearly outperform them. As such, the proposed procedure shows great potential for the diagnosis of vocal fold disorders. PMID:24111445

  11. A new paradigm for video cameras: optical sensors

    NASA Astrophysics Data System (ADS)

    Grottle, Kevin; Nathan, Anoo; Smith, Catherine

    2007-04-01

    This paper presents a new paradigm for the utilization of video surveillance cameras as optical sensors to augment and significantly improve the reliability and responsiveness of chemical monitoring systems. Incorporated into a hierarchical tiered sensing architecture, cameras serve as 'Tier 1' or 'trigger' sensors monitoring for visible indications after a release of warfare or industrial toxic chemical agents. No single sensor today yet detects the full range of these agents, but the result of exposure is harmful and yields visible 'duress' behaviors. Duress behaviors range from simple to complex types of observable signatures. By incorporating optical sensors in a tiered sensing architecture, the resulting alarm signals based on these behavioral signatures increases the range of detectable toxic chemical agent releases and allows timely confirmation of an agent release. Given the rapid onset of duress type symptoms, an optical sensor can detect the presence of a release almost immediately. This provides cues for a monitoring system to send air samples to a higher-tiered chemical sensor, quickly launch protective mitigation steps, and notify an operator to inspect the area using the camera's video signal well before the chemical agent can disperse widely throughout a building.

  12. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  13. Automatic radial distortion correction in zoom lens video camera

    NASA Astrophysics Data System (ADS)

    Kim, Daehyun; Shin, Hyoungchul; Oh, Juhyun; Sohn, Kwanghoon

    2010-10-01

    We present a novel method for automatically correcting the radial lens distortion in a zoom lens video camera system. We first define the zoom lens distortion model using an inherent characteristic of the zoom lens. Next, we sample some video frames with different focal lengths and estimate their radial distortion parameters and focal lengths. We then optimize the zoom lens distortion model with preestimated parameter pairs using the least-squares method. For more robust optimization, we divide the sample images into two groups according to distortion types (i.e., barrel and pincushion) and then separately optimize the zoom lens distortion models with respect to divided groups. Our results show that the zoom lens distortion model can accurately represent the radial distortion of a zoom lens.
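
    For context, radial distortion is commonly modeled with a low-order polynomial in the squared radius about the distortion center. The sketch below applies one such correction in Python; the coefficients, the center, and the correction direction are illustrative assumptions, not the parameters or exact model estimated in the paper.

    ```python
    import numpy as np

    def undistort_points(points, k1, k2, center):
        """Correct radial distortion of pixel coordinates with the common polynomial
        approximation x_corrected = x * (1 + k1*r^2 + k2*r^4) about the distortion center."""
        pts = np.asarray(points, dtype=float) - center
        r2 = np.sum(pts ** 2, axis=1, keepdims=True)
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2
        return pts * factor + center

    # Invented coefficients and center, for illustration only.
    center = np.array([320.0, 240.0])
    distorted = np.array([[50.0, 60.0], [600.0, 420.0]])
    print(undistort_points(distorted, k1=1.2e-7, k2=0.0, center=center))
    ```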

  14. Non-mydriatic, wide field, fundus video camera

    NASA Astrophysics Data System (ADS)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2mm from a circular field with 20° in diameter to a square field with 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  15. Scientists Behind the Camera - Increasing Video Documentation in the Field

    NASA Astrophysics Data System (ADS)

    Thomson, S.; Wolfe, J.

    2013-12-01

    Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflicting demands of the time, space, storage needs of scientists in the field and the demands of shooting high quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14 page guide for scientists shooting in the field based on lessons learned - it contains key tips and best practice techniques for shooting high quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  16. High Speed Video Data Acquisition System (VDAS) for H. E. P. , including Reference Frame Subtractor, Data Compactor and 16 megabyte FIFO

    SciTech Connect

    Knickerbocker, K.L.; Baumbaugh, A.E.; Ruchti, R.; Baumbaugh, B.W.

    1987-02-01

    A Video-Data-Acquisition-System (VDAS) has been developed to record image data from a scintillating glass fiber-optic target developed for High Energy Physics. VDAS consists of a combination flash ADC, reference frame subtractor, high speed data compactor, an N megabyte First-In-First-Out (FIFO) memory (where N is a multiple of 4), and a single board computer as a control processor. System data rates are in excess of 30 megabytes/second. The reference frame subtractor, in conjunction with the data compactor, records only the differences from a standard frame. This greatly reduces the amount of data needed to record an image. Typical image sizes are reduced by as much as a factor of 20. With the exception of the ECL ADC board, the system uses standard TTL components to minimize power consumption and cost. VDAS operation as well as enhancements to the original system are discussed.
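
    A software sketch of the reference-frame-subtraction and compaction idea is given below; the threshold, word size, and (index, value) storage format are illustrative assumptions rather than the actual VDAS hardware format:

        # Minimal software sketch of reference-frame subtraction followed by
        # compaction: only pixels that differ from the reference frame by more
        # than a threshold are kept, as (index, value) pairs.
        import numpy as np

        def compact_frame(frame, reference, threshold=16):
            """Return a sparse (indices, values) representation of the difference."""
            diff = frame.astype(np.int16) - reference.astype(np.int16)
            idx = np.flatnonzero(np.abs(diff) > threshold)
            return idx, frame.ravel()[idx]

        def reconstruct_frame(indices, values, reference):
            """Rebuild a full frame from the compacted data and the reference."""
            out = reference.copy().ravel()
            out[indices] = values
            return out.reshape(reference.shape)

        rng = np.random.default_rng(0)
        reference = rng.integers(0, 4096, size=(64, 64), dtype=np.int16)   # ADC-like values
        frame = reference.copy()
        frame[10:20, 30:40] += 500           # a localized change, e.g. a particle track

        idx, vals = compact_frame(frame, reference)
        print("kept", idx.size, "of", frame.size, "pixels")   # exactly the 100 changed pixels
        assert np.array_equal(reconstruct_frame(idx, vals, reference), frame)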

  17. Visible light communication in dynamic environment using image/high-speed communication hybrid sensor

    NASA Astrophysics Data System (ADS)

    Maeno, Keita; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Okada, Hiraku; Yamazato, Takaya; Tanimoto, Masayuki; Yendo, Tomohiro

    2012-01-01

    Visible Light Communication (VLC) is a wireless communication method using LEDs. LEDs can respond at high speed, and VLC exploits this characteristic. In VLC research there are mainly two types of receivers: photodiode receivers and high-speed cameras. A photodiode receiver can communicate at high speed and offers a high transmission rate because of its fast response. A high-speed camera can detect and track the transmitter easily because it is not necessary to move the camera. In this paper, we use a hybrid sensor designed for VLC which combines the advantages of both the photodiode and the high-speed camera, that is, a high transmission rate and easy detection of the transmitter. The light receiving section of the hybrid sensor consists of communication pixels and video pixels, which realizes these advantages. In previous research, this hybrid sensor could communicate only in a static environment. In a dynamic environment, however, high-speed tracking of the transmitter is essential for communication, so we realize high-speed tracking of the transmitter by using the information from the communication pixels. Experimental results show the possibility of communication in a dynamic environment.

  18. High Speed Telescopic Imaging of Sprites

    NASA Astrophysics Data System (ADS)

    McHarg, M. G.; Stenbaek-Nielsen, H. C.; Kanmae, T.; Haaland, R. K.

    2010-12-01

    A total of 21 sprite events were recorded at Langmuir Laboratory, New Mexico, during the nights of 14 and 15 July 2010 with a 500 mm focal length Takahashi Sky 90 telescope. The camera used was a Phantom 7.3 with a VideoScope image intensifier. The images were 512x256 pixels for a field of view of 1.3x0.6 degrees. The data were recorded at 16,000 frames per second (62 μs between images) and an integration time of 20 μs per image. Co-aligned with the telescope was a second similar high-speed camera, but with an 85 mm Nikon lens; this camera recorded at 10,000 frames per second with 100 μs exposure. The image format was also 512x256 pixels for a field of view of 7.3x3.7 degrees. The 21 events recorded include all basic sprite elements: Elve, sprite halos, C-sprites, carrot sprites, and large jellyfish sprites. We compare and contrast the spatial details seen in the different types of sprites, including streamer head size and the number of streamers subsequent to streamer head splitting. A telescopic high-speed image of streamer tip splitting in sprites was recorded at 07:06:09 UT on 15 July 2010.

  19. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs of the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to accuracy of the order of millimeters over distances of the orders of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
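
    The pixel-to-3D computation can be illustrated with a minimal sketch. The example below is not the actual KSC algorithm; it assumes a single calibrated camera and targets known to lie on a plane (here z = 0), with made-up intrinsic and extrinsic parameters:

        # Minimal sketch (not the NASA system's algorithm): recover the 3D position
        # of a target from its pixel coordinates with a single calibrated camera,
        # assuming the target lies on a known workspace plane (z = 0).
        import numpy as np

        K = np.array([[800.0,   0.0, 320.0],     # focal lengths and principal point (pixels)
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        R = np.eye(3)                            # camera rotation (world -> camera)
        t = np.array([0.0, 0.0, 3.0])            # camera 3 m from the plane along its optical axis

        def pixel_to_plane(u, v, K, R, t, plane_z=0.0):
            """Intersect the viewing ray of pixel (u, v) with the plane z = plane_z."""
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
            ray_world = R.T @ ray_cam                            # ray direction in world frame
            cam_center = -R.T @ t                                # camera center in world frame
            s = (plane_z - cam_center[2]) / ray_world[2]         # scale along the ray
            return cam_center + s * ray_world

        print(pixel_to_plane(400.0, 300.0, K, R, t))   # 3D point (metres) on the plane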

  20. Deep-Sea Video Cameras Without Pressure Housings

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2004-01-01

    Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (about 0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated- circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated- circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated- circuit chips, LEDs can withstand very large hydrostatic pressures. If

  1. High speed photography, videography, and photonics IV; Proceedings of the Meeting, San Diego, CA, Aug. 19, 20, 1986

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor)

    1986-01-01

    Various papers on high-speed photography, videography, and photonics are presented. The general topics addressed include: photooptical and video instrumentation, streak camera data acquisition systems, photooptical instrumentation in wind tunnels, applications of holography and interferometry in wind tunnel research programs, and data analysis for photooptical and video instrumentation.

  2. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  3. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  4. Reliable camera motion estimation from compressed MPEG videos using machine learning approach

    NASA Astrophysics Data System (ADS)

    Wang, Zheng; Ren, Jinchang; Wang, Yubin; Sun, Meijun; Jiang, Jianmin

    2013-05-01

    As an important feature in characterizing video content, camera motion has been widely applied in various multimedia and computer vision applications. A novel method for fast and reliable estimation of camera motion from MPEG videos is proposed, using a support vector machine for estimation in a regression model trained on a synthesized sequence. Experiments conducted on real sequences show that the proposed method yields much improved results in estimating camera motions while avoiding the difficulty of selecting valid macroblocks and motion vectors.
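
    As an illustration of the regression idea (not the authors' implementation), the sketch below trains a support vector regressor on summary statistics of synthesized macroblock motion vectors to recover a horizontal pan rate; the features, training data, and units are assumptions:

        # Illustrative sketch only: regress a camera pan rate from summary
        # statistics of macroblock motion vectors with a support vector machine.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)

        def features(motion_vectors):
            """Summary statistics of the per-macroblock motion vectors (N x 2)."""
            return np.concatenate([motion_vectors.mean(axis=0), motion_vectors.std(axis=0)])

        # Synthesize training frames: a pure pan adds a common horizontal offset
        # to every macroblock vector, plus noise from independently moving objects.
        def synth_frame(pan):
            mv = rng.normal(0.0, 1.5, size=(300, 2))
            mv[:, 0] += pan
            return mv

        pans = rng.uniform(-8.0, 8.0, size=200)                   # pixels per frame
        X = np.array([features(synth_frame(p)) for p in pans])
        model = SVR(kernel="rbf", C=10.0).fit(X, pans)

        test = synth_frame(pan=4.0)
        print("estimated pan:", model.predict(features(test)[None, :])[0])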

  5. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  6. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    NASA Technical Reports Server (NTRS)

    Gradl, Paul

    2016-01-01

    Paired images were collected using a projected pattern instead of the standard painted speckle pattern on the subject's abdomen. High-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh frequency of the projector. To ensure that the kick and movement data were real, a background test was conducted with no baby movement (to correct for breathing and body motion).

  7. Real-time air quality monitoring by using internet video surveillance camera

    NASA Astrophysics Data System (ADS)

    Wong, C. J.; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Low, K. L.

    2007-04-01

    Nowadays internet video surveillance cameras are widely used in security monitoring, and the number of installed cameras continues to grow. This paper reports that internet video surveillance cameras can be applied as remote sensors for monitoring the concentration of particulate matter less than 10 microns (PM10), so that real-time air quality can be monitored at multiple locations simultaneously. An algorithm was developed based on regression analysis of the relationship between the measured reflectance components from a surface material and the atmosphere. This algorithm converts multispectral image pixel values acquired from these cameras into quantitative values of the PM10 concentration. These computed PM10 values were compared to standard values measured by a DustTrak TM meter. The correlation results showed that the newly developed algorithm produced a high degree of accuracy, as indicated by high correlation coefficient (R2) and low root-mean-square error (RMS) values. The preliminary results showed that the accuracy produced by this internet video surveillance camera is slightly better than that from the internet protocol (IP) camera. The spatial resolution of images acquired by the IP camera was poorer than that of the internet video surveillance camera, because the IP camera images had been compressed while the images from the internet video surveillance camera were not.
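
    The calibration step can be illustrated with a minimal least-squares sketch; the linear model form and all numbers below are assumptions for illustration, not the published algorithm:

        # Minimal sketch: relate camera-derived reflectance values in three bands
        # to co-located PM10 readings by multiple linear regression.
        import numpy as np

        # Reflectance in red/green/blue bands extracted from the camera images,
        # and the corresponding co-located PM10 readings (micrograms per m^3).
        reflectance = np.array([[0.21, 0.24, 0.30],
                                [0.25, 0.27, 0.34],
                                [0.30, 0.33, 0.40],
                                [0.35, 0.37, 0.46],
                                [0.41, 0.44, 0.52]])
        pm10 = np.array([38.0, 52.0, 70.0, 95.0, 118.0])

        # Least-squares fit of PM10 = a0 + a1*R + a2*G + a3*B.
        A = np.column_stack([np.ones(len(pm10)), reflectance])
        coeffs, *_ = np.linalg.lstsq(A, pm10, rcond=None)

        predicted = A @ coeffs
        rmse = np.sqrt(np.mean((predicted - pm10) ** 2))
        r2 = 1.0 - np.sum((pm10 - predicted) ** 2) / np.sum((pm10 - pm10.mean()) ** 2)
        print("coefficients:", coeffs, "RMSE:", rmse, "R^2:", r2)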

  8. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  9. [Research Award providing funds for a tracking video camera

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3m square arena as they navigate singly in-doors guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is further than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. It raises many questions for future research that we are now pursuing.

  10. High speed imaging television system

    DOEpatents

    Wilkinson, William O.; Rabenhorst, David W.

    1984-01-01

    A television system for observing an event which provides a composite video output comprising the serially interlaced images of a plurality of individual cameras, such that the time resolution of the system is greater than the time resolution of any of the individual cameras.

  11. A Raman Spectroscopy and High-Speed Video Experimental Study: The Effect of Pressure on the Solid-Liquid Transformation Kinetics of N-octane

    NASA Astrophysics Data System (ADS)

    Liu, C.; Wang, D.

    2015-12-01

    Phase transitions of minerals and rocks in the Earth's interior, especially at elevated pressures and temperatures, markedly change their crystal structures and state parameters, and are therefore very important for the physical and chemical properties of these materials. Transformations between solid and liquid are relatively common in nature, such as the melting of ice and the crystallization of minerals or water. The kinetics of these transformations can provide valuable information on the reaction rate and on the reaction mechanism involving nucleation and growth. An in-situ transformation kinetics study of n-octane, which served as an example of this type of phase transition, has been carried out using a hydrothermal diamond anvil cell (HDAC) and a high-speed video technique; the overall purpose of this study is to develop a comprehensive understanding of the reaction mechanism and of the influence of pressure on the different transformation rates. At ambient temperature, the liquid-solid transformation of n-octane first took place with increasing pressure, and the solid phase then gradually transformed back into the liquid phase when the sample was heated to a certain temperature. Upon cooling of the system, the liquid-solid transformation occurred again. Quantitative assessment of the transformation rates as a function of pressure and temperature showed a negative pressure dependence of the solid-liquid transformation rate, whereas elevated pressure accelerated the liquid-solid transformation rate. Based on the calculated activation energy values, an interfacial reaction and diffusion dominated the solid-liquid transformation, while the liquid-solid transformation was mainly controlled by diffusion. This experimental technique is a powerful and effective tool for studying the transformation kinetics of n-octane, and the obtained results are of great significance to the kinetics study

  12. A New Methodology for Studying Dynamics of Aerosol Particles in Sneeze and Cough Using a Digital High-Vision, High-Speed Video System and Vector Analyses

    PubMed Central

    Nishimura, Hidekazu; Sakata, Soichiro; Kaga, Akikazu

    2013-01-01

    Microbial pathogens of respiratory infectious diseases are often transmitted through particles in sneeze and cough. Therefore, understanding the particle movement is important for infection control. Images of a sneeze induced by nasal cavity stimulation in healthy adult volunteers were taken by a digital high-vision, high-speed video system equipped with a computer system and treated as a research model. The obtained images were enhanced electronically, converted to digital images every 1/300 s, and subjected to vector analysis of the bioparticles contained in the whole sneeze cloud using automatic image processing software. The initial velocity of the particles or their clusters in the sneeze was greater than 6 m/s but decreased as the particles moved forward; the momentum of the particles seemed to be lost by 0.15–0.20 s, after which they started a diffusive movement. An approximate equation expressing velocity as a function of elapsed time was obtained from the vector analysis to represent the dynamics of the front-line particles. This methodology was also applied to a cough. Microclouds contained in smoke exhaled with a voluntary cough by a volunteer after smoking one breath of a cigarette were traced as visible, aerodynamic surrogates for the invisible bioparticles of a cough. The smoke cough microclouds had an initial velocity greater than 5 m/s. The fastest microclouds were located at the forefront of the cloud mass moving forward; however, their velocity clearly decreased after 0.05 s and they began to diffuse in the environmental airflow. The maximum direct reaches of the particles and microclouds driven by sneezing and coughing, unaffected by environmental airflows, were estimated by calculations using the obtained equations to be about 84 cm and 30 cm from the mouth, respectively, both achieved in about 0.2 s, suggesting that data relating to the dynamics of sneeze and cough can be obtained by calculation. PMID:24312206
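
    The kind of calculation described, fitting a velocity-versus-time equation and estimating the maximum reach, can be sketched as follows; the exponential form and the synthetic velocities are illustrative assumptions, not the paper's fitted equation:

        # Illustration only: fit an exponential decay v(t) = v0 * exp(-t / tau)
        # to tracked front-line velocities and integrate it to estimate the reach.
        import numpy as np

        t = np.arange(0.0, 0.2, 1.0 / 300.0)          # 1/300 s frame interval
        v_true = 6.0 * np.exp(-t / 0.05)
        v_meas = v_true + np.random.default_rng(2).normal(0.0, 0.1, t.size)   # synthetic data

        # Log-linear least-squares fit of ln(v) = ln(v0) - t / tau.
        mask = v_meas > 0
        slope, intercept = np.polyfit(t[mask], np.log(v_meas[mask]), 1)
        v0, tau = np.exp(intercept), -1.0 / slope

        # Integrating v(t) from 0 to infinity gives the asymptotic reach v0 * tau.
        reach = v0 * tau
        print(f"v0 = {v0:.2f} m/s, tau = {tau:.3f} s, reach = {reach:.2f} m")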

  13. Acceptance/operational test procedure 241-AN-107 Video Camera System

    SciTech Connect

    Pedersen, L.T.

    1994-11-18

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

  14. High speed data compactor

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1988-06-04

    A method and apparatus for suppressing non-informational data words from transmission, from a source of data words such as a video camera. Data words with values greater than a predetermined threshold are transmitted, whereas data words with values below the threshold are not transmitted; instead, their occurrences are counted. Before transmission, the counts of invalid data words and the valid data words are appended with flag digits which a receiving system decodes. The original data stream is fully reconstructable from the stream of valid data words and the counts of invalid data words.
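
    A software sketch of this threshold-and-count scheme is shown below; the token format and the fill value used on reconstruction are illustrative assumptions, not the patented hardware encoding:

        # Minimal sketch: below-threshold ("invalid") words are replaced by a
        # count of how many were skipped, flagged so the receiver can tell
        # counts from data words.
        def compact(words, threshold):
            """Encode a word stream as ('data', value) and ('count', n) tokens."""
            tokens, run = [], 0
            for w in words:
                if w > threshold:
                    if run:
                        tokens.append(("count", run))
                        run = 0
                    tokens.append(("data", w))
                else:
                    run += 1
            if run:
                tokens.append(("count", run))
            return tokens

        def expand(tokens, fill=0):
            """Rebuild the word stream, substituting `fill` for suppressed words."""
            out = []
            for kind, value in tokens:
                out.extend([fill] * value if kind == "count" else [value])
            return out

        stream = [0, 1, 0, 0, 37, 42, 0, 0, 0, 55, 1, 0]
        tokens = compact(stream, threshold=4)
        print(tokens)              # [('count', 4), ('data', 37), ('data', 42), ...]
        print(expand(tokens))      # below-threshold words come back as the fill value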

  15. High speed photography, videography, and photonics VI; Proceedings of the Meeting, San Diego, CA, Aug. 15-17, 1988

    NASA Astrophysics Data System (ADS)

    Stradling, Gary L.

    1989-06-01

    Recent advances in high-speed optics are discussed in reviews and reports. Topics addressed include ultrafast spectroscopy for atomic and molecular studies, streak-camera technology, ultrafast streak systems, framing and X-ray streak-camera measurements, high-speed video techniques (lighting and analysis), and high-speed photography. Particular attention is given to fsec time-resolved observations of molecular and crystalline vibrations and rearrangements, space-charge effects in the fsec streak tube, noise propagation in streak systems, nsec framing photography for laser-produced interstreaming plasmas, an oil-cooled flash X-ray tube for biomedical radiography, a video tracker for high-speed location measurement, and electrooptic framing holography.

  16. Lori Losey - The Woman Behind the Video Camera

    NASA Video Gallery

    The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

  17. Operational test procedure 241-AZ-101 waste tank color video camera system

    SciTech Connect

    Robinson, R.S.

    1996-10-30

    The purpose of this procedure is to provide a documented means of verifying that all of the functional components of the 241-AZ-101 Waste Tank Video Camera System operate properly before and after installation.

  18. Ultra-high-speed bionanoscope for cell and microbe imaging

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Vo Le, Cuong; Kawano, Hiroyuki; Ishikawa, Ikuko; Miyawaki, Atshushi; Dao, Vu T. S.; Nguyen, Hoang Dung; Yokoi, Sayoko; Yoshida, Shigeru; Nakano, Hitoshi; Takehara, Kohsei; Saito, Yoshiharu

    2008-11-01

    We are developing an ultra-high-sensitivity and ultra-high-speed imaging system for bioscience, mainly for imaging of microbes with visible light and cells with fluorescence emission. Scarcity of photons is the most serious problem in applications of high-speed imaging to the scientific field. To overcome the problem, the system integrates new technologies consisting of (1) an ultra-high-speed video camera with sub-ten-photon sensitivity with the frame rate of more than 1 mega frames per second, (2) a microscope with highly efficient use of light applicable to various unstained and fluorescence cell observations, and (3) very powerful long-pulse-strobe Xenon lights and lasers for microscopes. Various auxiliary technologies to support utilization of the system are also being developed. One example of them is an efficient video trigger system, which detects a weak signal of a sudden change in a frame under ultra-high-speed imaging by canceling high-frequency fluctuation of illumination light. This paper outlines the system with its preliminary evaluation results.
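
    A simple software analogue of the video-trigger idea, normalizing each frame by its mean intensity to cancel global illumination fluctuation before differencing, is sketched below; the threshold and synthetic frames are assumptions, not the actual trigger design:

        # Illustrative sketch: trigger on the first frame whose illumination-
        # normalized change from the previous frame exceeds a threshold.
        import numpy as np

        def trigger_index(frames, threshold=0.02):
            """Return the index of the first frame with a large normalized change."""
            prev = None
            for i, frame in enumerate(frames):
                norm = frame / frame.mean()              # cancels global flicker
                if prev is not None and np.abs(norm - prev).mean() > threshold:
                    return i
                prev = norm
            return None

        rng = np.random.default_rng(3)
        base = rng.uniform(100.0, 200.0, size=(64, 64))
        frames = []
        for i in range(50):
            flicker = 1.0 + 0.2 * np.sin(0.7 * i)        # strong illumination fluctuation
            frame = base * flicker
            if i >= 30:
                frame[20:30, 20:30] *= 2.0               # the sudden event of interest
            frames.append(frame)

        print("triggered at frame", trigger_index(frames))   # expected: 30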

  19. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    SciTech Connect

    Kohlman, E.H.

    1995-01-25

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  20. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
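
    A simplified version of the underlying calculation, which ignores the latitude and solar-declination corrections used in the full method, is sketched below: while the Earth rotates through an angle theta = omega * t, the shadow line climbs a height h with h = R(sec(theta) - 1), so R = h / (sec(theta) - 1):

        # Simplified calculation: estimate Earth's radius from the height the
        # shadow climbs and the elapsed time, ignoring latitude and declination.
        import math

        def earth_radius(shadow_rise_m, rise_time_s, sidereal_day_s=86164.0):
            """Estimate Earth's radius from the shadow rise h and the elapsed time."""
            omega = 2.0 * math.pi / sidereal_day_s       # Earth's rotation rate (rad/s)
            theta = omega * rise_time_s
            return shadow_rise_m / (1.0 / math.cos(theta) - 1.0)

        # Example (made-up numbers): the shadow climbs 50 m up the building in about 54 s.
        print(f"R = {earth_radius(50.0, 54.0) / 1000.0:.0f} km")   # roughly 6400 km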

  1. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    ERIC Educational Resources Information Center

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  2. Measuring 8–250 ps short pulses using a high-speed streak camera on kilojoule, petawatt-class laser systems

    SciTech Connect

    Qiao, J.; Jaanimagi, P. A.; Boni, R.; Bromage, J.; Hill, E.

    2013-07-15

    Short-pulse measurements using a streak camera are sensitive to space-charge broadening, which depends on the pulse duration and shape, and on the uniformity of photocathode illumination. An anamorphic-diffuser-based beam-homogenizing system and a space-charge-broadening calibration method were developed to accurately measure short pulses using an optical streak camera. This approach provides a more-uniform streak image and enables one to characterize space-charge-induced pulse-broadening effects.

  3. High-speed imaging of explosive eruptions: applications and perspectives

    NASA Astrophysics Data System (ADS)

    Taddeucci, Jacopo; Scarlato, Piergiorgio; Gaudin, Damien; Capponi, Antonio; Alatorre-Ibarguengoitia, Miguel-Angel; Moroni, Monica

    2013-04-01

    Explosive eruptions, being by definition highly dynamic over short time scales, necessarily call for observational systems capable of relatively high sampling rates. "Traditional" tools, such as seismic and acoustic networks, have recently been joined by Doppler radar and electric sensors. Recent developments in high-speed camera systems now allow direct visual information of eruptions to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Here we summarize the methods employed to gather and process high-speed videos of explosive eruptions, and provide an overview of the several applications of this new type of data in understanding different aspects of explosive volcanism. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS - FAst, MUltiparametric Set-up) includes: 1) a monochrome high speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480-120x640 pixel resolution; and 3) two acoustic to infrasonic sensors. All instruments are time-synchronized via a data logging system, a hand- or software-operated trigger, and via GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or to be readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four, hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded in high-speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafiallajokull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclasts ejection. High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian

  4. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    SciTech Connect

    Kohlman, E.H.

    1994-11-17

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The single-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include but are not limited to core sampling, auger activities, crust layer examination, monitoring of equipment installation/removal, and any other activities. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program.

  5. Lights! Camera! Action! Handling Your First Video Assignment.

    ERIC Educational Resources Information Center

    Thomas, Marjorie Bekaert

    1989-01-01

    The author discusses points to consider when hiring and working with a video production company to develop a video for human resources purposes. Questions to ask the consultants are included, as is information on the role of the company liaison and on how to avoid expensive, time-wasting pitfalls. (CH)

  6. Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing

    ERIC Educational Resources Information Center

    Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson

    2008-01-01

    Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…

  7. Accuracy potential of large-format still-video cameras

    NASA Astrophysics Data System (ADS)

    Maas, Hans-Gerd; Niederoest, Markus

    1997-07-01

    High resolution digital stillvideo cameras have found wide interest in digital close range photogrammetry in the last five years. They can be considered fully autonomous digital image acquisition systems without the requirement of permanent connection to an external power supply and a host computer for camera control and data storage, thus allowing for convenient data acquisition in many applications of digital photogrammetry. The accuracy potential of stillvideo cameras has been extensively discussed. While large format CCD sensors themselves can be considered very accurate measurement devices, the lenses, camera bodies and sensor mounts of stillvideo cameras do not necessarily meet the same stability requirements, and some cameras apply lossy compression techniques in image storage, which may also affect the accuracy potential. This presentation shows recent experiences from accuracy tests with a number of large format stillvideo cameras, including a modified Kodak DCS200, a Kodak DCS460, a Nikon E2 and a Polaroid PDC-2000. The tests of the cameras include absolute and relative measurements and were performed using strong photogrammetric networks and good external reference. The results of the tests indicate that very high accuracies can be achieved with large blocks of stillvideo imagery especially in deformation measurements. In absolute measurements, however, the accuracy potential of the large format CCD sensors is partly ruined by a lack of stability of the cameras.

  8. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  9. High speed handpieces

    PubMed Central

    Bhandary, Nayan; Desai, Asavari; Shetty, Y Bharath

    2014-01-01

    High-speed instruments are versatile instruments used by clinicians in all specialties of dentistry. It is important for clinicians to understand the types of high-speed handpieces available and how they work. The Centers for Disease Control and Prevention has repeatedly issued guidelines for the disinfection and sterilization of high-speed handpieces. This article presents recent developments in the design of high-speed handpieces. With a view to preventing hospital-associated infections, significant importance has been given to the disinfection, sterilization and maintenance of high-speed handpieces. How to cite the article: Bhandary N, Desai A, Shetty YB. High speed handpieces. J Int Oral Health 2014;6(1):130-2. PMID:24653618

  10. RECON 6: A real-time, wide-angle, solid-state reconnaissance camera system for high-speed, low-altitude aircraft

    NASA Technical Reports Server (NTRS)

    Labinger, R. L.

    1976-01-01

    The maturity of self-scanned, solid-state, multielement photosensors makes the realization of "real time" reconnaissance photography viable and practical. A system built around these sensors which can be constructed to satisfy the requirements of the tactical reconnaissance scenario is described. The concept chosen is the push broom strip camera system -- RECON 6 -- which represents the least complex and most economical approach for an electronic camera capable of providing a high level of performance over a 140 deg wide, continuous swath at altitudes from 200 to 3,000 feet and at minimum loss in resolution at higher altitudes.

  11. Kinematic Measurements of the Vocal-Fold Displacement Waveform in Typical Children and Adult Populations: Quantification of High-Speed Endoscopic Videos

    ERIC Educational Resources Information Center

    Patel, Rita; Donohue, Kevin D.; Unnikrishnan, Harikrishnan; Kryscio, Richard J.

    2015-01-01

    Purpose: This article presents a quantitative method for assessing instantaneous and average lateral vocal-fold motion from high-speed digital imaging, with a focus on developmental changes in vocal-fold kinematics during childhood. Method: Vocal-fold vibrations were analyzed for 28 children (aged 5-11 years) and 28 adults (aged 21-45 years)…

  12. Application of high-speed videography in sports analysis

    NASA Astrophysics Data System (ADS)

    Smith, Sarah L.

    1993-01-01

    The goal of sport biomechanists is to provide information to coaches and athletes about sport skill technique that will assist them in obtaining the highest levels of athletic performance. Within this technique evaluation process, two methodological approaches can be taken to study human movement. One method describes the motion being performed; the second approach focuses on understanding the forces causing the motion. It is with the movement description method that video image recordings offer a means for athletes, coaches, and sport biomechanists to analyze sport performance. Staff members of the Technique Evaluation Program provide video recordings of sport performance to athletes and coaches during training sessions held at the Olympic Training Center in Colorado Springs, Colorado. These video records are taken to provide a means for the qualitative evaluation or the quantitative analysis of sport skills as performed by elite athletes. High-speed video equipment (NAC HVRB-200 and NAC HSV-400 Video Systems) is used to capture various sport movement sequences that will permit coaches, athletes, and sport biomechanists to evaluate and/or analyze sport performance. The PEAK Performance Motion Measurement System allows sport biomechanists to measure selected mechanical variables appropriate to the sport being analyzed. Use of two high-speed cameras allows for three-dimensional analysis of the sport skill or the ability to capture images of an athlete's motion from two different perspectives. The simultaneous collection and synchronization of force data provides for a more comprehensive analysis and understanding of a particular sport skill. This process of combining force data with motion sequences has been done extensively with cycling. The decision to use high-speed videography rather than normal speed video is based upon the same criteria that are used in other settings. The rapidness of the sport movement sequence and the need to see the location of body parts
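
    The two-camera three-dimensional analysis mentioned above typically rests on triangulation from calibrated views. The sketch below shows a minimal linear (DLT-style) triangulation with made-up projection matrices; it is not the PEAK system's implementation:

        # Minimal sketch: linear triangulation of a marker from its pixel
        # coordinates in two calibrated camera views.
        import numpy as np

        def triangulate(P1, P2, uv1, uv2):
            """Linear (DLT) triangulation of one point seen by two cameras."""
            A = np.vstack([
                uv1[0] * P1[2] - P1[0],
                uv1[1] * P1[2] - P1[1],
                uv2[0] * P2[2] - P2[0],
                uv2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]

        # Two simple calibrated cameras: one at the origin, one translated 1 m in x.
        K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

        X_true = np.array([0.3, -0.1, 4.0, 1.0])          # a marker 4 m from the cameras
        uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
        uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
        print(triangulate(P1, P2, uv1, uv2))              # ~ [0.3, -0.1, 4.0]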

  13. A high-speed magnetic tweezer beyond 10,000 frames per second.

    PubMed

    Lansdorp, Bob M; Tabrizi, Shawn J; Dittmore, Andrew; Saleh, Omar A

    2013-04-01

    The magnetic tweezer is a single-molecule instrument that can apply a constant force to a biomolecule over a range of extensions, and is therefore an ideal tool to study biomolecules and their interactions. However, the video-based tracking inherent to most magnetic single-molecule instruments has traditionally limited the instrumental resolution to a few nanometers, above the length scale of single DNA base-pairs. Here we have introduced superluminescent diode illumination and high-speed camera detection to the magnetic tweezer, with graphics processing unit-accelerated particle tracking for high-speed analysis of video files. We have demonstrated the ability of the high-speed magnetic tweezer to resolve particle position to within 1 Å at 100 Hz, and to measure the extension of a 1566 bp DNA with 1 nm precision at 100 Hz in the presence of thermal noise. PMID:23635212
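
    As a simplified CPU illustration of the per-frame localization that such camera-based tracking relies on (not the paper's GPU-accelerated tracker), a sub-pixel bead position can be estimated from an intensity-weighted centroid:

        # Simplified illustration: sub-pixel bead localization by
        # intensity-weighted centroid on a synthetic bead image.
        import numpy as np

        def centroid(image):
            """Intensity-weighted centroid of a background-subtracted bead image."""
            img = np.clip(image - np.median(image), 0.0, None)   # crude background subtraction
            ys, xs = np.indices(img.shape)
            total = img.sum()
            return (img * xs).sum() / total, (img * ys).sum() / total

        # Synthetic bead: a Gaussian spot centred at (20.3, 15.7) plus noise.
        rng = np.random.default_rng(4)
        ys, xs = np.indices((32, 40))
        bead = 200.0 * np.exp(-((xs - 20.3) ** 2 + (ys - 15.7) ** 2) / (2 * 2.0 ** 2))
        frame = bead + rng.normal(10.0, 1.0, bead.shape)

        print(centroid(frame))    # close to (20.3, 15.7), i.e. sub-pixel precision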

  14. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  15. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells

    NASA Astrophysics Data System (ADS)

    English, Brian P.; Singer, Robert H.

    2015-08-01

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics.

  16. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells

    PubMed Central

    English, Brian P.; Singer, Robert H.

    2016-01-01

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics. PMID:26819489

  17. Kids behind the Camera: Education for the Video Age.

    ERIC Educational Resources Information Center

    Berwick, Beverly

    1994-01-01

    Some San Diego teachers created the Montgomery Media Institute to tap the varied talents of young people attending area high schools and junior high schools. Featuring courses in video programming and production, photography, and journalism, this program engages students' interest while introducing them to fields with current employment…

  18. Passive millimeter-wave video camera for aviation applications

    NASA Astrophysics Data System (ADS)

    Fornaca, Steven W.; Shoucri, Merit; Yujiri, Larry

    1998-07-01

    Passive Millimeter Wave (PMMW) imaging technology offers significant safety benefits to world aviation. Made possible by recent technological breakthroughs, PMMW imaging sensors provide visual-like images of objects under low visibility conditions (e.g., fog, clouds, snow, sandstorms, and smoke) which blind visual and infrared sensors. TRW has developed an advanced, demonstrator version of a PMMW imaging camera that, when front-mounted on an aircraft, gives images of the forward scene at a rate and quality sufficient to enhance aircrew vision and situational awareness under low visibility conditions. Potential aviation uses for a PMMW camera are numerous and include: (1) Enhanced vision for autonomous take- off, landing, and surface operations in Category III weather on Category I and non-precision runways; (2) Enhanced situational awareness during initial and final approach, including Controlled Flight Into Terrain (CFIT) mitigation; (3) Ground traffic control in low visibility; (4) Enhanced airport security. TRW leads a consortium which began flight tests with the demonstration PMMW camera in September 1997. Flight testing will continue in 1998. We discuss the characteristics of PMMW images, the current state of the technology, the integration of the camera with other flight avionics to form an enhanced vision system, and other aviation applications.

  19. Camera/Video Phones in Schools: Law and Practice

    ERIC Educational Resources Information Center

    Parry, Gareth

    2005-01-01

    The emergence of mobile phones with built-in digital cameras is creating legal and ethical concerns for school systems throughout the world. Users of such phones can instantly email, print or post pictures to other MMS phones or websites. Local authorities and schools in Britain, Europe, USA, Canada, Australia and elsewhere have introduced…

  20. BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    NASA Technical Reports Server (NTRS)

    Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

  1. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  2. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  3. Using hand-held point and shoot video cameras in clinical education.

    PubMed

    Stoten, Sharon

    2011-02-01

    Clinical educators are challenged to design and implement creative instructional strategies to provide employees with optimal clinical practice learning opportunities. Using hand-held video cameras to capture patient encounters or skills demonstrations involves employees in active learning and can increase dialogue between employees and clinical educators. The video that is created also can be used for evaluation and feedback. Hands-on experiences may energize employees with different talents and styles of learning. PMID:21323214

  4. High Speed data acquisition

    SciTech Connect

    Cooper, Peter S.

    1998-02-01

    A general introduction to high-speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high-statistics charmed baryon production and decay experiment now taking data at Fermilab.

  5. High Speed Research Program

    NASA Technical Reports Server (NTRS)

    Anderson, Robert E.; Corsiglia, Victor R.; Schmitz, Frederic H. (Technical Monitor)

    1994-01-01

    An overview of the NASA High Speed Research Program will be presented from a NASA Headquarters perspective. The presentation will include the objectives of the program and an outline of major programmatic issues.

  6. Full-scale high-speed schlieren imaging of explosions and gunshots

    NASA Astrophysics Data System (ADS)

    Settles, Gary S.; Grumstrup, Torben P.; Dodson, Lori J.; Miller, J. D.; Gatto, Joseph A.

    2005-03-01

    High-speed imaging and cinematography are important in research on explosions, firearms, and homeland security. Much can be learned from imaging the motion of shock waves generated by such explosive events. However, the required optical equipment is generally not available for such research due to the small aperture and delicacy of the optics and the expense and expertise required to implement high-speed optical methods. For example, previous aircraft hardening experiments involving explosions aboard full-scale aircraft lacked optical shock imaging, even though such imaging is the principal tool of explosion and shock wave research. Here, experiments are reported using the Penn State Full-Scale Schlieren System, a lens-and-grid-type optical system with a very large field-of-view. High-speed images are captured by photography using an electronic flash and by a new high-speed digital video camera. These experiments cover a field-of-view of 2x3 m at frame rates up to 30 kHz. Our previous high-speed schlieren cinematography experiments on aircraft hardening used a traditional drum camera and photographic film. A stark contrast in utility is found between that technology and the all-digital high-speed videography featured in this paper.

  7. Observations of in situ deep-sea marine bioluminescence with a high-speed, high-resolution sCMOS camera

    NASA Astrophysics Data System (ADS)

    Phillips, Brennan T.; Gruber, David F.; Vasan, Ganesh; Roman, Christopher N.; Pieribone, Vincent A.; Sparks, John S.

    2016-05-01

    Observing and measuring marine bioluminescence in situ presents unique challenges, characterized by the difficult task of approaching and imaging weakly illuminated bodies in a three-dimensional environment. To address this problem, a scientific complementary-metal-oxide-semiconductor (sCMOS) microscopy camera was outfitted for deep-sea imaging of marine bioluminescence. This system was deployed on multiple platforms (manned submersible, remotely operated vehicle, and towed body) in three oceanic regions (Western Tropical Pacific, Eastern Equatorial Pacific, and Northwestern Atlantic) to depths up to 2500 m. Using light stimulation, bioluminescent responses were recorded at high frame rates and in high resolution, offering unprecedented low-light imagery of deep-sea bioluminescence in situ. The kinematics of light production in several zooplankton groups was observed, and luminescent responses at different depths were quantified as intensity vs. time. These initial results signify a clear advancement in the bioluminescent imaging methods available for observation and experimentation in the deep-sea.

  8. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  9. High-Speed Observer: Automated Streak Detection for the Aerospike Engine

    NASA Technical Reports Server (NTRS)

    Rieckhoff, T. J.; Covan, M. A.; OFarrell, J. M.

    2001-01-01

    A high-frame-rate digital video camera, installed on test stands at Stennis Space Center (SSC), has been used to capture images of the aerospike engine plume during test. These plume images are processed in real time to detect and differentiate anomalous plume events. Results indicate that the High-Speed Observer (HSO) system can detect anomalous plume streaking events that are indicative of aerospike engine malfunction.
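
    The following is only a schematic of the kind of real-time plume-image screening the record describes: it flags contiguous runs of image columns that are anomalously bright relative to a nominal reference plume. It is not the actual HSO algorithm; the function name, threshold, and reference-frame approach are assumptions.

```python
import numpy as np

def detect_streaks(frame, reference, column_threshold=25.0, min_columns=3):
    """Flag anomalous bright streaks in an engine-plume frame.

    A streak is approximated here as a contiguous run of image columns whose
    mean brightness exceeds the nominal (reference) plume by a threshold.
    Returns a list of (start_col, end_col) runs.
    """
    excess = frame.astype(float) - reference.astype(float)
    column_excess = excess.mean(axis=0)               # average over rows
    hot = column_excess > column_threshold
    runs, start = [], None
    for col, flag in enumerate(hot):
        if flag and start is None:
            start = col
        elif not flag and start is not None:
            if col - start >= min_columns:
                runs.append((start, col - 1))
            start = None
    if start is not None and len(hot) - start >= min_columns:
        runs.append((start, len(hot) - 1))
    return runs

# Synthetic check: a bright vertical streak injected at columns 40-44.
ref = np.full((120, 160), 100.0)
frm = ref.copy()
frm[:, 40:45] += 60.0
print(detect_streaks(frm, ref))   # -> [(40, 44)]
```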

  10. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Ridge 18-20deg.S, where hydrothermal plume signatures were previously perceived. DIDSON was mounted on top of Shinkai6500 in order to obtain acoustic video images of hydrothermal plumes. In this cruise, seven dives of Shinkai6500 were conducted, and acoustic video images of the hydrothermal plumes were captured in three of the seven dives. These are among only a few acoustic video images of hydrothermal plumes obtained to date. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

  11. Field-based high-speed imaging of explosive eruptions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Scarlato, P.; Freda, C.; Moroni, M.

    2012-12-01

    Explosive eruptions involve, by definition, physical processes that are highly dynamic over short time scales. Capturing and parameterizing such processes is a major task in eruption understanding and forecasting, and a task that necessarily requires observational systems capable of high sampling rates. Seismic and acoustic networks are a prime tool for high-frequency observation of eruptions, recently joined by Doppler radar and electric sensors. In comparison with the above monitoring systems, imaging techniques provide more complete and direct information on surface processes, but usually at a lower sampling rate. However, recent developments in high-speed imaging systems now allow such information to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS: FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480x640 down to 120x640 pixel resolution; and 3) two acoustic-to-infrasonic sensors. All instruments are time-synchronized via a data logging system, a hand- or software-operated trigger, and via GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or to be readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded at high speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafjallajökull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclasts ejection. High-speed

  12. Digital Video Cameras for Brainstorming and Outlining: The Process and Potential

    ERIC Educational Resources Information Center

    Unger, John A.; Scullion, Vicki A.

    2013-01-01

    This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

  13. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    ERIC Educational Resources Information Center

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
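
    The aliasing effect behind the spinning-wheel illusion can be made concrete with a short folded-frequency calculation. The formula below is the standard Nyquist-band folding argument, offered as a worked illustration rather than material from the article itself.

```python
def apparent_rotation_hz(true_hz, frame_rate_hz):
    """Aliased rotation rate seen by a camera sampling at frame_rate_hz.

    The observed rate is the true rate folded into the Nyquist band
    [-frame_rate/2, +frame_rate/2]; a negative result means the wheel
    appears to spin backwards.
    """
    return (true_hz + frame_rate_hz / 2.0) % frame_rate_hz - frame_rate_hz / 2.0

# A wheel turning at 28 rev/s filmed at 30 frames/s seems to creep backwards:
print(apparent_rotation_hz(28.0, 30.0))   # -> -2.0 rev/s
# Sampling above the Nyquist rate (> 56 frames/s) preserves the true rate:
print(apparent_rotation_hz(28.0, 60.0))   # -> 28.0 rev/s
```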

  14. Video content analysis on body-worn cameras for retrospective investigation

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  15. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. PMID:26994179
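
    As a rough illustration of the descriptor idea (not the authors' actual descriptors or classifier), the sketch below scores windows of frames by their mean frame-to-frame change and reports windows above a threshold as candidate strike events for manual review. Window length, threshold, and the synthetic clip are assumptions.

```python
import numpy as np

def candidate_strike_windows(frames, window=5, threshold=8.0):
    """Return start indices of frame windows whose mean absolute
    frame-to-frame difference exceeds `threshold`.

    frames : ndarray (n_frames, h, w), grayscale video
    window : number of consecutive frame transitions scored together
    """
    frames = frames.astype(np.float32)
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))   # one score per transition
    candidates = []
    for start in range(0, len(diffs) - window + 1):
        if diffs[start:start + window].mean() > threshold:
            candidates.append(start)
    return candidates

# Synthetic clip: still background with a brief burst of motion at frames 50-55.
rng = np.random.default_rng(1)
clip = np.full((200, 32, 32), 40.0) + rng.normal(0, 1, (200, 32, 32))
clip[50:56] += rng.normal(0, 25, (6, 32, 32))    # simulated rapid strike motion
print(candidate_strike_windows(clip, window=3, threshold=5.0))
```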

  16. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    SciTech Connect

    Castleberry, J.L.

    1994-09-19

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock which shuts down all the electronics inside the 101-AW vapor space during loss of purge pressure will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak-checking all fittings on the 101-AW Purge Panel with a Snoop solution and resolving any leakage found. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition, the green light (PRESSURIZED), located on the Purge Control Panel, is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, that the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, and associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss-of-purge operation, the illumination of the amber light (PURGE FAILED) will be verified.

  17. Introducing a New High-Speed Imaging System for Measuring Raindrop Characteristics

    NASA Astrophysics Data System (ADS)

    Testik, F. Y.; Rahman, K.

    2013-12-01

    Here we present a new high-speed imaging system that we have developed for measuring rainfall microphysical quantities. This optical disdrometer system is capable of providing raindrop characteristics including drop diameter, fall velocity and acceleration, shape, and axis ratio. The main components of the system are a high-speed video camera capable of capturing 1000 frames per second, an LED light, a sensor unit to detect raindrops passing through the camera view frame, and a three-dimensional ultrasonic anemometer to measure the wind velocity. The entire imaging system is operated and synchronized using a LabView code developed in-house. In this system, the camera points at the LED light and records the silhouettes of the backlit drops. Because digital storage limitations do not allow high-speed camera systems to record continuously for more than several seconds, we utilized a sensor system that triggers the camera when a raindrop is detected within the camera view frame at the focal plane. With the trigger signal, the camera records a predefined number of frames to the built-in storage space of the camera head. The images are downloaded to a computer for processing and storage once the rain event is over or the built-in storage space is full. The anemometer data are recorded continuously to the computer. The downloaded sharp, sequential raindrop images are digitally processed using a computer code developed in-house, which outputs accurate information on various raindrop characteristics (e.g., drop diameter, shape, axis ratio, fall velocity, and drop size distribution). The new high-speed imaging system is laboratory tested using high-precision spherical lenses with known diameters and also field tested under real rain events. The results of these tests will also be presented. This new imaging system was developed as part of a National Science Foundation grant (NSF Award # 1144846) to study raindrop characteristics and is expected to be an
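
    The silhouette-processing step, extracting an equivalent diameter and axis ratio from a backlit drop image, can be sketched as follows. This is a simplified stand-in for the authors' in-house code: it assumes a single dark drop against a bright LED background, a plain intensity threshold, and a known pixel scale.

```python
import numpy as np

def drop_metrics(image, pixel_size_mm, threshold=None):
    """Estimate equivalent diameter (mm) and axis ratio of one backlit drop.

    image         : 2-D array, bright background, dark drop silhouette
    pixel_size_mm : physical size of one pixel at the focal plane
    threshold     : grey level below which a pixel counts as 'drop';
                    defaults to the midpoint of the image's grey range
    """
    img = image.astype(float)
    if threshold is None:
        threshold = 0.5 * (img.min() + img.max())
    mask = img < threshold
    area_px = mask.sum()
    if area_px == 0:
        return None
    # Equivalent diameter of a circle with the same silhouette area.
    d_eq_mm = 2.0 * np.sqrt(area_px / np.pi) * pixel_size_mm
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1          # vertical chord (pixels)
    width = cols.max() - cols.min() + 1           # horizontal chord (pixels)
    axis_ratio = height / width                   # < 1 for an oblate falling drop
    return d_eq_mm, axis_ratio

# Synthetic oblate drop: an ellipse 40 px wide and 32 px tall, 0.05 mm/pixel.
y, x = np.mgrid[0:100, 0:100]
silhouette = np.where(((x - 50) / 20.0) ** 2 + ((y - 50) / 16.0) ** 2 <= 1.0, 10, 250)
print(drop_metrics(silhouette, pixel_size_mm=0.05))
```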

  18. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  19. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  20. High Speed data acquisition

    SciTech Connect

    Cooper, P.S.

    1998-02-01

    A general introduction to high-speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high-statistics charmed baryon production and decay experiment now taking data at Fermilab. © 1998 American Institute of Physics.

  1. High speed civil transport

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1992-01-01

    The design requirements of the High Speed Civil Transport (HSCT) are discussed. The following design concerns are presented: (1) environmental impact (emissions and noise); (2) critical components (the high temperature combustor and the lightweight exhaust nozzle); and (3) advanced materials (high temperature ceramic matrix composites (CMC's)/intermetallic matrix composites (IMC's)/metal matrix composites (MMC's)).

  2. High speed door assembly

    DOEpatents

    Shapiro, Carolyn

    1993-01-01

    A high speed door assembly, comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly to close over a 3 foot diameter opening within 50 milliseconds of actuation, to contain hazardous materials and vapors within a test fixture.

  3. High speed door assembly

    DOEpatents

    Shapiro, C.

    1993-04-27

    A high speed door assembly is described, comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly to close over a 3 foot diameter opening within 50 milliseconds of actuation, to contain hazardous materials and vapors within a test fixture.

  4. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127

  5. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras.

    PubMed

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
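
    To make the block-matching step concrete, here is a minimal exhaustive-search block matcher using a sum-of-absolute-differences (SAD) cost. It illustrates only the generic technique named in the abstract, not the authors' compressive-sensing recovery pipeline; block size and search range are arbitrary assumptions.

```python
import numpy as np

def block_match(prev_frame, cur_frame, block=8, search=4):
    """Exhaustive-search block matching with a SAD cost.

    Returns an array of (dy, dx) motion vectors, one per block of the
    current frame, estimated against the previous frame.
    """
    h, w = cur_frame.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    prev = prev_frame.astype(np.int32)
    cur = cur_frame.astype(np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            target = cur[y0:y0 + block, x0:x0 + block]
            best_cost, best_vec = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + block > h or xs + block > w:
                        continue
                    cand = prev[ys:ys + block, xs:xs + block]
                    cost = np.abs(target - cand).sum()
                    if best_cost is None or cost < best_cost:
                        best_cost, best_vec = cost, (dy, dx)
            vectors[by, bx] = best_vec
    return vectors

# Quick check: shift a random frame by (2, 3) pixels and recover the motion.
rng = np.random.default_rng(2)
f0 = rng.integers(0, 256, (32, 32))
f1 = np.roll(f0, shift=(2, 3), axis=(0, 1))
print(block_match(f0, f1, block=8, search=4)[1, 1])   # -> [-2 -3] (motion back to f0)
```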

  6. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
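
    A bare-bones software analogue of the multi-exposure merge described above is shown below, assuming a linear sensor response and three registered 8-bit exposures. The hat-shaped weighting and all names are assumptions; the camera's actual pipeline is implemented in hardware and is not reproduced here.

```python
import numpy as np

def merge_hdr(frames, exposure_times_s):
    """Merge registered low-dynamic-range frames into one HDR radiance map.

    frames           : list of 2-D arrays with values in [0, 255]
    exposure_times_s : matching list of exposure times
    Assumes a linear sensor: each pixel's radiance estimate is a weighted
    average of (value / exposure_time), with a triangular ("hat") weight
    that discounts near-black and near-saturated samples.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times_s):
        z = frame.astype(np.float64)
        w = 1.0 - np.abs(z - 127.5) / 127.5        # 0 at 0 and 255, 1 at mid-grey
        acc += w * (z / t)
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-9)

# Three simulated exposures (1/1000 s, 1/250 s, 1/60 s) of a high-contrast scene.
rng = np.random.default_rng(3)
radiance = rng.uniform(10.0, 100000.0, (480, 640))     # "true" scene radiance
exposures = [1 / 1000, 1 / 250, 1 / 60]
ldr = [np.clip(radiance * t, 0.0, 255.0) for t in exposures]
hdr = merge_hdr(ldr, exposures)
print(np.corrcoef(hdr.ravel(), radiance.ravel())[0, 1])  # close to 1.0
```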

  7. Surgical video recording with a modified GoPro Hero 4 camera

    PubMed Central

    Lin, Lily Koo

    2016-01-01

    Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, and it was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. PMID:26834455

  8. Structured light camera calibration

    NASA Astrophysics Data System (ADS)

    Garbat, P.; Skarbek, W.; Tomaszewski, M.

    2013-03-01

    The structured light camera, which is being designed through the joint effort of the Institute of Radioelectronics and the Institute of Optoelectronics (both large units of the Warsaw University of Technology within the Faculty of Electronics and Information Technology), combines various contemporary hardware and software technologies. In hardware, it integrates a high-speed stripe projector and a stripe camera together with a standard high-definition video camera. In software, it is supported by sophisticated calibration techniques which enable the development of advanced applications such as a real-time 3D viewer of moving objects with a free viewpoint or a 3D modeller for still objects.

  9. A digital TV system for the detection of high speed human motion

    NASA Astrophysics Data System (ADS)

    Fang, R. C.

    1981-08-01

    Two array cameras and a force plate were linked to a PDP-11/34 minicomputer for an on-line recording of high speed human motion. A microprocessor-based interface system was constructed to allow preprocessing and coordinating of the video data before being transferred to the minicomputer. Control programs of the interface system are stored in the disk and loaded into the program storage areas of the microprocessor before the interface system starts its operation. Software programs for collecting and processing video and force data have been written. Experiments on the detection of human jumping have been carried out. Normal gait and amputee gait have also been recorded and analyzed.

  10. Comparison of Kodak Professional Digital Camera System images to conventional film, still video, and freeze-frame images

    NASA Astrophysics Data System (ADS)

    Kent, Richard A.; McGlone, John T.; Zoltowski, Norbert W.

    1991-06-01

    Electronic cameras provide near real time image evaluation with the benefits of digital storage methods for rapid transmission or computer processing and enhancement of images. But how does their image quality compare to that of conventional film? A standard Nikon F-3TM 35 mm SLR camera was transformed into an electro-optical camera by replacing the film back with Kodak's KAF-1400V (or KAF-1300L) megapixel CCD array detector back and a processing accessory. Images taken with these Kodak electronic cameras were compared to those using conventional films and to several still video cameras. Quantitative and qualitative methods were used to compare images from these camera systems. Images captured on conventional analog video systems provide a maximum of 450 - 500 TV lines of resolution depending upon the camera resolution, storage method, and viewing system resolution. The Kodak Professional Digital Camera SystemTM exceeded this resolution and more closely approached that of film.

  11. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    SciTech Connect

    Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

    2006-09-05

    Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed, and successful applications from previous surveys of fish under ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.
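
    The paired-laser sizing mentioned above rests on simple geometry: two parallel lasers a known distance apart project dots onto the fish, so the measured dot spacing in pixels yields a length scale at that range. The helper below is an illustrative calculation with made-up numbers, not code from the study.

```python
def fish_length_mm(laser_separation_mm, dot_spacing_px, fish_length_px):
    """Scale an on-screen fish length using two parallel laser dots.

    laser_separation_mm : known physical spacing of the parallel lasers
    dot_spacing_px      : measured pixel distance between the two dots
    fish_length_px      : measured pixel length of the fish at the same range
    """
    mm_per_px = laser_separation_mm / dot_spacing_px
    return fish_length_px * mm_per_px

# Lasers mounted 100 mm apart appear 62 px apart; the fish spans 280 px.
print(fish_length_mm(100.0, 62.0, 280.0))   # -> ~451.6 mm
```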

  12. High speed imager test station

    DOEpatents

    Yates, George J.; Albright, Kevin L.; Turko, Bojan T.

    1995-01-01

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.

  13. High speed imager test station

    DOEpatents

    Yates, G.J.; Albright, K.L.; Turko, B.T.

    1995-11-14

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.

  14. High speed door assembly

    SciTech Connect

    Shapiro, C.

    1991-12-31

    This invention is comprised of a high speed door assembly, comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly to close over a 3 foot diameter opening within 50 milliseconds of actuation, to contain hazardous materials and vapors within a test fixture.

  15. ARINC 818 adds capabilities for high-speed sensors and systems

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Grunwald, Paul

    2014-06-01

    ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A, and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, due to its high bandwidth and high reliability. The ARINC 818 specification was initially released in 2006 and has recently undergone a major update that enhances its applicability as a high-speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions to the specification include: video switching, stereo and 3-D provisions, color sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates up to 32 Gbps, synchronization signals, options for high-speed coax interfaces, and optical interface details. The additions to the specification are especially appealing for high-bandwidth, multi-sensor systems that have issues with throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber-optic high-speed physical layers and allows for time multiplexing multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.

  16. A Novel Method to Reduce Time Investment When Processing Videos from Camera Trap Studies

    PubMed Central

    Swinnen, Kristijn R. R.; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species but instead were empty or contained other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step in the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs. PMID:24918777
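
    A hedged sketch of the variation-based filtering idea follows: each recording is scored by its average frame-to-frame pixel change, and recordings scoring below a user-chosen threshold would be discarded without viewing. The single mean-absolute-difference metric, the names, and the synthetic clips are assumptions; the paper itself compares two different filter methods.

```python
import numpy as np

def recording_motion_score(frames):
    """Mean absolute frame-to-frame pixel change for one recording.

    frames : ndarray (n_frames, h, w) of grayscale video. Recordings whose
    score falls below a chosen threshold would be discarded unwatched.
    """
    frames = frames.astype(np.float32)
    return float(np.abs(np.diff(frames, axis=0)).mean())

# Synthetic comparison: an 'empty' clip versus one with a large moving blob.
rng = np.random.default_rng(4)
empty = np.full((60, 48, 64), 120.0) + rng.normal(0, 1, (60, 48, 64))
animal = empty.copy()
for t in range(60):                          # dark blob drifting left to right
    animal[t, 20:30, t:t + 8] -= 80.0
print(recording_motion_score(empty), recording_motion_score(animal))
# The clip containing the moving animal scores noticeably higher.
```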

  17. A passive terahertz video camera based on lumped element kinetic inductance detectors

    NASA Astrophysics Data System (ADS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ˜0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  18. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  19. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
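
    A toy version of the hole-location measurement is sketched below: threshold the digitized target image, isolate the dark hole, and report its centroid in physical units using a previously calibrated scale factor. The names, numbers, and the omission of the system's calibration and bias-correction steps are all simplifications.

```python
import numpy as np

def hole_centroid_mm(image, mm_per_pixel, threshold=None):
    """Centroid (x, y) in millimetres of the dark bullet hole in `image`.

    image        : 2-D grayscale array, light target with a dark hole
    mm_per_pixel : scale factor from a prior calibration of the camera
    """
    img = image.astype(float)
    if threshold is None:
        threshold = 0.5 * (img.min() + img.max())
    mask = img < threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean() * mm_per_pixel, ys.mean() * mm_per_pixel

# Synthetic target: dark 6-pixel-radius hole centred at pixel (210, 140).
y, x = np.mgrid[0:480, 0:640]
target = np.where((x - 210) ** 2 + (y - 140) ** 2 <= 36, 20, 230)
print(hole_centroid_mm(target, mm_per_pixel=0.1))   # -> ~(21.0, 14.0) mm
```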

  20. A passive terahertz video camera based on lumped element kinetic inductance detectors.

    PubMed

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)--designed originally for far-infrared astronomy--as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics. PMID:27036756

  1. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230

    SciTech Connect

    Pardini, A.F., Westinghouse Hanford

    1996-07-16

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feedthrough for all electrical and pneumatic utilities needed by the end effector to operate.

  2. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero-gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
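
    The error metric described, the distance between known and digitized grid coordinates, reduces to a few lines, sketched here under assumed array shapes and with the percentage expressed relative to a chosen field width.

```python
import numpy as np

def distortion_errors(known_xy, measured_xy, field_width):
    """Per-point position errors and their percentage of the field width.

    known_xy, measured_xy : (n, 2) arrays of grid-point coordinates in the
                            same units (e.g. centimetres in the pool frame)
    field_width           : reference length used to express errors in percent
    """
    errors = np.linalg.norm(measured_xy - known_xy, axis=1)
    return errors, 100.0 * errors / field_width

# Three grid points digitized with small offsets, in a 100 cm wide view.
known = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 50.0]])
measured = np.array([[0.4, -0.3], [50.9, 0.2], [53.1, 51.5]])
err, pct = distortion_errors(known, measured, field_width=100.0)
print(err.round(2), pct.round(1))   # largest error ~3.4 cm, i.e. ~3.4 %
```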

  3. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    SciTech Connect

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20-inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  4. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
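
    As an illustration of step (iii) only, the sketch below flags an occlusion whenever two detections' bounding boxes overlap and their estimated depths differ by more than a margin. This is a deliberate simplification of the paper's method; the box format, depth convention, and margin are assumptions.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for boxes given as (x1, y1, x2, y2)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def occluded_pairs(boxes, depths, min_depth_gap=0.5):
    """Return (i, j) pairs where object i is judged occluded by object j.

    boxes        : list of (x1, y1, x2, y2) detections in image coordinates
    depths       : matching list of estimated depths (e.g. from calibration),
                   larger value = farther from the camera
    min_depth_gap: depth difference required before calling it an occlusion
    """
    pairs = []
    for i in range(len(boxes)):
        for j in range(len(boxes)):
            if i == j or not boxes_overlap(boxes[i], boxes[j]):
                continue
            if depths[i] - depths[j] > min_depth_gap:   # i is behind j
                pairs.append((i, j))
    return pairs

# Two pedestrians whose boxes overlap; the farther one is flagged as occluded.
boxes = [(100, 50, 160, 230), (140, 60, 210, 250), (400, 80, 450, 220)]
depths = [8.0, 5.5, 9.0]
print(occluded_pairs(boxes, depths))   # -> [(0, 1)]
```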

  5. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
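
    Of the DSP stages listed, white balance is the easiest to illustrate in a few lines. Below is a generic gray-world white balance in floating point, included purely as a reference for what that stage does; the paper's implementation is a fixed-point hardware core and is not reproduced here.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balance: scale each channel so the channel means match.

    rgb : float array (h, w, 3) with values in [0, 1]
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-9)    # per-channel gain
    return np.clip(rgb * gains, 0.0, 1.0)

# A scene with a bluish colour cast gets pulled back toward neutral grey.
rng = np.random.default_rng(5)
scene = rng.uniform(0.2, 0.6, (240, 320, 3)) * np.array([0.8, 0.9, 1.2])
balanced = gray_world_white_balance(scene)
print(scene.reshape(-1, 3).mean(axis=0).round(3),
      balanced.reshape(-1, 3).mean(axis=0).round(3))
```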

  6. High Speed Vortex Flows

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Wilcox, Floyd J., Jr.; Bauer, Steven X. S.; Allen, Jerry M.

    2000-01-01

    A review of the research conducted at the National Aeronautics and Space Administration (NASA), Langley Research Center (LaRC) into high-speed vortex flows during the 1970s, 1980s, and 1990s is presented. The data reviewed are for flat plates, cavities, bodies, missiles, wings, and aircraft. These data are presented and discussed relative to the design of future vehicles. Also presented is a brief historical review of the extensive body of high-speed vortex flow research from the 1940s to the present in order to provide perspective on the NASA LaRC high-speed research results. Data are presented which show the types of vortex structures which occur at supersonic speeds, and the impact of these flow structures on vehicle performance and control is discussed. The data presented show the presence of both small- and large-scale vortex structures for a variety of vehicles, from missiles to transports. For cavities, the data show that very complex multiple vortex structures exist at all combinations of cavity depth-to-length ratio and Mach number. The data for missiles show the existence of very strong interference effects between body and/or fin vortices and the downstream fins. It was shown that these vortex flow interference effects could be both positive and negative. Data are shown which highlight the effect that leading-edge sweep, leading-edge bluntness, wing thickness, location of maximum thickness, and camber have on the aerodynamics of and flow over delta wings. The observed flow fields for delta wings (i.e., separation bubble, classical vortex, vortex with shock, etc.) are discussed in the context of aircraft design. Data have also been shown that indicate aerodynamic performance improvements are available by considering vortex flows as a primary design feature. Finally, a discussion of a design approach for wings which utilize vortex flows for improved aerodynamic performance at supersonic speeds is presented.

  7. High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This computer generated animation depicts a conceptual simulation of the flight of a High Speed Civil Transport (HSCT). As envisioned, the HSCT is a next-generation supersonic (faster than the speed of sound) passenger jet that would fly 300 passengers at more than 1,500 miles per hour -- more than twice the speed of sound. It will cross the Pacific or Atlantic in less than half the time of modern subsonic jets, and at a ticket price less than 20 percent above comparable, slower flights.

  8. High speed flywheel

    DOEpatents

    McGrath, Stephen V.

    1991-01-01

    A flywheel for operation at high speeds utilizes two or more ringlike components arranged in a spaced concentric relationship for rotation about an axis and an expansion device interposed between the components for accommodating radial growth of the components resulting from flywheel operation. The expansion device engages both of the ringlike components, and the structure of the expansion device ensures that it maintains its engagement with the components. In addition to its expansion-accommodating capacity, the expansion device also maintains flywheel stiffness during flywheel operation.

  9. Video camera observation for assessing overland flow patterns during rainfall events

    NASA Astrophysics Data System (ADS)

    Silasari, Rasmiaditya; Oismüller, Markus; Blöschl, Günter

    2015-04-01

    Physically based hydrological models have been widely used in various studies to model overland flow propagation in cases such as flood inundation and dam break flow. The capability of such models to simulate the formation of overland flow by spatial and temporal discretization of the empirical equations makes it possible for hydrologists to trace overland flow generation both spatially and temporally across surface and subsurface domains. As the upscaling methods transforming hydrological process spatial patterns from the small observed scale to the larger catchment scale are still being progressively developed, physically based hydrological models have become a convenient tool to assess these patterns and their behaviors, which are crucial in determining the upscaling process. Related studies in the past have successfully used these models as well as field observation data for model verification. The common observation data used for this verification are overland flow discharge during natural rainfall events and camera observations during synthetic events (staged field experiments), while the use of camera observations during natural events is hardly discussed in publications. This study explores the potential of video camera observations of overland flow generation during natural rainfall events to support the verification of physically based hydrological models and the assessment of overland flow spatial patterns. The study is conducted within a 64 ha catchment located at Petzenkirchen, Lower Austria, known as HOAL (Hydrological Open Air Laboratory). The catchment land cover is dominated by arable land (87%) with small portions (13%) of forest, pasture and paved surfaces. A 600 m stream runs in the southeast of the catchment, flowing southward, and is equipped with flumes and pressure transducers measuring water level at one-minute intervals from various inlets along the stream (i.e. drainages, surface runoffs, springs) to be converted into flow discharge. A

  10. Congestion control of high-speed networks

    NASA Astrophysics Data System (ADS)

    1993-06-01

    We report on four areas of activity in the past six months. These areas include the following: (1) work on the control of integrated video and image traffic, both at the access to a network and within a high-speed network; (2) more general/game theoretic models for flow control in networks; (3) work on fault management for high-speed heterogeneous networks to improve survivability; and (4) work on all-optical (lightwave) networks of the future, designed to take advantage of the enormous bandwidth capability available at optical frequencies.

  11. High speed transient sampler

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A high speed sampler comprises a meandered sample transmission line for transmitting an input signal, a straight strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates along the transmission lines. The sampling gates comprise a four terminal diode bridge having a first strobe resistor connected from a first terminal of the bridge to the positive strobe line, a second strobe resistor coupled from the third terminal of the bridge to the negative strobe line, a tap connected to the second terminal of the bridge and to the sample transmission line, and a sample holding capacitor connected to the fourth terminal of the bridge. The resistance of the first and second strobe resistors is much higher than the signal transmission line impedance in the preferred system. This results in a sampling gate which applies a very small load on the sample transmission line and on the strobe generator. The sample holding capacitor is implemented using a smaller capacitor and a larger capacitor isolated from the smaller capacitor by resistance. The high speed sampler of the present invention is also characterized by other optimizations, including transmission line tap compensation, stepped impedance strobe line, a multi-layer physical layout, and unique strobe generator design. A plurality of banks of such samplers are controlled for concatenated or interleaved sample intervals to achieve long sample lengths or short sample spacing.

  12. High speed transient sampler

    DOEpatents

    McEwan, T.E.

    1995-11-28

    A high speed sampler comprises a meandered sample transmission line for transmitting an input signal, a straight strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates along the transmission lines. The sampling gates comprise a four terminal diode bridge having a first strobe resistor connected from a first terminal of the bridge to the positive strobe line, a second strobe resistor coupled from the third terminal of the bridge to the negative strobe line, a tap connected to the second terminal of the bridge and to the sample transmission line, and a sample holding capacitor connected to the fourth terminal of the bridge. The resistance of the first and second strobe resistors is much higher than the signal transmission line impedance in the preferred system. This results in a sampling gate which applies a very small load on the sample transmission line and on the strobe generator. The sample holding capacitor is implemented using a smaller capacitor and a larger capacitor isolated from the smaller capacitor by resistance. The high speed sampler of the present invention is also characterized by other optimizations, including transmission line tap compensation, stepped impedance strobe line, a multi-layer physical layout, and unique strobe generator design. A plurality of banks of such samplers are controlled for concatenated or interleaved sample intervals to achieve long sample lengths or short sample spacing. 17 figs.

  13. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    Emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyper-spectral cameras opens broad possibilities for its utilization for military and industry purposes. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation that provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator that can offer the best display of the content depending on the performed task, e.g. providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or they can utilize each other's results. The spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to the combined multi-frame and multi-band processing.
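
    A minimal illustration of demosaicking a pixel-level multi-band filter array: each band is sampled on a sparse regular grid and the gaps are filled from the nearest sample of the same band. This nearest-neighbour fill is far cruder than the multi-band super-resolution the abstract describes; the 3x3 tile layout and all names are assumptions.

```python
import numpy as np
from scipy import ndimage

def demosaic_nearest(mosaic, n_bands=9, tile=3):
    """Nearest-neighbour demosaicking of a tile-coded multi-band mosaic.

    mosaic : 2-D array where pixel (y, x) carries band (y % tile)*tile + (x % tile)
    Returns an (h, w, n_bands) cube; each band is filled from its nearest sample.
    """
    h, w = mosaic.shape
    yy, xx = np.mgrid[0:h, 0:w]
    band_of_pixel = (yy % tile) * tile + (xx % tile)
    cube = np.empty((h, w, n_bands), dtype=float)
    for b in range(n_bands):
        sampled = band_of_pixel == b
        # Indices of the nearest sampled pixel for every location in the frame.
        _, (iy, ix) = ndimage.distance_transform_edt(~sampled, return_indices=True)
        cube[:, :, b] = mosaic[iy, ix]
    return cube

# Synthetic 9-band mosaic: band b is a flat image of value 10*b plus noise.
rng = np.random.default_rng(6)
h, w, tile = 90, 120, 3
yy, xx = np.mgrid[0:h, 0:w]
bands_true = np.stack([10.0 * b + rng.normal(0, 0.1, (h, w)) for b in range(9)], axis=-1)
mosaic = bands_true[yy, xx, (yy % tile) * tile + (xx % tile)]
cube = demosaic_nearest(mosaic)
print(np.abs(cube - bands_true).mean())    # small reconstruction error
```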

  14. An explanation for camera perspective bias in voluntariness judgment for video-recorded confession: Suggestion of cognitive frame.

    PubMed

    Park, Kwangbai; Pyo, Jimin

    2012-06-01

    Three experiments were conducted to test the hypothesis that difference in voluntariness judgment for a custodial confession filmed in different camera focuses ("camera perspective bias") could occur because a particular camera focus conveys a suggestion of a particular cognitive frame. In Experiment 1, 146 juror eligible adults in Korea showed a camera perspective bias in voluntariness judgment with a simulated confession filmed with two cameras of different focuses, one on the suspect and the other on the detective. In Experiment 2, the same bias in voluntariness judgment emerged without cameras when the participants were cognitively framed, prior to listening to the audio track of the videos used in Experiment 1, by instructions to make either a voluntariness judgment for a confession or a coerciveness judgment for an interrogation. In Experiment 3, the camera perspective bias in voluntariness judgment disappeared when the participants viewing the video focused on the suspect were initially framed to make coerciveness judgment for the interrogation and the participants viewing the video focused on the detective were initially framed to make voluntariness judgment for the confession. The results in combination indicated that a particular camera focus may convey a suggestion of a particular cognitive frame in which a video-recorded confession/interrogation is initially represented. Some forensic and policy implications were discussed. PMID:22667808

  15. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera.

    PubMed

    Miyamoto, Shimpei

    2016-06-01

    Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon's perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon's perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504

  16. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera

    PubMed Central

    2016-01-01

    Summary: Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon’s perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon’s perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504

  17. First results from newly developed automatic video system MAIA and comparison with older analogue cameras

    NASA Astrophysics Data System (ADS)

    Koten, P.; Páta, P.; Fliegel, K.; Vítek, S.

    2013-09-01

    A new automatic video system for meteor observations, MAIA, was developed in recent years [1]. The goal is to replace the older analogue cameras and provide a platform for continuous year-round observations from two different stations. Here we present the first results obtained during the testing phase as well as the first double-station observations. A comparison with the older analogue cameras is also provided. MAIA (Meteor Automatic Imager and Analyzer) is based on the digital monochrome camera JAI CM-040 and the well-proven image intensifier XX1332 (Figure 1). The camera provides a spatial resolution of 776 x 582 pixels. The maximum frame rate is 61.15 frames per second. A fast Pentax SMS FA 1.4/50mm lens is used as the input element of the optical system. The resulting field of view is about 50º in diameter. The new system was first used in a semiautomatic regime for the observation of the Draconid outburst on 8 October 2011. Both cameras recorded more than 160 meteors. Additional hardware and software were developed in 2012 to enable automatic observation and basic processing of the data. The system usually records video sequences for the whole night. During the daytime it searches the recordings for moving objects, saves them into short sequences, and clears the hard drives to allow additional observations. Initial laboratory measurements [2] and simultaneous observations with the older system show a significant improvement in the obtained data. Table 1 shows a comparison of the basic parameters of both systems. In this paper we present a comparison of the double-station data obtained using both systems.

  18. High speed civil transport

    NASA Technical Reports Server (NTRS)

    Bogardus, Scott; Loper, Brent; Nauman, Chris; Page, Jeff; Parris, Rusty; Steinbach, Greg

    1990-01-01

    The design process of the High Speed Civil Transport (HSCT) combines existing technology with the expectation of future technology to create a Mach 3.0 transport. The HSCT was designed to have a range in excess of 6000 nautical miles and carry up to 300 passengers. This range will allow the HSCT to service the economically expanding Pacific Basin region. Effort was made in the design to enable the aircraft to use conventional airports with standard 12,000 foot runways. With a takeoff thrust of 250,000 pounds, the four supersonic through-flow engines will accelerate the HSCT to a cruise speed of Mach 3.0. The 679,000 pound (at takeoff) HSCT is designed to cruise at an altitude of 70,000 feet, flying above most atmospheric disturbances.

  19. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle from side-view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked over at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object space information in a mathematical sense, which is obtained from the calibration and orientation parameters of the video frame images. This paper presents solutions for the problems of using side-view camera images outlined here. PMID:22399909
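
    As a rough illustration of the second step described above, the sketch below converts pixel displacements of tracked points between two frames into a speed estimate, assuming a known frame interval and a single constant image-to-object scale factor; the method in the record instead derives that scale from the calibration and orientation parameters of the frames, so the scale constant here is purely an assumption.

```python
import numpy as np

def estimate_speed(pts_prev, pts_curr, dt, metres_per_pixel):
    """Estimate vehicle speed from tracked point displacements.

    pts_prev, pts_curr : (N, 2) arrays of point positions (pixels) in two frames.
    dt                 : time between the frames (s).
    metres_per_pixel   : assumed constant image-to-object scale.
    Returns speed in km/h, taken as the median over the tracked points.
    """
    disp_px = np.linalg.norm(pts_curr - pts_prev, axis=1)   # pixel displacements
    speeds = disp_px * metres_per_pixel / dt                # m/s per point
    return float(np.median(speeds) * 3.6)                   # robust average, km/h

# Hypothetical numbers: 25 fps video, 0.02 m per pixel at the vehicle's depth.
prev = np.array([[100.0, 240.0], [130.0, 250.0], [160.0, 245.0]])
curr = prev + np.array([18.0, 0.5])                         # ~18 px shift per frame
print(estimate_speed(prev, curr, dt=1 / 25, metres_per_pixel=0.02))  # ~32 km/h
```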

  20. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are applied is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
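
    A minimal sketch of two such no-reference checks, assuming grayscale frames as numpy arrays: a focus/blur check based on the variance of a Laplacian response, and an exposure check based on the fraction of pixels near the histogram extremes. The thresholds and function names are placeholders, not the paper's actual sub-algorithms or sequencing logic.

```python
import numpy as np
from scipy.ndimage import laplace

def check_frame(gray, blur_thresh=50.0, clip_frac=0.25):
    """Return simple no-reference quality flags for one 8-bit grayscale frame.

    Thresholds are illustrative placeholders.
    """
    sharpness = laplace(gray.astype(float)).var()        # low variance -> likely out of focus
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    frac_dark = hist[:8].sum() / gray.size               # near-black pixels
    frac_bright = hist[-8:].sum() / gray.size             # near-white (saturated) pixels
    return {
        "blurry": sharpness < blur_thresh,
        "underexposed": frac_dark > clip_frac,
        "overexposed": frac_bright > clip_frac,
    }

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
print(check_frame(frame))
```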

  1. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    PubMed

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For many elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If rescue of a fallen elder, who may have fainted, is delayed, more serious injury may result. Traditional security or video surveillance systems require caregivers to monitor a centralized screen continuously, or require the elder to wear sensors to detect falling incidents, which wastes considerable human effort or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored. It then uses a falling-pattern recognition algorithm to determine whether a falling incident has occurred. If so, the system sends short messages to the people who need to be notified. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that falling-detection accuracy reaches at least 90% and that the throughput of a four-camera surveillance system is improved by about 2.1 times. PMID:22154761
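
    The paper's falling-pattern recognition algorithm is not reproduced here, but a common baseline for this kind of detection is to threshold the aspect ratio of the foreground bounding box: a standing person is taller than wide, a fallen person the opposite. The sketch below illustrates only that baseline on a binary foreground mask; the names and threshold are assumptions.

```python
import numpy as np

def looks_fallen(foreground_mask, ratio_thresh=1.3):
    """Heuristic fall check on a binary foreground mask (True where a person is detected).

    Returns True when the bounding box is much wider than it is tall,
    which is typical of a person lying on the floor.
    """
    ys, xs = np.nonzero(foreground_mask)
    if ys.size == 0:
        return False                          # nothing detected in this camera view
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return (width / height) > ratio_thresh

# A synthetic "lying down" blob: 20 px tall, 80 px wide.
mask = np.zeros((240, 320), dtype=bool)
mask[150:170, 100:180] = True
print(looks_fallen(mask))  # True
```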

  2. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration. PMID:22356964

  3. Research on simulation and verification system of satellite remote sensing camera video processor based on dual-FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Liu, Qi; Cui, Xuenan

    2014-09-01

    To satisfy the need for testing the video processors of satellite remote sensing cameras, a design is presented for a simulation and verification system for satellite remote sensing camera video processors based on dual FPGAs. The correctness of the video processor FPGA logic can be verified even without CCD signals or an analog-to-digital converter. Two Xilinx Virtex FPGAs are adopted as the central unit, and the logic for A/D data generation and data processing is developed in VHDL. An RS-232 interface is used to receive commands from the host computer, and different types of data are generated and output depending on the commands. Experimental results show that the simulation and verification system is flexible and works well. The system meets the requirements for testing the video processors of several different types of satellite remote sensing cameras.

  4. High speed packet switching

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This document constitutes the final report prepared by Proteon, Inc. of Westborough, Massachusetts under contract NAS 5-30629 entitled High-Speed Packet Switching (SBIR 87-1, Phase 2) prepared for NASA-Greenbelt, Maryland. The primary goal of this research project is to use the results of the SBIR Phase 1 effort to develop a sound, expandable hardware and software router architecture capable of forwarding 25,000 packets per second through the router and passing 300 megabits per second on the router's internal busses. The work being delivered under this contract received its funding from three different sources: the SNIPE/RIG contract (Contract Number F30602-89-C-0014, CDRL Sequence Number A002), the SBIR contract, and Proteon. The SNIPE/RIG and SBIR contracts had many overlapping requirements, which allowed the research done under SNIPE/RIG to be applied to SBIR. Proteon funded all of the work to develop new router interfaces other than FDDI, in addition to funding the productization of the router itself. The router being delivered under SBIR will be a fully product-quality machine. The work done during this contract produced many significant findings and results, summarized here and explained in detail in later sections of this report. The SNIPE/RIG contract was completed. That contract had many overlapping requirements with the SBIR contract, and resulted in the successful demonstration and delivery of a high speed router. The development that took place during the SNIPE/RIG contract produced findings that included the choice of processor and an understanding of the issues surrounding inter processor communications in a multiprocessor environment. Many significant speed enhancements to the router software were made during that time. Under the SBIR contract (and with help from Proteon-funded work), it was found that a single processor router achieved a throughput significantly higher than originally anticipated. For this reason, a single processor router was

  5. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    PubMed Central

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed in the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network. PMID:23112607

  6. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    NASA Astrophysics Data System (ADS)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  7. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    PubMed

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed in the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network. PMID:23112607

  8. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    SciTech Connect

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  9. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We present a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images using a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g., 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is high (e.g., 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image out of synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. The depth image is then generated from those four simultaneously captured, independent sub-IR images, so the motion blur problem is eliminated. The resulting performance is very useful in applications of 3D cameras to human-machine interaction devices, such as user interfaces for TVs, monitors, or handheld devices, and to motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
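
    For context, the standard way to turn four phase-shifted IR sub-images (0°, 90°, 180°, 270° modulation offsets) into a depth map is the four-bucket phase estimate shown below. This is generic TOF arithmetic rather than the specific processing of the camera in the record; the 20 MHz modulation frequency is taken from the modulator switching speed quoted above, and everything else is an assumption.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(i0, i90, i180, i270, f_mod=20e6):
    """Four-bucket time-of-flight depth from phase-shifted IR images.

    i0..i270 : 2-D arrays captured with 0/90/180/270 degree modulation offsets.
    f_mod    : modulation frequency (Hz).
    """
    phase = np.arctan2(i90 - i270, i0 - i180)        # wrapped phase, [-pi, pi]
    phase = np.mod(phase, 2 * np.pi)                  # map to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)            # unambiguous range = C / (2 * f_mod)

# Synthetic check: a target at 2.0 m.
true_d = 2.0
ph = 4 * np.pi * 20e6 * true_d / C
raw = [np.full((4, 4), np.cos(ph - k * np.pi / 2)) for k in range(4)]
print(tof_depth(*raw)[0, 0])  # ~2.0
```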

  10. High speed civil transport

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This report discusses the design and marketability of a next-generation supersonic transport. Apogee Aeronautics Corporation has designated its High Speed Civil Transport (HSCT) the Supercruiser HS-8. Since the beginning of the Concorde era, the general consensus has been that the proper time for the introduction of a next-generation Supersonic Transport (SST) would depend upon the technical advances made in the areas of propulsion (reduction in emissions) and composite materials (stronger, lighter materials). It is believed by many in the aerospace industry that these aforementioned technical advances lie on the horizon. This being the case, now is the proper time to begin the design phase for the next-generation HSCT. The design objective for the HSCT was to develop an aircraft capable of transporting at least 250 passengers with baggage over a distance of 5500 nmi. The supersonic Mach number is currently unspecified. In addition, the design had to be marketable, cost effective, and certifiable. To achieve this goal, technical advances over current SSTs must be made, especially in the areas of aerodynamics and propulsion. As a result of these required aerodynamic advances, several different supersonic design concepts were reviewed.

  11. High Speed Civil Transport-737 Landings at Wallops Island

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA pilot Michael Wusk makes a 'windowless landing' aboard a NASA 737 research aircraft in flight tests aimed at developing technology for a future supersonic airliner. Cameras in the nose of the airplane relayed images to a computer screen in the aircraft's otherwise 'blind' research cockpit. Computer graphics were overlaid on the image to give cues to the pilot during approaches and landings. Researchers hope that by enhancing the pilot's vision with high-resolution video displays, aircraft designers of the future can do away with the expensive, mechanically drooping nose of early supersonic transports. The tests were conducted in flights at NASA's Wallops Flight Facility, Wallops, Va., from November 1995 through January 1996. The flight deck systems research is part of the joint NASA-US industry High-Speed Research (HSR) Program, aimed at developing technologies for an economically viable, environmentally friendly high-speed civil transport around the turn of the century. The work is directed by the HSR Program Office, located at NASA Langley Research Center, Hampton, Va.

  12. 11. INTERIOR VIEW OF 8-FOOT HIGH SPEED WIND TUNNEL. SAME ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. INTERIOR VIEW OF 8-FOOT HIGH SPEED WIND TUNNEL. SAME CAMERA POSITION AS VA-118-B-10 LOOKING IN THE OPPOSITE DIRECTION. - NASA Langley Research Center, 8-Foot High Speed Wind Tunnel, 641 Thornell Avenue, Hampton, Hampton, VA

  13. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  14. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  15. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after an image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom in wiring on the front side 3). The BSI structure has the further advantage that it presents fewer difficulties in attaching an additional layer, such as a scintillator, on the backside. This paper proposes the development of an ultra-high-speed IR image sensor combining advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in the integration.

  16. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    USGS Publications Warehouse

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatically alerting major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
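
    A minimal sketch of the kind of spike screening described above, assuming the radiance time series is available as a numpy array: flag samples that rise more than two standard deviations above the series mean. The operational GOES processing is of course more elaborate (cloud screening, gap handling, near-real-time delivery), so treat this purely as an illustration of the thresholding step.

```python
import numpy as np

def find_radiance_spikes(radiance, n_sigma=2.0):
    """Return indices of samples more than n_sigma standard deviations above the mean."""
    mu, sigma = radiance.mean(), radiance.std()
    return np.nonzero(radiance > mu + n_sigma * sigma)[0]

# Synthetic 15-min-interval time series with two short effusive bursts superimposed.
rng = np.random.default_rng(0)
series = rng.normal(10.0, 0.1, 500)
series[120] += 8.0
series[340] += 6.0
print(find_radiance_spikes(series))  # -> [120 340]
```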

  17. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    SciTech Connect

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  18. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects for static cameras and background. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.

  19. High-speed 3D imaging using two-wavelength parallel-phase-shift interferometry.

    PubMed

    Safrani, Avner; Abdulhalim, Ibrahim

    2015-10-15

    High-speed three-dimensional imaging based on two-wavelength parallel-phase-shift interferometry is presented. The technique is demonstrated using a high-resolution polarization-based Linnik interferometer operating with three high-speed phase-masked CCD cameras and two quasi-monochromatic modulated light sources. The two light sources allow for phase unwrapping of the single-source wrapped phase, so that relatively high step profiles, with heights as large as 3.7 μm, can be imaged at video rate with ±2 nm accuracy and repeatability. The technique is validated using a certified very large scale integration (VLSI) step standard, followed by a demonstration from the semiconductor industry showing an integrated chip with 2.75 μm high copper micro pillars at different packing densities. PMID:26469586
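
    The two-source trick works because two wrapped single-wavelength phase maps can be combined into one phase map at a much longer synthetic (beat) wavelength, Λ = λ1·λ2/|λ1 − λ2|, which extends the unambiguous height range well beyond a quarter wavelength. A minimal numerical sketch follows, with arbitrary example wavelengths that are assumptions, not the instrument's actual sources.

```python
import numpy as np

def two_wavelength_height(phi1, phi2, lam1, lam2):
    """Surface height from two wrapped phase maps (reflection geometry, phase = 4*pi*h/lam).

    phi1, phi2 : wrapped phases in [-pi, pi) measured at wavelengths lam1 < lam2.
    Returns the height computed on the synthetic wavelength lam_s = lam1*lam2/|lam1-lam2|.
    """
    lam_s = lam1 * lam2 / abs(lam1 - lam2)        # synthetic (beat) wavelength
    dphi = np.mod(phi1 - phi2, 2 * np.pi)          # wrapped phase difference
    return lam_s * dphi / (4 * np.pi)

# Example: 3.0 um step, wavelengths 0.59 um and 0.63 um -> lam_s ~ 9.3 um.
lam1, lam2, h = 0.59, 0.63, 3.0
phi1 = np.mod(4 * np.pi * h / lam1 + np.pi, 2 * np.pi) - np.pi
phi2 = np.mod(4 * np.pi * h / lam2 + np.pi, 2 * np.pi) - np.pi
print(two_wavelength_height(phi1, phi2, lam1, lam2))  # ~3.0 (um)
```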

  20. Characterization of energetic devices for thermal battery applications by high-speed photography

    SciTech Connect

    Dosser, L.R.; Guidotti, R.

    1993-12-31

    High-speed photography at rates of up to 20,000 images per second was used to measure these properties in thermal battery igniters and also the ignition of the thermal battery itself. By synchronizing a copper vapor laser to the high-speed camera, laser-illuminated images recorded details of the performance of a component. Output characteristics of several types of hermetically sealed igniters using a TiHx/KClO4 pyrotechnic blend were measured as a function of the particle size of the pyrotechnic fuel and the closure disc thickness. The igniters were filmed under both ambient (i.e., unconfined) and confined conditions. Recently, the function of the igniter in a cut-away section of a 'mock' thermal battery has been filmed. Partial details of these films are discussed in this paper, and selected examples of the films will be displayed via video tape during the presentation of the paper.

  1. Identifying predators and fates of grassland passerine nests using miniature video cameras

    USGS Publications Warehouse

    Pietz, P.J.; Granfors, D.A.

    2000-01-01

    Nest fates, causes of nest failure, and identities of nest predators are difficult to determine for grassland passerines. We developed a miniature video-camera system for use in grasslands and deployed it at 69 nests of 10 passerine species in North Dakota during 1996-97. Abandonment rates were higher at nests 1 day or night (22-116 hr) at 6 nests, 5 of which were depredated by ground squirrels or mice. For nests without cameras, estimated predation rates were lower for ground nests than aboveground nests (P = 0.055), but did not differ between open and covered nests (P = 0.74). Open and covered nests differed, however, when predation risk (estimated by initial-predation rate) was examined separately for day and night using camera-monitored nests; the frequency of initial predations that occurred during the day was higher for open nests than covered nests (P = 0.015). Thus, vulnerability of some nest types may depend on the relative importance of nocturnal and diurnal predators. Predation risk increased with nestling age from 0 to 8 days (P = 0.07). Up to 15% of fates assigned to camera-monitored nests were wrong when based solely on evidence that would have been available from periodic nest visits. There was no evidence of disturbance at nearly half the depredated nests, including all 5 depredated by large mammals. Overlap in types of sign left by different predator species, and variability of sign within species, suggests that evidence at nests is unreliable for identifying predators of grassland passerines.

  2. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution. PMID:24356347
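
    For intuition, the sketch below simulates only the capture side of pixel-wise coded exposure: each pixel integrates the scene over its own randomly placed sub-interval of the frame time, producing the single coded image from which the paper's dictionary-based method would later reconstruct the video. The reconstruction itself (dictionary learning and sparse recovery) is not shown, and the shutter pattern used here is an assumption.

```python
import numpy as np

def coded_exposure_capture(video, bump_len=3, seed=0):
    """Simulate pixel-wise coded exposure of a (T, H, W) video volume.

    Each pixel is 'on' for a contiguous window of bump_len frames with a random
    start time, and sums the scene over that window.
    Returns the coded image and the per-pixel shutter mask actually used.
    """
    t, h, w = video.shape
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, t - bump_len + 1, size=(h, w))
    frame_idx = np.arange(t)[:, None, None]
    mask = (frame_idx >= starts) & (frame_idx < starts + bump_len)   # (T, H, W) boolean
    coded = (video * mask).sum(axis=0)
    return coded, mask

video = np.random.rand(36, 64, 64)          # 36 frames of a synthetic scene
coded, mask = coded_exposure_capture(video)
print(coded.shape, mask.sum(axis=0).min())  # (64, 64) 3 -> every pixel open for 3 frames
```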

  3. Flow visualization by mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. However, these devices offer not only communication through different channels but also applications for fun and recreation. In this respect, mobile phones now include relatively fast (up to 240 Hz) cameras to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of making use of this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality, and determine bottlenecks by comparing the results obtained with a mobile phone camera to data taken by a high-speed camera suited for scientific experiments.
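
    A simplistic PIV evaluation of the kind such a phone-camera system would need is sketched below: for each interrogation window, the displacement is taken as the peak of the cross-correlation between two consecutive frames. This is a bare-bones, single-pass sketch (no window overlap, sub-pixel fitting, or outlier validation), assuming the frames are already available as grayscale arrays; it is not the article's actual processing chain.

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of window B relative to window A via cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")       # cross-correlation of B with A
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy - win_a.shape[0] // 2, dx - win_a.shape[1] // 2

# Synthetic check: shift a random particle pattern by (3, -2) pixels.
rng = np.random.default_rng(1)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))  # (3, -2)
```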

  4. Optimizing Detection Rate and Characterization of Subtle Paroxysmal Neonatal Abnormal Facial Movements with Multi-Camera Video-Electroencephalogram Recordings.

    PubMed

    Pisani, Francesco; Pavlidis, Elena; Cattani, Luca; Ferrari, Gianluigi; Raheli, Riccardo; Spagnoli, Carlotta

    2016-06-01

    Objectives We retrospectively analyze the diagnostic accuracy for paroxysmal abnormal facial movements, comparing a one-camera versus a multi-camera approach. Background Polygraphic video-electroencephalogram (vEEG) recording is the current gold standard for brain monitoring in high-risk newborns, especially when neonatal seizures are suspected. One camera synchronized with the EEG is commonly used. Methods Since mid-June 2012, we have been using multiple cameras, one of which points toward the newborn's face. We evaluated vEEGs recorded in newborns in the study period between mid-June 2012 and the end of September 2014 and compared, for each recording, the diagnostic accuracies obtained with the one-camera and multi-camera approaches. Results We recorded 147 vEEGs from 87 newborns and found 73 episodes of paroxysmal abnormal facial movements in 18 vEEGs of 11 newborns with the multi-camera approach. With the single-camera approach, only 28.8% of these events were identified (21/73). Ten vEEGs that were positive with the multi-camera approach, containing 52 paroxysmal abnormal facial movements (52/73, 71.2%), would have been considered negative with the single-camera approach. Conclusions The use of one additional facial camera can significantly increase the diagnostic accuracy of vEEGs in the detection of paroxysmal abnormal facial movements in newborns. PMID:27111027

  5. The NASA High-Speed Research Program

    NASA Technical Reports Server (NTRS)

    Beam, Sherilee F.

    1992-01-01

    Since its inception, one of NASA's commitments has been to develop the technology to advance aeronautics. As such, a new High-Speed Research Program was activated to develop the technology for industry to build a High-Speed Civil Transport - a second generation Supersonic Transport (SST). The baseline for this program is the British Concorde, a major technological achievement for its time, but an aircraft which is now both technologically and economically outdated. Therefore, a second generation SST must satisfy environmental concerns and still be economically viable. In order to do this, it must have no significant effect on the ozone layer, meet Federal Air Regulation 36, Stage 3 for community noise, and have no perceptible sonic boom over populated areas. These three concerns are the focus of the research efforts in Phase 1 of the program and are the specific areas covered in the technical video report.

  6. Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera

    NASA Astrophysics Data System (ADS)

    Hahne, Christopher; Aggoun, Amar

    2014-03-01

    A novel, low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed design synthesizes refocusing slices directly from micro images, omitting the commonly used sub-aperture extraction step. To this end, intellectual property cores containing switch-controlled Finite Impulse Response (FIR) filters are developed and deployed on the Xilinx Field Programmable Gate Array (FPGA) XC6SLX45. To enable the hardware design to work economically, the FIR filters are composed of stored-product, upsampling, and interpolation techniques in order to achieve an ideal balance between image resolution, delay time, power consumption, and logic-gate demand. The video output is transmitted via High-Definition Multimedia Interface (HDMI) with a resolution of 720p at a frame rate of 60 fps, conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.
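
    Refocusing from plenoptic data is, conceptually, a shift-and-sum over the angular samples: each view is translated in proportion to its angular offset and the results are averaged. The record's hardware implements this with FIR filters directly on the micro images; the sketch below shows only the conceptual shift-and-sum on an already extracted sub-aperture stack (the very step the paper avoids), so treat it purely as an illustration of the refocusing principle with assumed array shapes.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-sum refocusing over a (U, V, H, W) stack of sub-aperture views.

    alpha sets the synthetic focal plane: each view (u, v) is shifted by
    alpha * (u - U//2, v - V//2) pixels before averaging.
    """
    u_n, v_n, h, w = views.shape
    out = np.zeros((h, w))
    for u in range(u_n):
        for v in range(v_n):
            dy = int(round(alpha * (u - u_n // 2)))
            dx = int(round(alpha * (v - v_n // 2)))
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (u_n * v_n)

views = np.random.rand(5, 5, 64, 64)      # synthetic 5x5 angular stack
slice_near = refocus(views, alpha=1.0)    # one refocusing slice
slice_far = refocus(views, alpha=-1.0)    # another focal plane
print(slice_near.shape)                   # (64, 64)
```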

  7. A two camera video imaging system with application to parafoil angle of attack measurements

    NASA Astrophysics Data System (ADS)

    Meyn, Larry A.; Bennett, Mark S.

    1991-01-01

    This paper describes the development of a two-camera, video imaging system for the determination of three-dimensional spatial coordinates from stereo images. This system successfully measured angle of attack at several span-wise locations for large-scale parafoils tested in the NASA Ames 80- by 120-Foot Wind Tunnel. Measurement uncertainty for angle of attack was less than 0.6 deg. The stereo ranging system was the primary source for angle of attack measurements since inclinometers sewn into the fabric ribs of the parafoils had unknown angle offsets acquired during installation. This paper includes discussions of the basic theory and operation of the stereo ranging system, system measurement uncertainty, experimental set-up, calibration results, and test results. Planned improvements and enhancements to the system are also discussed.
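
    The core of such a stereo ranging system is triangulation: with two calibrated cameras, a point's 3-D position follows from a least-squares intersection of the two back-projected rays. A minimal linear-triangulation sketch using known 3x4 projection matrices is given below; this is generic stereo geometry, not the specific calibration procedure used in the wind-tunnel tests, and the example matrices are assumptions.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen by two cameras.

    P1, P2   : 3x4 camera projection matrices.
    uv1, uv2 : (u, v) pixel coordinates of the point in each image.
    Returns the 3-D point in the world frame.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic check: two cameras 1 m apart, both looking down the z axis.
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.3, -0.1, 5.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))  # ~[0.3, -0.1, 5.0]
```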

  8. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera:

    SciTech Connect

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations. When I-131 was applied at soil depths with lower root concentrations, plant uptake was less. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs.

  9. Autonomous video camera system for monitoring impacts to benthic habitats from demersal fishing gear, including longlines

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Robert; Ewing, Graeme; Lamb, Tim; Welsford, Dirk; Constable, Andrew

    2011-04-01

    Studies of the interactions of demersal fishing gear with the benthic environment are needed in order to manage conservation of benthic habitats. There has been limited direct assessment of these interactions through deployment of cameras on commercial fishing gear especially on demersal longlines. A compact, autonomous deep-sea video system was designed and constructed by the Australian Antarctic Division (AAD) for deployment on commercial fishing gear to observe interactions with benthos in the Southern Ocean finfish fisheries (targeting toothfish, Dissostichus spp). The Benthic Impacts Camera System (BICS) is capable of withstanding depths to 2500 m, has been successfully fitted to both longline and demersal trawl fishing gear, and is suitable for routine deployment by non-experts such as fisheries observers or crew. The system is entirely autonomous, robust, compact, easy to operate, and has minimal effect on the performance of the fishing gear it is attached to. To date, the system has successfully captured footage that demonstrates the interactions between demersal fishing gear and the benthos during routine commercial operations. It provides the first footage demonstrating the nature of the interaction between demersal longlines and benthic habitats in the Southern Ocean, as well as showing potential as a tool for rapidly assessing habitat types and presence of mobile biota such as krill ( Euphausia superba).

  10. Gated high speed optical detector

    NASA Technical Reports Server (NTRS)

    Green, S. I.; Carson, L. M.; Neal, G. W.

    1973-01-01

    The design, fabrication, and test of two gated, high-speed optical detectors for use in high-speed digital laser communication links are discussed. The optical detectors used a dynamic crossed-field photomultiplier and electronics including dc bias and RF drive circuits, automatic remote synchronization circuits, automatic gain control circuits, and threshold detection circuits. The equipment is used to detect binary encoded signals from a mode-locked neodymium laser.

  11. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  12. A 3-D High Speed Photographic Survey For Bomb Dropping In The Wind Tunnel

    NASA Astrophysics Data System (ADS)

    Junren, Chen; Liangyi, Chen; Yuxian, Nie; Wenxing, Chen

    1989-06-01

    High-speed stereophotography can obtain 3-D information about a moving object. This paper deals with a high-speed stereophotographic survey of a bomb dropped in a wind tunnel and the measurement of its displacement, velocity, acceleration, angle of attack and yaw angle. Two high-speed cine cameras are used; their optical axes are perpendicular to each other and lie in a plane perpendicular to the plumb line. The optical axis of one camera (the front camera) is parallel to the aircraft body, and that of the other (the side camera) is perpendicular to it. Before shooting, the object and image distances of the two cameras must be measured by a photographic method. The photographic rate is 304 fps.

  13. Optimal camera exposure for video surveillance systems by predictive control of shutter speed, aperture, and gain

    NASA Astrophysics Data System (ADS)

    Torres, Juan; Menéndez, José Manuel

    2015-02-01

    This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing neither under- nor overexposed images. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators based on the image histogram that define its shape and position. Furthermore, the location of the objects to be inspected is usually unknown in surveillance applications; thus, the whole image is monitored in this approach. To control the camera settings, we defined a parameter function (Ef) that depends linearly on the shutter speed and the electronic gain and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already being used, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested on a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground. During the daytime of seven days, the algorithm was running alternatively together
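
    A minimal sketch of the decision logic described above, treating Ef as the quantity shutter*gain/d^2 that the paper defines and assuming an 8-bit grayscale frame. The exposure-change factor is taken from how far the top of the histogram sits from the saturation level; the names, thresholds, and the 0.8 back-off factor are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def exposure_value(shutter, gain, aperture_d):
    """Parameter function Ef as defined in the record: shutter * gain / d^2."""
    return shutter * gain / aperture_d ** 2

def next_ef(gray, ef_current, ef_default, sat_level=250, low_level=5, clip_frac=0.01):
    """Choose the next Ef for an 8-bit frame following the under/over-exposure rules above."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    over = hist[sat_level:].sum() / gray.size > clip_frac
    under = hist[:low_level].sum() / gray.size > clip_frac
    if over and under:
        return ef_default                      # full dynamic range already in use
    top = np.max(np.nonzero(hist)[0])          # brightest populated bin
    if not over:
        # Push the histogram up toward (but not into) saturation.
        return ef_current * (sat_level - 1) / max(top, 1)
    # Overexposed only: pull the histogram down, staying close to saturation.
    return ef_current * 0.8

frame = (np.random.rand(480, 640) * 120).astype(np.uint8)   # dim synthetic frame
print(next_ef(frame, ef_current=1.0, ef_default=0.5))
```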

  14. High-speed imaging on static tensile test for unidirectional CFRP

    NASA Astrophysics Data System (ADS)

    Kusano, Hideaki; Aoki, Yuichiro; Hirano, Yoshiyasu; Kondo, Yasushi; Nagao, Yosuke

    2008-11-01

    The objective of this study is to clarify the fracture mechanism of unidirectional CFRP (Carbon Fiber Reinforced Plastics) under static tensile loading. The advantages of CFRP are higher specific stiffness and strength than those of metals. The use of CFRP is increasing not only in the aerospace and rapid transit railway industries but also in the sports, leisure and automotive industries. The tensile fracture mechanism of unidirectional CFRP has not been clarified experimentally because its fracture speed is quite high. We selected an intermediate-modulus, high-strength unidirectional CFRP laminate, which is a typical material used in the aerospace field. The fracture process under static tensile loading was captured by a conventional high-speed camera and a new type of high-speed video camera, the HPV-1. It was found that the duration of fracture is 200 microseconds or less, so images taken by a conventional camera do not have sufficient temporal resolution. In contrast, results obtained with the HPV-1 have higher quality, and the fracture process can be clearly observed.

  15. Color video camera capable of 1,000,000 fps with triple ultrahigh-speed image sensors

    NASA Astrophysics Data System (ADS)

    Maruyama, Hirotaka; Ohtake, Hiroshi; Hayashida, Tetsuya; Yamada, Masato; Kitamura, Kazuya; Arai, Toshiki; Tanioka, Kenkichi; Etoh, Takeharu G.; Namiki, Jun; Yoshida, Tetsuo; Maruno, Hiromasa; Kondo, Yasushi; Ozaki, Takao; Kanayama, Shigehiro

    2005-03-01

    We developed an ultrahigh-speed, high-sensitivity, color camera that captures moving images of phenomena too fast to be perceived by the human eye. The camera operates well even under restricted lighting conditions. It incorporates a special CCD device that is capable of ultrahigh-speed shots while retaining its high sensitivity. Its ultrahigh-speed shooting capability is made possible by directly connecting CCD storages, which record video images, to photodiodes of individual pixels. Its large photodiode area together with the low-noise characteristic of the CCD contributes to its high sensitivity. The camera can clearly capture events even under poor light conditions, such as during a baseball game at night. Our camera can record the very moment the bat hits the ball.

  16. Investigating particle phase velocity in a 3D spouted bed by a novel fiber high speed photography method

    NASA Astrophysics Data System (ADS)

    Qian, Long; Lu, Yong; Zhong, Wenqi; Chen, Xi; Ren, Bing; Jin, Baosheng

    2013-07-01

    A novel fiber high-speed photography method has been developed to measure particle phase velocity in a dense gas-solid flow. The measurement system mainly includes a fiber-optic endoscope, a high-speed video camera, a metal halide light source and a powerful computer with large memory. The endoscope, which can be inserted into the reactor, is used to form motion images of particles within the measurement window illuminated by the metal halide lamp. These images are captured by the high-speed video camera and processed through a series of digital image processing algorithms, such as calibration, denoising, enhancement and binarization, in order to improve the image quality. The instantaneous velocity of each particle is then determined by tracking it across consecutive frames. Particle phase velocity is calculated statistically from the distribution of particle velocities in each frame within a time period. This system has been applied to the investigation of particle fluidization characteristics in a 3D spouted bed. The experimental results indicate that the particle fluidization behavior in the region investigated can be roughly classified into three sections by the vertical particle phase velocity. The boundary between the first and second sections is the surface where the particle phase velocity tends to zero, which is in good agreement with results published in other literature.
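
    As a rough sketch of the final two steps (tracking and statistics), the code below matches each particle centroid to its nearest neighbour in the following frame to obtain instantaneous velocities, from which a statistical particle-phase velocity can be taken. Nearest-neighbour matching and the search radius are simplifying assumptions; the record's processing chain also includes calibration, denoising, enhancement, and binarization beforehand.

```python
import numpy as np

def instantaneous_velocities(centroids_a, centroids_b, dt, max_disp=15.0):
    """Nearest-neighbour particle matching between two consecutive frames.

    centroids_a, centroids_b : (N, 2) and (M, 2) particle centroids in pixels.
    dt                       : frame interval (s).
    Returns per-particle velocity vectors (pixels/s) for accepted matches.
    """
    velocities = []
    for p in centroids_a:
        d = np.linalg.norm(centroids_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:                      # reject implausible jumps
            velocities.append((centroids_b[j] - p) / dt)
    return np.array(velocities)

# Synthetic frames: 50 particles moving ~(2, 8) px/frame at 1000 fps.
rng = np.random.default_rng(2)
a = rng.uniform(0, 500, size=(50, 2))
b = a + np.array([2.0, 8.0]) + rng.normal(0, 0.3, size=a.shape)
v = instantaneous_velocities(a, b, dt=1e-3)
print(v.mean(axis=0))   # ~[2000, 8000] px/s; convert to m/s with the calibration scale
```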

  17. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    PubMed Central

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  18. High-Speed Electrochemical Imaging.

    PubMed

    Momotenko, Dmitry; Byers, Joshua C; McKelvey, Kim; Kang, Minkyung; Unwin, Patrick R

    2015-09-22

    The design, development, and application of high-speed scanning electrochemical probe microscopy is reported. The approach allows the acquisition of a series of high-resolution images (typically 1000 pixels μm(-2)) at rates approaching 4 seconds per frame, while collecting up to 8000 image pixels per second, about 1000 times faster than typical imaging speeds used up to now. The focus is on scanning electrochemical cell microscopy (SECCM), but the principles and practicalities are applicable to many electrochemical imaging methods. The versatility of the high-speed scan concept is demonstrated at a variety of substrates, including imaging the electroactivity of a patterned self-assembled monolayer on gold, visualization of chemical reactions occurring at single wall carbon nanotubes, and probing nanoscale electrocatalysts for water splitting. These studies provide movies of spatial variations of electrochemical fluxes as a function of potential and a platform for the further development of high speed scanning with other electrochemical imaging techniques. PMID:26267455

  19. Visualization of high speed liquid jet impaction on a moving surface.

    PubMed

    Guo, Yuchen; Green, Sheldon

    2015-01-01

    Two apparatuses for examining liquid jet impingement on a high-speed moving surface are described: an air cannon device (for examining surface speeds between 0 and 25 m/sec) and a spinning disk device (for examining surface speeds between 15 and 100 m/sec). The air cannon linear traverse is a pneumatic energy-powered system that is designed to accelerate a metal rail surface mounted on top of a wooden projectile. A pressurized cylinder fitted with a solenoid valve rapidly releases pressurized air into the barrel, forcing the projectile down the cannon barrel. The projectile travels beneath a spray nozzle, which impinges a liquid jet onto its metal upper surface, and the projectile then hits a stopping mechanism. A camera records the jet impingement, and a pressure transducer records the spray nozzle backpressure. The spinning disk set-up consists of a steel disk that reaches speeds of 500 to 3,000 rpm via a variable frequency drive (VFD) motor. A spray system similar to that of the air cannon generates a liquid jet that impinges onto the spinning disk, and cameras placed at several optical access points record the jet impingement. Video recordings of jet impingement processes are recorded and examined to determine whether the outcome of impingement is splash, splatter, or deposition. The apparatuses are the first that involve the high-speed impingement of low-Reynolds-number liquid jets on high-speed moving surfaces. In addition to its rail industry applications, the described technique may be used for technical and industrial purposes such as steelmaking and may be relevant to high-speed 3D printing. PMID:25938331

  20. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    NASA Astrophysics Data System (ADS)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables to store the correction data at eight focal length points for the blue and red channels. When focal length data are input from the lens control units, the relevant correction data are interpolated from two of the eight correction data tables. The system then performs geometrical conversion on both channels using this correction data. This paper describes how the correction function successfully reduces the lateral chromatic aberration in real time, to an amount small enough to ensure that the desired image resolution is achieved over the entire range of the lens.
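
    The table-based correction described above can be sketched as follows; the focal-length grid, the table contents and the final warping step are assumptions for illustration, not the actual signal-processing implementation.

      # Sketch: interpolate lateral chromatic aberration correction data between
      # the two stored focal-length tables that bracket the current focal length.
      # The eight focal-length points and the table contents are placeholder values.
      import numpy as np

      focal_points_mm = np.array([9, 12, 16, 20, 25, 32, 40, 45], dtype=float)
      # correction_tables[k] could hold, e.g., per-image-height radial shifts (in
      # pixels) for one colour channel at focal_points_mm[k].
      correction_tables = np.zeros((8, 64))          # placeholder data

      def interpolated_correction(focal_length_mm):
          """Blend the two bracketing tables for the current zoom position."""
          f = np.clip(focal_length_mm, focal_points_mm[0], focal_points_mm[-1])
          k = int(np.clip(np.searchsorted(focal_points_mm, f), 1, len(focal_points_mm) - 1))
          f0, f1 = focal_points_mm[k - 1], focal_points_mm[k]
          w = (f - f0) / (f1 - f0)
          return (1 - w) * correction_tables[k - 1] + w * correction_tables[k]

      red_shift_profile = interpolated_correction(18.0)   # shift vs. image height

    The interpolated profile would then drive the geometrical conversion (resampling) of the red and blue channels so that they register with the green channel.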

  1. SEAL FOR HIGH SPEED CENTRIFUGE

    DOEpatents

    Skarstrom, C.W.

    1957-12-17

    A seal is described for a high speed centrifuge wherein the centrifugal force of rotation acts on the gasket to form a tight seal. The cylindrical rotating bowl of the centrifuge contains a closure member resting on a shoulder in the bowl wall, having a lower surface containing bands of gasket material parallel and adjacent to the cylinder wall. As the centrifuge speed increases, centrifugal force acts on the bands of gasket material, forcing them into sealing contact against the cylinder wall. This arrangement forms a simple and effective seal for high speed centrifuges, replacing more costly methods such as welding a closure in place.

  2. Application of video-cameras for quality control and sampling optimisation of hydrological and erosion measurements in a catchment

    NASA Astrophysics Data System (ADS)

    Lora-Millán, Julio S.; Taguas, Encarnacion V.; Gomez, Jose A.; Perez, Rafael

    2014-05-01

    Long-term soil erosion studies imply substantial effort, particularly when continuous measurements need to be maintained. There are high costs associated with keeping field equipment maintained and with quality control of data collection. Energy supply and/or electronic failures, vandalism and burglary are common causes of gaps in datasets, in many cases reducing their usefulness. In this work, a system of three video-cameras, a recorder and a transmission modem (3G technology) has been set up in a gauging station where rainfall, runoff flow and sediment concentration are monitored. The gauging station is located at the outlet of an olive orchard catchment of 6.4 ha. Rainfall is measured with an automatic raingauge that records intensity at one-minute intervals. The discharge is measured by a critical-depth flume, where the water level is recorded by an ultrasonic sensor. When the water level rises to a predetermined value, the automatic sampler turns on and fills a bottle at intervals set by a program that depends on the antecedent precipitation. A data logger controls the instruments' functions and records the data. The purpose of the video-camera system is to improve the quality of the dataset by (i) visual analysis of the flow conditions in the flume and (ii) optimisation of the sampling programs. The cameras are positioned to record the flow at the approach and at the gorge of the flume. In order to check the values from the ultrasonic sensor, a third camera records the flow level against a measuring tape. The system is activated when the ultrasonic sensor detects a height threshold, equivalent to an electric intensity level. Thus, the video-cameras record an event only when there is enough flow. This simplifies post-processing and reduces the cost of downloading recordings. The preliminary comparison analysis will be presented, as well as the main improvements in the sampling program.

  3. Field-based study of volcanic ash via visible and thermal high-speed imaging of explosive eruptions

    NASA Astrophysics Data System (ADS)

    Tournigand, Pierre-Yves; Taddeucci, Jacopo; Scarlato, Piergiorgio; Gaudin, Damien; Del Bello, Elisabetta

    2015-04-01

    Subaerial explosive volcanic activity ejects a mixture of gas, ash and pyroclasts into the atmosphere. Parameterizing the physical processes responsible for ash injection and plume dynamics is crucial to constrain numerical models and forecasts of potentially hazardous ash dispersal events. In this study we present preliminary results from a new method based on the processing of visible and thermal high-speed video of Strombolian and Vulcanian explosions. High-speed videos were recorded by an Optronis CR600x2 camera (1280x1024 pixel resolution, 500 Hz frame rate) in the visible and by a FLIR SC655 (640x480 pixel resolution, 50 Hz frame rate) in the thermal infrared. Qualitatively, different dynamics of ash injection and dispersal can be identified. The high-speed cameras allow us to observe all the phases of volcanic plume dispersion with very good time resolution. Many features of volcanic plumes have been observed before, but this tool gives our observations better accuracy, allowing us to better define previously observed features and to identify new ones. Quantitatively, a pre-processing step is needed before using the videos; its aim is to isolate the plume from the background, using different types of filters without altering the data, so that automated procedures can be used to track the volcanic plumes. In this study we extract data from these videos (plume height, velocity, temperature, mass, volume, ...) using different software tools. This allows us to define and constrain the main parameters and processes as a function of the observed volcano and explosion type, and also to find correlations between parameters and establish empirical relations. We define the range of values of each parameter and its respective impact on plume dynamics and stability, in order to obtain characteristic fields of values for each case and link them to explosion type and evolution.
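
    As an illustration of the pre-processing step mentioned above (isolating the plume from the background before automated tracking), the following Python sketch uses simple background subtraction and thresholding; the filter size and threshold are assumptions, not the authors' actual procedure.

      # Sketch: isolate a plume from the background in a thermal frame by
      # background subtraction, median filtering and thresholding, then report
      # the plume-top row. Filter size and threshold are assumed values.
      import numpy as np
      from scipy import ndimage

      def plume_mask(frame, background, threshold=8.0):
          """Boolean mask of plume pixels (frame and background in the same units)."""
          diff = ndimage.median_filter(frame.astype(float) - background.astype(float), size=5)
          mask = diff > threshold
          labels, n = ndimage.label(mask)            # connected components
          if n == 0:
              return mask
          sizes = ndimage.sum(mask, labels, range(1, n + 1))
          return labels == (np.argmax(sizes) + 1)    # keep the largest component

      def plume_top_row(mask):
          """Row index of the highest plume pixel (row 0 is the top of the image)."""
          rows = np.where(mask.any(axis=1))[0]
          return int(rows.min()) if rows.size else None

    Tracking the plume-top row from frame to frame, together with the known frame rate and pixel scale, yields rise-height and velocity time series of the kind listed above.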

  4. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    SciTech Connect

    Crawford, E.A.

    1992-10-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four-frame device, similar in design to those discussed in an earlier paper (E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)), as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera is an improvement over earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system, so that an instant visual record of the shot was available to the machine operator as well as facilitating quantitative interpretation of intensity information recorded in the images. The camera was installed in the end region of the LSX, on axis, approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with "particle dumps" at this axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed.

  5. Application Of High Speed Photography In Science And Technology

    NASA Astrophysics Data System (ADS)

    Wu Ji-Zong, Wu; Yu-Ju, Lin

    1983-03-01

    The service work in high-speed photography carried out by the Department of Precision Instruments, Tianjin University is described in this paper. A compensation-type high-speed camera was used in this work. The photographic methods adopted and the results achieved in studies in several technical fields are illustrated in detail, such as the flow velocity field on the overflow surface of a high dam, the combustion process of an internal combustion engine, metal cutting, electrical arc welding, the piling of steel tube piles for supporting marine platforms, and the characteristics of motion of a wristwatch escapement mechanism. As an extension of the human visual organs, and by increasing our ability to observe and study high-speed processes, high-speed photography plays a very important role. In order to promote the application and development of high-speed photography, we have carried out consultative and service work inside and outside Tianjin University. The Pentazet 35 compensation-type high-speed camera, made in East Germany, was used to record high-speed events in various kinds of technical investigations, and the necessary results have been obtained. 1. Measurement of flow velocity on the overflow surface of a high dam. In the design of a key water control project with a high head, it is extremely necessary to determine various characteristics of the flow velocity field on the overflow surface of the high dam. Since the water flow on the surface of a high overflow dam has a large flow velocity and a shallow water depth, it is difficult to use conventional current meters such as a Pitot tube or a miniature current meter, or electrical measurement of non-electrical quantities, to study this problem. Adopting the high-speed photographic method to study by analogy the characteristics of the flow velocity field on the overflow surface of a high dam is a new kind of measuring method. People

  6. High-speed correspondence for object recognition and tracking

    NASA Astrophysics Data System (ADS)

    Ariyawansa, Dambakumbure D.; Clarke, Timothy A.

    1997-07-01

    Real-time measurement using a multi-camera 3D measuring system requires three major components to operate at high speed: image data processing, correspondence, and least squares estimation. This paper is based upon a system developed at City University which uses high-speed solutions for the first and last elements, and describes recent work to provide a high-speed solution to the correspondence problem. Correspondence has traditionally been solved in photogrammetry by using human stereo fusion of two views of an object, providing an immediate solution. Computer vision researchers and photogrammetrists have applied image processing techniques and computers to the same configuration and have developed numerous matching algorithms with considerable success. Where research is still required, and the published work is not so plentiful, is in the area of multi-camera correspondence. The most commonly used methods utilize the epipolar geometry to establish the correspondences. While this method is adequate for some simple situations, reliable and efficient extensions to more than just a few cameras are required. In this paper, the early stages of research into a reliable and efficient multi-camera correspondence method for high-speed measurement tasks are reported.
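
    Since the abstract names epipolar geometry as the standard correspondence cue, a minimal numeric sketch of the epipolar test is given below: a candidate match in a second camera is accepted only if it lies close to the epipolar line induced by the fundamental matrix. The matrix F and the pixel tolerance are assumed to be known from a prior calibration; this is not the system described in the paper.

      # Sketch: epipolar consistency test for a candidate correspondence between
      # two views. F (fundamental matrix) and the tolerance are assumed inputs.
      import numpy as np

      def epipolar_distance(F, x1, x2):
          """Distance (pixels) of point x2 in image 2 from the epipolar line of x1."""
          p1 = np.array([x1[0], x1[1], 1.0])
          p2 = np.array([x2[0], x2[1], 1.0])
          line = F @ p1                 # epipolar line in image 2: ax + by + c = 0
          return abs(p2 @ line) / np.hypot(line[0], line[1])

      def consistent(F, x1, x2, tol_px=1.5):
          """Accept a candidate match only if it satisfies the epipolar constraint."""
          return epipolar_distance(F, x1, x2) < tol_px

    For more than two cameras, a candidate correspondence would have to pass this test pairwise in every view in which the target appears, which is where the reliability and efficiency questions raised above arise.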

  7. Faster than "g", Revisited with High-Speed Imaging

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2012-01-01

    The introduction of modern high-speed cameras in physics teaching provides a tool not only for easy visualization, but also for quantitative analysis of many simple though fast occurring phenomena. As an example, we present a very well-known demonstration experiment--sometimes also discussed in the context of falling chimneys--which is commonly…

  8. High speed photography and photonics applications: An underutilized technology

    SciTech Connect

    Paisley, D.L.

    1996-10-01

    Snapshot: Paisley describes the development of high-speed photography including the role of streak cameras, fiber optics, and lasers. Progress in this field has created a powerful tool for viewing such ultrafast processes as hypersonic events and ballistics. © 1996 Optical Society of America.

  9. Analysis of javelin throwing by high-speed photography

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yoshitaka; Matsuoka, Rutsu; Ishida, Yoshihisa; Seki, Kazuichi

    1999-06-01

    A xenon multiple-exposure light source device was manufactured to record the trajectory of a flying javelin, and a wind tunnel experiment was performed with several javelin models to analyze the flying characteristics of the javelin. Furthermore, the javelin-throwing form of athletes was recorded with high-speed cameras in order to characterize each athlete's form.

  10. High-speed thermo-microscope for imaging thermal desorption phenomena

    NASA Astrophysics Data System (ADS)

    Staymates, Matthew; Gillen, Greg

    2012-07-01

    In this work, we describe a thermo-microscope imaging system that can be used to visualize atmospheric pressure thermal desorption phenomena at high heating rates and frame rates. This versatile and portable instrument is useful for studying events during rapid heating of organic particles on the microscopic scale. The system consists of a zoom lens coupled to a high-speed video camera that is focused on the surface of an aluminum nitride heating element. We leverage high-speed videography with oblique incidence microscopy along with forward and back-scattered illumination to capture vivid images of thermal desorption events during rapid heating of chemical compounds. In a typical experiment, particles of the material of interest are rapidly heated beyond their boiling point while the camera captures images at several thousand frames/s. A data acquisition system, along with an embedded thermocouple and infrared pyrometer are used to measure the temperature of the heater surface. We demonstrate that, while a typical thermocouple lacks the response time to accurately measure temperature ramps that approach 150 °C/s, it is possible to calibrate the system by using a combination of infrared pyrometry, melting point standards, and a thermocouple. Several examples of high explosives undergoing rapid thermal desorption are also presented.

  11. On the use of Video Camera Systems in the Detection of Kuiper Belt Objects by Stellar Occultations

    NASA Astrophysics Data System (ADS)

    Subasinghe, Dilini

    2012-10-01

    Due to the distance between us and the Kuiper Belt, direct detection of Kuiper Belt Objects (KBOs) is not currently possible for objects less than 10 km in diameter. Indirect methods such as stellar occultations must be employed to remotely probe these bodies. The size and shape of a body, as well as its atmospheric properties and ring system information (if any), can be collected through observations of stellar occultations. This method has previously been used with some success - Roques et al. (2006) detected 3 Trans-Neptunian objects; Schlichting et al. (2009) detected a single object in archival data. However, previous assessments of KBO occultation detection rates have been calculated only for telescopes - we extend this method to video camera systems. Building on Roques & Moncuquet (2000), we present a derivation that can be applied to any video camera system, taking into account camera specifications and diffraction effects. This allows for a determination of the number of observable KBO occultations per night. Example calculations are presented for some of the automated meteor camera systems currently in use at the University of Western Ontario. The results of this project will allow us to refine and improve our own camera system, as well as allow others to enhance their systems for KBO detection. Roques, F., Doressoundiram, A., Dhillon, V., Marsh, T., Bickerton, S., Kavelaars, J. J., Moncuquet, M., Auvergne, M., Belskaya, I., Chevreton, M., Colas, F., Fernandez, A., Fitzsimmons, A., Lecacheux, J., Mousis, O., Pau, S., Peixinho, N., & Tozzi, G. P. (2006). The Astronomical Journal, 132(2), 819-822. Roques, F., & Moncuquet, M. (2000). Icarus, 147(2), 530-544. Schlichting, H. E., Ofek, E. O., Wenz, M., Sari, R., Gal-Yam, A., Livio, M., Nelan, E., & Zucker, S. (2009). Nature, 462(7275), 895-897.
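
    For orientation only, a schematic event-rate estimate of the kind such derivations build on (not the paper's own result) can be written as follows, where Sigma is the sky-plane surface density of KBOs above the detection limit, w_eff the effective occultation width, v_perp the relative sky-plane velocity, and N_* the number of stars monitored:

      % Schematic occultation-rate estimate (assumed form, for illustration only)
      \frac{dN_{\mathrm{events}}}{dt} \approx N_{*}\,\Sigma\,(2\,w_{\mathrm{eff}})\,v_{\perp},
      \qquad w_{\mathrm{eff}} \sim \max\left(a_{\mathrm{KBO}},\, F\right),
      \qquad F = \sqrt{\lambda D / 2},

    with F the Fresnel scale at KBO distance D and observing wavelength lambda. Camera specifications enter mainly through the limiting magnitude (which sets N_* and the smallest detectable flux drop) and the frame rate, which must resolve occultation durations of order 2 w_eff / v_perp, typically a fraction of a second.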

  12. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
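
    A compact sketch of the kind of frame-to-frame pose recovery described above (feature tracking plus two-view epipolar geometry) is shown below using OpenCV primitives; the intrinsic matrix K and the tracking settings are assumptions, and this is not the authors' implementation.

      # Sketch: recover the relative rotation/translation between two endoscope
      # frames from tracked feature points (monocular, so translation is known
      # only up to scale). K is an assumed intrinsic matrix.
      import cv2
      import numpy as np

      K = np.array([[700.0, 0.0, 320.0],
                    [0.0, 700.0, 240.0],
                    [0.0, 0.0, 1.0]])      # assumed endoscope intrinsics

      def relative_pose(prev_gray, curr_gray):
          """Track corners between frames and recover (R, t) up to scale."""
          pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                             qualityLevel=0.01, minDistance=7)
          pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                         pts_prev, None)
          good = status.ravel() == 1
          p1 = pts_prev[good].reshape(-1, 2)
          p2 = pts_curr[good].reshape(-1, 2)
          E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                            prob=0.999, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
          return R, t    # unit-norm t; absolute scale needs external information

    Chaining such relative poses gives a camera path, and triangulating the tracked points against that path recovers the surrounding structure, which is the second half of the procedure the abstract outlines.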

  13. Foraging at the edge of the world: low-altitude, high-speed manoeuvering in barn swallows.

    PubMed

    Warrick, Douglas R; Hedrick, Tyson L; Biewener, Andrew A; Crandell, Kristen E; Tobalske, Bret W

    2016-09-26

    While prior studies of swallow manoeuvering have focused on slow-speed flight and obstacle avoidance in still air, swallows survive by foraging at high speeds in windy environments. Recent advances in field-portable, high-speed video systems, coupled with precise anemometry, permit measures of high-speed aerial performance of birds in a natural state. We undertook the present study to test: (i) the manner in which barn swallows (Hirundo rustica) may exploit wind dynamics and ground effect while foraging and (ii) the relative importance of flapping versus gliding for accomplishing high-speed manoeuvers. Using multi-camera videography synchronized with wind-velocity measurements, we tracked coursing manoeuvers in pursuit of prey. Wind speed averaged 1.3-2.0 m s(-1) across the atmospheric boundary layer, exhibiting a shear gradient greater than expected, with instantaneous speeds of 0.02-6.1 m s(-1) While barn swallows tended to flap throughout turns, they exhibited reduced wingbeat frequency, relying on glides and partial bounds during maximal manoeuvers. Further, the birds capitalized on the near-earth wind speed gradient to gain kinetic and potential energy during both flapping and gliding turns; providing evidence that such behaviour is not limited to large, fixed-wing soaring seabirds and that exploitation of wind gradients by small aerial insectivores may be a significant aspect of their aeroecology.This article is part of the themed issue 'Moving in a moving medium: new perspectives on flight'. PMID:27528781

  14. Multiple single-point imaging (mSPI) as a tool for capturing and characterizing MR signals and repetitive signal disturbances with high temporal resolution: the MRI scanner as a high-speed camera.

    PubMed

    Bakker, Chris J G; van Gorp, Jetse S; Verwoerd, Jan L; Westra, Albert H; Bouwman, Job G; Zijlstra, Frank; Seevinck, Peter R

    2013-09-01

    In this paper we aim to lay down and demonstrate the use of multiple single-point imaging (mSPI) as a tool for capturing and characterizing steady-state MR signals, and repetitive disturbances thereof, with high temporal resolution. To achieve this goal, various 2D mSPI sequences were derived from the nearest standard 3D imaging sequences by (i) replacing the excitation of a 3D slab by the excitation of a 2D slice orthogonal to the read axis, (ii) setting the readout gradient to zero, and (iii) leaving out the inverse Fourier transform in the read direction. The mSPI sequences thus created, albeit slow with regard to the spatial encoding part, were shown to result in a series of densely spaced 2D single-point images in the time domain, enabling monitoring of the evolution of the magnetization with high temporal resolution and without interference from any encoding gradients. The high-speed capabilities of mSPI were demonstrated by capturing and characterizing the free induction decays and spin echoes of substances with long T2s (>30 ms) and long and short T2*s (4 to >30 ms), and by monitoring the perturbation of the transverse magnetization by, respectively, a titanium cylinder, representing a static disturbance; a pulsed magnetic field gradient, representing a stimulus inherent to a conventional MRI experiment; and a pulsed electric current, representing an external stimulus. The results of the study indicate the potential of mSPI for assessing the evolution of the magnetization and, when properly synchronized with the acquisition, repeatable disturbances thereof, with a temporal resolution that is ultimately limited by the bandwidth of the receiver, but in practice governed by the SNR of the experiment and the magnitude of the disturbance. Potential applications of mSPI can be envisaged in research areas that are concerned with MR signal behavior, MR system performance and MR evaluation of magnetically evoked responses. PMID:23759651
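
    The reconstruction implied by steps (ii) and (iii) can be sketched as follows: with no readout gradient, the "read" axis of the raw data is simply time, and each time sample becomes a 2D single-point image after an inverse Fourier transform over the two phase-encoding dimensions only. The array layout is an assumption for illustration.

      # Sketch: mSPI-style reconstruction. kspace is assumed to have shape
      # (n_pe1, n_pe2, n_time): two phase-encode axes plus a time axis that
      # replaces the usual read axis.
      import numpy as np

      def reconstruct_mspi(kspace):
          """Inverse FFT over the phase-encode axes only; the time axis is untouched."""
          images = np.fft.fftshift(
              np.fft.ifft2(np.fft.ifftshift(kspace, axes=(0, 1)), axes=(0, 1)),
              axes=(0, 1))
          return np.abs(images)      # (n_pe1, n_pe2, n_time) magnitude time series

    The signal evolution at a voxel (r, c) is then images[r, c, :], sampled at the receiver dwell time (hence the bandwidth-limited temporal resolution noted above) and free of any read-gradient encoding.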

  15. Flexible high-speed CODEC

    NASA Technical Reports Server (NTRS)

    Segallis, Greg P.; Wernlund, Jim V.; Corry, Glen

    1993-01-01

    This report is prepared by Harris Government Communication Systems Division for NASA Lewis Research Center under contract NAS3-25087. It is written in accordance with SOW section 4.0 (d) as detailed in section 2.6. The purpose of this document is to provide a summary of the program, performance results and analysis, and a technical assessment. The purpose of this program was to develop a flexible, high-speed CODEC that provides substantial coding gain while maintaining bandwidth efficiency for use in both continuous and bursted data environments for a variety of applications.

  16. High speed quantitative digital microscopy

    NASA Technical Reports Server (NTRS)

    Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.

    1984-01-01

    Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.

  17. High-Speed TCP Testing

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Gassman, Holly; Beering, Dave R.; Welch, Arun; Hoder, Douglas J.; Ivancic, William D.

    1999-01-01

    Transmission Control Protocol (TCP) is the underlying protocol used within the Internet for reliable information transfer. As such, there is great interest in having all implementations of TCP interoperate efficiently. This is particularly important for links exhibiting long bandwidth-delay products. The tools exist to perform TCP analysis at low rates and low delays. However, for extremely high-rate and long-delay links such as 622 Mbps over geosynchronous satellites, new tools and testing techniques are required. This paper describes the tools and techniques used to analyze and debug various TCP implementations over high-speed, long-delay links.
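
    The core scaling issue referred to above is the bandwidth-delay product: a TCP sender must keep the full product of link rate and round-trip time in flight to fill the pipe. A quick illustrative calculation for a 622 Mbps geosynchronous-satellite path, with an assumed round-trip time, is:

      # Sketch: required TCP window for a long bandwidth-delay-product link.
      # The ~550 ms GEO round-trip time is an assumed, illustrative figure.
      link_rate_bps = 622e6
      rtt_s = 0.55
      bdp_bytes = link_rate_bps * rtt_s / 8
      print(f"bandwidth-delay product ~ {bdp_bytes / 1e6:.1f} MB")   # ~42.8 MB

    Tens of megabytes of unacknowledged data far exceed the 64 KB limit of the classic TCP window, so testing at these rates exercises window scaling and related extensions, which is why specialized tools and techniques are needed.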

  18. A high speed sequential decoder

    NASA Technical Reports Server (NTRS)

    Lum, H., Jr.

    1972-01-01

    The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input Eb/N0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.

  19. Superplane! High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The High Speed Civil Transport (HSCT). This light-hearted promotional piece explains what the HSCT 'Superplane' is and what advantages it will have over current aircraft. As envisioned, the HSCT is a next-generation supersonic (faster than the speed of sound) passenger jet that would fly 300 passengers at more than 1,500 miles per hour -- more than twice the speed of sound. It would cross the Pacific or Atlantic in less than half the time of modern subsonic jets, at a ticket price less than 20 percent above that of comparable, slower flights.

  20. The Fastest Flights in Nature: High-Speed Spore Discharge Mechanisms among Fungi

    PubMed Central

    Yafetto, Levi; Carroll, Loran; Cui, Yunluan; Davis, Diana J.; Fischer, Mark W. F.; Henterly, Andrew C.; Kessler, Jordan D.; Kilroy, Hayley A.; Shidler, Jacob B.; Stolze-Rybczynski, Jessica L.; Sugawara, Zachary; Money, Nicholas P.

    2008-01-01

    Background A variety of spore discharge processes have evolved among the fungi. Those with the longest ranges are powered by hydrostatic pressure and include “squirt guns” that are most common in the Ascomycota and Zygomycota. In these fungi, fluid-filled stalks that support single spores or spore-filled sporangia, or cells called asci that contain multiple spores, are pressurized by osmosis. Because spores are discharged at such high speeds, most of the information on launch processes from previous studies has been inferred from mathematical models and is subject to a number of errors. Methodology/Principal Findings In this study, we have used ultra-high-speed video cameras running at maximum frame rates of 250,000 fps to analyze the entire launch process in four species of fungi that grow on the dung of herbivores. For the first time we have direct measurements of launch speeds and empirical estimates of acceleration in these fungi. Launch speeds ranged from 2 to 25 m s−1 and corresponding accelerations of 20,000 to 180,000 g propelled spores over distances of up to 2.5 meters. In addition, quantitative spectroscopic methods were used to identify the organic and inorganic osmolytes responsible for generating the turgor pressures that drive spore discharge. Conclusions/Significance The new video data allowed us to test different models for the effect of viscous drag and identify errors in the previous approaches to modeling spore motion. The spectroscopic data show that high speed spore discharge mechanisms in fungi are powered by the same levels of turgor pressure that are characteristic of fungal hyphae and do not require any special mechanisms of osmolyte accumulation. PMID:18797504
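
    The direct measurements mentioned above amount to differentiating the tracked spore position over the known inter-frame interval. A minimal sketch with a synthetic track at 250,000 fps (the track values are invented for illustration) is:

      # Sketch: launch speed and acceleration from a high-speed spore track.
      # Positions are metres per frame at 250,000 fps; the track is synthetic.
      import numpy as np

      fps = 250_000.0
      dt = 1.0 / fps
      positions_m = np.array([0.0e-6, 4.0e-6, 20.0e-6, 60.0e-6, 140.0e-6])

      velocity = np.gradient(positions_m, dt)      # m/s, frame by frame
      acceleration = np.gradient(velocity, dt)     # m/s^2

      peak_g = acceleration.max() / 9.81
      print(f"launch speed ~ {velocity.max():.1f} m/s, peak ~ {peak_g:.0f} g")

    With this synthetic track the script reports a launch speed of about 20 m/s and a peak acceleration of roughly 1.7e5 g, i.e. the same order as the values measured in the study.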

  1. Magneto-optical system for high speed real time imaging

    NASA Astrophysics Data System (ADS)

    Baziljevich, M.; Barness, D.; Sinvani, M.; Perel, E.; Shaulov, A.; Yeshurun, Y.

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30,000 frames per second have been demonstrated.

  2. High-speed imaging and image processing in voice disorders

    NASA Astrophysics Data System (ADS)

    Tigges, Monika; Wittenberg, Thomas; Rosanowski, Frank; Eysholdt, Ulrich

    1996-12-01

    A digital high-speed camera system for the endoscopic examination of the larynx delivers recording speeds of up to 10,000 frames/s. Recordings of up to 1 s duration can be stored and used for further evaluation. The maximum resolution is 128 x 128 pixels. The acoustic and electroglottographic signals are recorded simultaneously. An image processing program especially developed for this purpose renders time-way waveforms (high-speed glottograms) at several locations on the vocal cords. All of the known objective voice parameters can be derived from these graphs. Results of examinations in normal subjects and patients are presented.

  3. Magneto-optical system for high speed real time imaging.

    PubMed

    Baziljevich, M; Barness, D; Sinvani, M; Perel, E; Shaulov, A; Yeshurun, Y

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30,000 frames per second have been demonstrated. PMID:22938303

  4. Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments

    ERIC Educational Resources Information Center

    Schultz, Patrick L.; Quinn, Andrew S.

    2014-01-01

    In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…

  5. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    ERIC Educational Resources Information Center

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  6. Remote Transmission at High Speed

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Omni and NASA Test Operations at Stennis entered a Dual-Use Agreement to develop the FOTR-125, a 125 megabit-per-second fiber-optic transceiver that allows accurate digital recordings over a great distance. The transceiver's fiber-optic link can be as long as 25 kilometers, much longer than the standard coaxial link, which can be no longer than 50 meters. The FOTR-125 utilizes laser diode transmitter modules and integrated receivers for the optical interface. Two transmitters and two receivers are employed at each end of the link, with automatic or manual switchover to maximize the reliability of the communications link. NASA uses the transceiver in the Stennis High-Speed Data Acquisition System (HSDAS). The HSDAS consists of several identical systems installed on the Center's test stands to process all high-speed data related to its propulsion test programs. These transceivers allow the recorder and HSDAS controls to be located in the Test Control Center, a remote location, while the digitizer is located on the test stand.

  7. High speed hybrid active system

    NASA Astrophysics Data System (ADS)

    Gonzalez, Ignacio F.; Chang, Fu-Kuo; Qing, Peter X.; Kumar, Amrita; Zhang, David

    2005-05-01

    A novel piezoelectric/fiber-optic system has been developed for long-term health monitoring of aerospace vehicles and structures. The hybrid diagnostic system uses piezoelectric actuators to input a controlled excitation to the structure and fiber-optic sensors to capture the corresponding structural response. The aim of the system is to detect changes in structures such as those found in aerospace applications (damage, cracks, aging, etc.). The system uses fiber Bragg gratings, which may be either bonded to the surface of the material or embedded within it, to detect the linear strain component produced by the excitation waves generated by an arbitrary waveform generator. Interrogation of the Bragg gratings is carried out using a high-speed fiber grating demodulation unit and a high-speed data acquisition card to provide actuation input. With data collection and information processing, the system is able to determine the condition of the structure. The demands on a system suitable for detecting ultrasonic acoustic waves are different from those on the more common strain and temperature systems. On the one hand, the frequency is much higher, with typical ultrasonic frequencies used in non-destructive testing ranging from 100 kHz up to several MHz. On the other hand, the related strain levels are much lower, normally in the μstrain range. Fiber-optic solutions for this problem do exist and are particularly attractive for ultrasonic sensing, as the sensors offer broadband detection capability.

  8. High-speed phosphor thermometry.

    PubMed

    Fuhrmann, N; Baum, E; Brübach, J; Dreizler, A

    2011-10-01

    Phosphor thermometry is a semi-invasive surface temperature measurement technique utilising the luminescence properties of doped ceramic materials. Typically, these phosphor materials are coated onto the object of interest and are excited by a short UV laser pulse. Up to now, primarily Q-switched laser systems with repetition rates of 10 Hz have been employed for excitation. Accordingly, this diagnostic tool was not applicable for resolving correlated temperature transients at time scales shorter than 100 ms. This contribution reports on the first realisation of a high-speed phosphor thermometry system employing a highly repetitive laser in the kHz regime and a fast-decaying phosphor. A suitable material was characterised regarding its temperature-lifetime characteristic and its measurement precision. Additionally, the influence of laser power on the phosphor coating was investigated in terms of heating effects. A demonstration of this high-speed technique has been conducted inside the thermally highly transient system of an optically accessible internal combustion engine. Temperatures have been measured at a repetition rate of 6 kHz, corresponding to one sample per crank angle degree at 1000 rpm. PMID:22047319
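
    The quantity underlying the technique is the luminescence decay time after each excitation pulse, which is mapped to surface temperature through a calibration curve. A minimal sketch of fitting a single-exponential decay to one recorded waveform is given below; the decay model, noise level and calibration table are assumptions, not the authors' evaluation scheme.

      # Sketch: extract a phosphor decay time from one recorded pulse by fitting
      # I(t) = I0 * exp(-t/tau) + offset, then look up temperature in an assumed
      # calibration table. All numbers are synthetic.
      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, i0, tau, offset):
          return i0 * np.exp(-t / tau) + offset

      t = np.linspace(0.0, 200e-6, 400)                    # 200-microsecond record
      signal = decay(t, 1.0, 30e-6, 0.02)
      signal += np.random.normal(0.0, 0.01, t.size)        # synthetic noise

      (i0, tau, offset), _ = curve_fit(decay, t, signal, p0=(1.0, 50e-6, 0.0))

      tau_cal = np.array([80e-6, 40e-6, 20e-6, 10e-6, 5e-6])     # assumed calibration
      temp_cal = np.array([300.0, 400.0, 500.0, 600.0, 700.0])
      temperature = np.interp(tau, tau_cal[::-1], temp_cal[::-1])
      print(f"tau = {tau * 1e6:.1f} us -> T ~ {temperature:.0f} K")

    At kilohertz excitation rates this fit simply has to be repeated for every laser pulse, which is what makes crank-angle-resolved surface temperatures possible in the engine demonstration.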

  9. High Speed Research - External Vision System (EVS)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Imagine flying a supersonic passenger jet (like the Concorde) at 1,500 mph with no front windows in the cockpit - it may one day be a reality, as seen in this animation still. NASA engineers are working to develop technology that would replace the forward cockpit windows in future supersonic passenger jets with large sensor displays. These displays would use video images, enhanced by computer-generated graphics, to take the place of the view out the front windows. The envisioned eXternal Visibility System (XVS) would guide pilots to an airport, warn them of other aircraft near their path, and provide additional visual aids for airport approaches, landings and takeoffs. Currently, supersonic transports like the Anglo-French Concorde droop the front of the jet (the 'nose') downward to allow the pilots to see forward during takeoffs and landings. By enhancing the pilots' vision with high-resolution video displays, future supersonic transport designers could eliminate the heavy and expensive mechanically drooped nose. A future U.S. supersonic passenger jet, as envisioned by NASA's High-Speed Research (HSR) program, would carry 300 passengers more than 5,000 nautical miles at more than 1,500 miles per hour (more than twice the speed of sound). Traveling from Los Angeles to Tokyo would take only four hours, with an anticipated fare increase of only 20 percent over current ticket prices for substantially slower subsonic flights. Animation by Joey Ponthieux, Computer Sciences Corporation, Inc.

  10. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    SciTech Connect

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  11. Biofeedback control analysis using a synchronized system of two CCD video cameras and a force-plate sensor

    NASA Astrophysics Data System (ADS)

    Tsuruoka, Masako; Shibasaki, Ryosuke; Murai, Shunji

    1999-01-01

    Biofeedback control analysis of human movement has become increasingly important in rehabilitation, sports medicine and physical fitness. In this study, a synchronized system was developed for acquiring sequential data of a person's movement. The setup employs a video recorder system linked with two CCD video cameras and a force-plate sensor system, which are configured to start and stop simultaneously. The feedback-controlled movement of postural stability was selected as the subject of analysis. The person's center of body gravity (COG) was calculated from the measured 3-D coordinates of the major joints using videometry with bundle adjustment and self-calibration. The raw serial data of the COG and of the foot pressure measured by the force-plate sensor are difficult to analyze directly because of their complex fluctuations. Utilizing autoregressive modeling, the power spectrum and the impulse response of the movement factors enable analysis of their dynamic relations. This new biomedical engineering approach provides efficient information for the medical evaluation of a person's stability.
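
    A minimal sketch of the autoregressive step described above (fitting an AR model to a COG series and deriving its power spectrum and impulse response) is shown below; the model order, sampling rate and placeholder data are assumptions, not the authors' settings.

      # Sketch: AR modelling of a centre-of-gravity (COG) time series, with the
      # fitted model's power spectrum and impulse response. Order and sampling
      # rate are assumed values.
      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      fs = 60.0                                  # assumed sampling rate (Hz)
      cog = np.random.randn(600)                 # placeholder for a measured series

      model = AutoReg(cog, lags=8).fit()
      phi = model.params[1:]                     # AR coefficients (constant excluded)
      sigma2 = model.sigma2

      # Power spectrum of the fitted AR process.
      freqs = np.linspace(0.0, fs / 2, 256)
      z = np.exp(-2j * np.pi * freqs / fs)
      denom = 1.0 - sum(p * z ** (k + 1) for k, p in enumerate(phi))
      psd = sigma2 / (np.abs(denom) ** 2 * fs)

      # Impulse response: feed a unit impulse through the AR recursion.
      h = np.zeros(64)
      h[0] = 1.0
      for n in range(1, h.size):
          h[n] = sum(p * h[n - k - 1] for k, p in enumerate(phi) if n - k - 1 >= 0)

    Comparing the spectra and impulse responses of the COG and foot-pressure series is one way to expose the dynamic relations between the two signals that the raw fluctuating traces hide.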

  12. In-situ measurements of alloy oxidation/corrosion/erosion using a video camera and proximity sensor with microcomputer control

    NASA Technical Reports Server (NTRS)

    Deadmore, D. L.

    1984-01-01

    Two noncontacting and nondestructive, remotely controlled methods of measuring the progress of oxidation/corrosion/erosion of metal alloys, exposed to flame test conditions, are described. The external diameter of a sample under test in a flame was measured by a video camera width measurement system. An eddy current proximity probe system, for measurements outside of the flame, was also developed and tested. The two techniques were applied to the measurement of the oxidation of 304 stainless steel at 910 C using a Mach 0.3 flame. The eddy current probe system yielded a recession rate of 0.41 mils diameter loss per hour and the video system gave 0.27.

  13. Hand-gesture extraction and recognition from the video sequence acquired by a dynamic camera using condensation algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Ohya, Jun

    2009-01-01

    To achieve environments in which humans and mobile robots co-exist, technologies for recognizing hand gestures from the video sequence acquired by a dynamic camera could be useful for human-to-robot interface systems. Most conventional hand-gesture technologies deal only with images from a static camera. This paper proposes a very simple and stable method for extracting hand motion trajectories based on the Human-Following Local Coordinate System (HFLC System), which is obtained from the located human face and both hands. We then apply the Condensation algorithm to the extracted hand trajectories so that the hand motion is recognized. We demonstrate the effectiveness of the proposed method by conducting experiments on 35 kinds of sign-language-based hand gestures.
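
    For readers unfamiliar with the Condensation algorithm, the sketch below shows its core loop (resample, drift and diffuse, re-weight by an observation likelihood) applied to a 2D hand position; the constant-velocity motion model, noise levels and Gaussian likelihood are placeholder assumptions, not the models used in the paper.

      # Sketch: one Condensation (particle filter) update for a 2D hand position.
      # State per particle is [x, y, vx, vy]; all model parameters are assumed.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 500
      particles = rng.normal(0.0, 1.0, size=(N, 4))
      weights = np.full(N, 1.0 / N)

      def condensation_step(particles, weights, observation, dt=1 / 30, obs_sigma=5.0):
          # 1) Resample in proportion to the previous weights.
          idx = rng.choice(len(particles), size=len(particles), p=weights)
          p = particles[idx].copy()
          # 2) Predict: deterministic drift plus stochastic diffusion.
          p[:, 0:2] += p[:, 2:4] * dt
          p += rng.normal(0.0, [1.0, 1.0, 0.5, 0.5], size=p.shape)
          # 3) Measure: weight by the likelihood of the observed hand position.
          d2 = np.sum((p[:, 0:2] - observation) ** 2, axis=1)
          w = np.exp(-0.5 * d2 / obs_sigma ** 2)
          return p, w / w.sum()

      particles, weights = condensation_step(particles, weights, np.array([10.0, -3.0]))
      estimate = weights @ particles[:, 0:2]     # posterior-mean hand position

    In the gesture setting, the observation would be the hand position expressed in the HFLC system, so that the filter tracks motion relative to the person rather than to the moving camera.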

  14. Activity profiles and hook-tool use of New Caledonian crows recorded by bird-borne video cameras.

    PubMed

    Troscianko, Jolyon; Rutz, Christian

    2015-12-01

    New Caledonian crows are renowned for their unusually sophisticated tool behaviour. Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, which is largely owing to the difficulties in observing these shy forest birds. To obtain first estimates of activity budgets, as well as close-up observations of tool-assisted foraging, we equipped 19 wild crows with self-developed miniature video cameras, yielding more than 10 h of analysable video footage for 10 subjects. While only four crows used tools during recording sessions, they did so extensively: across all 10 birds, we conservatively estimate that tool-related behaviour occurred in 3% of total observation time, and accounted for 19% of all foraging behaviour. Our video-loggers provided first footage of crows manufacturing, and using, one of their most complex tool types--hooked stick tools--under completely natural foraging conditions. We recorded manufacture from live branches of paperbark (Melaleuca sp.) and another tree species (thought to be Acacia spirorbis), and deployment of tools in a range of contexts, including on the forest floor. Taken together, our video recordings reveal an 'expanded' foraging niche for hooked stick tools, and highlight more generally how crows routinely switch between tool- and bill-assisted foraging. PMID:26701755

  15. Activity profiles and hook-tool use of New Caledonian crows recorded by bird-borne video cameras

    PubMed Central

    Troscianko, Jolyon; Rutz, Christian

    2015-01-01

    New Caledonian crows are renowned for their unusually sophisticated tool behaviour. Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, which is largely owing to the difficulties in observing these shy forest birds. To obtain first estimates of activity budgets, as well as close-up observations of tool-assisted foraging, we equipped 19 wild crows with self-developed miniature video cameras, yielding more than 10 h of analysable video footage for 10 subjects. While only four crows used tools during recording sessions, they did so extensively: across all 10 birds, we conservatively estimate that tool-related behaviour occurred in 3% of total observation time, and accounted for 19% of all foraging behaviour. Our video-loggers provided first footage of crows manufacturing, and using, one of their most complex tool types—hooked stick tools—under completely natural foraging conditions. We recorded manufacture from live branches of paperbark (Melaleuca sp.) and another tree species (thought to be Acacia spirorbis), and deployment of tools in a range of contexts, including on the forest floor. Taken together, our video recordings reveal an ‘expanded’ foraging niche for hooked stick tools, and highlight more generally how crows routinely switch between tool- and bill-assisted foraging. PMID:26701755

  16. High speed laser tomography system.

    PubMed

    Samsonov, D; Elsaesser, A; Edwards, A; Thomas, H M; Morfill, G E

    2008-03-01

    A high speed laser tomography system was developed capable of acquiring three-dimensional (3D) images of optically thin clouds of moving micron-sized particles. It operates by parallel-shifting an illuminating laser sheet with a pair of galvanometer-driven mirrors and synchronously recording two-dimensional (2D) images of thin slices of the imaged volume. The maximum scanning speed achieved was 120,000 slices/s; sequences of 24 volume scans (up to 256 slices each) have been obtained. The 2D slices were stacked to form 3D images of the volume, then the positions of the particles were identified and followed in the consecutive scans. The system was used to image a complex plasma with particles moving at speeds up to cm/s. PMID:18377040

  17. Experiments on high speed ejectors

    NASA Technical Reports Server (NTRS)

    Wu, J. J.

    1986-01-01

    Experimental studies were conducted to investigate the flow and the performance of thrust augmenting ejectors for flight Mach numbers in the range of 0.5 to 0.8, primary air stagnation pressures up to 107 psig (738 kPa), and primary air stagnation temperatures up to 1250 F (677 C). The experiment verified the existence of the second solution ejector flow, where the flow after complete mixing is supersonic. Thrust augmentation in excess of 1.2 was demonstrated for both hot and cold primary jets. The experimental ejector performed better than the corresponding theoretical optimal first solution ejector, where the mixed flow is subsonic. Further studies are required to realize the full potential of the second solution ejector. The research program was started by the Flight Dynamics Research Corporation (FDRC) to investigate the characteristics of a high speed ejector which augments the thrust of a jet at high flight speeds.

  18. High-speed data search

    NASA Technical Reports Server (NTRS)

    Driscoll, James N.

    1994-01-01

    The high-speed data search system developed for KSC incorporates existing and emerging information retrieval technology to help a user intelligently and rapidly locate information found in large textual databases. This technology includes: natural language input; statistical ranking of retrieved information; an artificial intelligence concept called semantics, where 'surface level' knowledge found in text is used to improve the ranking of retrieved information; and relevance feedback, where user judgements about viewed information are used to automatically modify the search for further information. Semantics and relevance feedback are features of the system which are not available commercially. The system further demonstrates a focus on paragraphs of information to decide relevance, and it can be used (without modification) to intelligently search all kinds of document collections, such as collections of legal documents, medical documents, news stories, patents, and so forth. The purpose of this paper is to demonstrate the usefulness of statistical ranking, our semantic improvement, and relevance feedback.
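
    A minimal sketch of the statistical-ranking-plus-relevance-feedback idea (TF-IDF cosine ranking with a Rocchio-style query update) is shown below; the weighting constants and the toy corpus are assumptions, and the system's semantic component is not reproduced here.

      # Sketch: rank documents by TF-IDF cosine similarity, then refine the query
      # with Rocchio-style relevance feedback from user judgements.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = ["shuttle launch pad procedures",
              "payload bay camera inspection",
              "launch weather constraints",
              "crew training schedule"]
      vectorizer = TfidfVectorizer()
      D = vectorizer.fit_transform(docs).toarray()

      def rank(query_vec):
          sims = D @ query_vec / (np.linalg.norm(D, axis=1) * np.linalg.norm(query_vec) + 1e-12)
          return np.argsort(-sims)

      q = vectorizer.transform(["launch procedures"]).toarray()[0]
      print("initial ranking:", rank(q))

      # Rocchio update: move the query toward judged-relevant documents and away
      # from judged-nonrelevant ones. Alpha/beta/gamma are assumed weights.
      relevant, nonrelevant = [0], [3]
      alpha, beta, gamma = 1.0, 0.75, 0.15
      q_new = (alpha * q
               + beta * D[relevant].mean(axis=0)
               - gamma * D[nonrelevant].mean(axis=0))
      print("ranking after feedback:", rank(q_new))

    The paragraph-level relevance mentioned above would amount to building the term vectors over paragraphs rather than whole documents before applying the same ranking and feedback steps.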

  19. Small Scale High Speed Turbomachinery

    NASA Technical Reports Server (NTRS)

    London, Adam P. (Inventor); Droppers, Lloyd J. (Inventor); Lehman, Matthew K. (Inventor); Mehra, Amitav (Inventor)

    2015-01-01

    A small scale, high speed turbomachine is described, as well as a process for manufacturing the turbomachine. The turbomachine is manufactured by diffusion bonding stacked sheets of metal foil, each of which has been pre-formed to correspond to a cross section of the turbomachine structure. The turbomachines include rotating elements as well as static structures. Using this process, turbomachines may be manufactured with rotating elements that have outer diameters of less than four inches in size, and/or blading heights of less than 0.1 inches. The rotating elements of the turbomachines are capable of rotating at speeds in excess of 150 feet per second. In addition, cooling features may be added internally to blading to facilitate cooling in high temperature operations.

  20. Flexible High Speed Codec (FHSC)

    NASA Technical Reports Server (NTRS)

    Segallis, G. P.; Wernlund, J. V.

    1991-01-01

    The ongoing NASA/Harris Flexible High Speed Codec (FHSC) program is described. The program objectives are to design and build an encoder decoder that allows operation in either burst or continuous modes at data rates of up to 300 megabits per second. The decoder handles both hard and soft decision decoding and can switch between modes on a burst by burst basis. Bandspreading is low since the code rate is greater than or equal to 7/8. The encoder and a hard decision decoder fit on a single application specific integrated circuit (ASIC) chip. A soft decision applique is implemented using 300 K emitter coupled logic (ECL) which can be easily translated to an ECL gate array.

  1. High-Speed Optical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Marsh, T. R.

    The large surveys and sensitive instruments of modern astronomy are turning up ever more examples of variable objects, many of which extend the parameter space for testing theories of stellar evolution and accretion. Future projects such as the Laser Interferometer Space Antenna (LISA) and the Large Synoptic Survey Telescope (LSST) will only add more challenging candidates to this list. Understanding such objects often requires fast spectroscopy, but the trend toward ever larger detectors makes this difficult. In this contribution I outline the science made possible by high-speed spectroscopy, and consider how a combination of the well-known progress in computer technology and recent advances in CCD detectors may finally enable it to become a standard tool of astrophysics.

  2. High speed sampler and demultiplexer

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A high speed sampling demultiplexer based on a plurality of sampler banks, each bank comprising a sample transmission line for transmitting an input signal, a strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates at respective positions along the sample transmission line for sampling the input signal in response to the strobe signal. Strobe control circuitry is coupled to the plurality of banks, and supplies a sequence of bank strobe signals to the strobe transmission lines in each of the plurality of banks, and includes circuits for controlling the timing of the bank strobe signals among the banks of samplers. Input circuitry is included for supplying the input signal to be sampled to the plurality of sample transmission lines in the respective banks. The strobe control circuitry can repetitively strobe the plurality of banks of samplers such that the banks of samplers are cycled to create a long sample length. Second tier demultiplexing circuitry is coupled to each of the samplers in the plurality of banks. The second tier demultiplexing circuitry senses the sample taken by the corresponding sampler each time the bank in which the sampler is found is strobed. A plurality of such samples can be stored by the second tier demultiplexing circuitry for later processing. Repetitive sampling with the high speed transient sampler induces an effect known as "strobe kickout". The sample transmission lines include structures which reduce strobe kickout to acceptable levels, generally 60 dB below the signal, by absorbing the kickout pulses before the next sampling repetition.

  3. High speed sampler and demultiplexer

    DOEpatents

    McEwan, T.E.

    1995-12-26

    A high speed sampling demultiplexer based on a plurality of sampler banks, each bank comprising a sample transmission line for transmitting an input signal, a strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates at respective positions along the sample transmission line for sampling the input signal in response to the strobe signal. Strobe control circuitry is coupled to the plurality of banks, and supplies a sequence of bank strobe signals to the strobe transmission lines in each of the plurality of banks, and includes circuits for controlling the timing of the bank strobe signals among the banks of samplers. Input circuitry is included for supplying the input signal to be sampled to the plurality of sample transmission lines in the respective banks. The strobe control circuitry can repetitively strobe the plurality of banks of samplers such that the banks of samplers are cycled to create a long sample length. Second tier demultiplexing circuitry is coupled to each of the samplers in the plurality of banks. The second tier demultiplexing circuitry senses the sample taken by the corresponding sampler each time the bank in which the sampler is found is strobed. A plurality of such samples can be stored by the second tier demultiplexing circuitry for later processing. Repetitive sampling with the high speed transient sampler induces an effect known as "strobe kickout". The sample transmission lines include structures which reduce strobe kickout to acceptable levels, generally 60 dB below the signal, by absorbing the kickout pulses before the next sampling repetition. 16 figs.

  4. Lights, Camera: Learning! Findings from studies of video in formal and informal science education

    NASA Astrophysics Data System (ADS)

    Borland, J.

    2013-12-01

    As part of the panel, media researcher Jennifer Borland will highlight findings from a variety of studies of videos across the spectrum of formal to informal learning, including schools, museums, and viewers' homes. In her presentation, Borland will assert that the viewing context matters a great deal, but there are some general take-aways that can be extrapolated to the use of educational video in a variety of settings. Borland has served as an evaluator on several video-related projects funded by NASA and the National Science Foundation, including: Data Visualization videos and Space Shows developed by the American Museum of Natural History, DragonflyTV, Earth the Operators Manual, The Music Instinct and Time Team America.

  5. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea
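
    As a rough illustration of the row-slice idea described above, the following Python sketch scans horizontal rows of a hypothetical eye ROI, records the end coordinates of the dark pupil run on each row, and recovers the pupil circle from the chord geometry. The function name, threshold and array layout are assumptions for illustration, not the flight algorithm itself.

        import numpy as np

        def pupil_from_row_slices(roi, dark_thresh=40):
            """Estimate pupil center and radius from horizontal row slices of an
            eye ROI (2-D grayscale array). A minimal sketch: on each row, find the
            run of dark (pupil) pixels and keep its end coordinates; the circle's
            center and radius then follow from the chord geometry."""
            chords = []                                # (row, x_left, x_right)
            for y, row in enumerate(roi):
                dark = np.flatnonzero(row < dark_thresh)
                if dark.size > 2:                      # row crosses the pupil
                    chords.append((y, dark[0], dark[-1]))
            if not chords:
                return None
            ys = np.array([c[0] for c in chords], dtype=float)
            half = np.array([(c[2] - c[1]) / 2.0 for c in chords])   # half-chord lengths
            xc = np.mean([(c[1] + c[2]) / 2.0 for c in chords])      # symmetry -> x center
            # For a circle: half^2 = r^2 - (y - yc)^2, i.e. half^2 + y^2 = 2*yc*y + (r^2 - yc^2)
            A = np.column_stack([2.0 * ys, np.ones_like(ys)])
            b = half ** 2 + ys ** 2
            (yc, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(max(c0 + yc ** 2, 0.0))
            return xc, yc, r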

  6. Lights, camera, action…critique? Submit videos to AGU communications workshop

    NASA Astrophysics Data System (ADS)

    Viñas, Maria-José

    2011-08-01

    What does it take to create a science video that engages the audience and draws thousands of views on YouTube? Those interested in finding out should submit their research-related videos to AGU's Fall Meeting science film analysis workshop, led by oceanographer turned documentary director Randy Olson. Olson, writer-director of two films (Flock of Dodos: The Evolution-Intelligent Design Circus and Sizzle: A Global Warming Comedy) and author of the book Don't Be Such a Scientist: Talking Substance in an Age of Style, will provide constructive criticism on 10 selected video submissions, followed by moderated discussion with the audience. To submit your science video (5 minutes or shorter), post it on YouTube and send the link to the workshop coordinator, Maria-José Viñas (mjvinas@agu.org), with the following subject line: Video submission for Olson workshop. AGU will be accepting submissions from researchers and media officers of scientific institutions until 6:00 P.M. eastern time on Friday, 4 November. Those whose videos are selected to be screened will be notified by Friday, 18 November. All are welcome to attend the workshop at the Fall Meeting.

  7. Faster Is Better: High-Speed Modems.

    ERIC Educational Resources Information Center

    Roth, Cliff

    1995-01-01

    Discusses using high-speed modems to access the Internet. Examines internal and external modems, data speeds, compression and error reduction, faxing and voice capabilities, and software features. Considers ISDN (Integrated Services Digital Network) as the future replacement of high-speed modems. Sidebars present high-speed modem product…

  8. Preliminary study of high-speed machining

    SciTech Connect

    Jordan, R.E.

    1980-07-01

    The feasibility of a high speed machining process has been established for application to Bendix aluminum products, based upon information gained through visits to existing high speed machining facilities and by the completion of a representative Bendix part using this process. The need for an experimental high speed machining capability at Bendix for further process evaluation is established.

  9. 3-D eye movement measurements on four Comex divers using video CCD cameras during high-pressure diving.

    PubMed

    Guillemant, P; Ulmer, E; Freyss, G

    1995-01-01

    Previous studies have shown the vulnerability of the vestibular system to barotrauma (1), and deep diving may induce immediate neurological changes (2). These extreme conditions (high pressure, limited examination time, restricted space, hydrogen-oxygen mixture, communication difficulties, etc.) require adapted technology and an associated fast experimental procedure. We were able to solve these problems by developing a new system for on-line analysis of 3-D ocular movements by means of a video camera. This analyser uses image-processing and form-recognition software that allows non-invasive calculation of eye movements at video frequency, including the torsional component. As the system is immediately ready for use, we were able to carry out the following examinations in a maximum of 8 min for each diver: oculomotor tests, including traditional automatic saccadic, slow-pursuit and optokinetic measurements; and vestibular tests covering spontaneous and positional nystagmus, and nystagmus induced by the pendular test. For pendular-induced nystagmus we used appropriate head positions to stimulate the lateral and the posterior semicircular canals separately, and we measured the gain by operating successively in visible light and in complete darkness. Recordings were made during a simulated onshore dive to an ambient pressure corresponding to a depth of 350 m. The above examinations were completed on the first and last days by caloric tests with the same video analyser. The results of the investigations demonstrated perfect tolerance of the oculomotor and vestibular systems of these 4 divers, thus fulfilling the preventive conditions defined by Comex Co. We were able to overcome the limitations due to low-cost PC operation and cameras (the need for adaptation to pressure, focus difficulties and direct-light eye reflections). We still obtained accurate on-line measurements, even of the torsional component of eye movement. Due to this technological efficiency

  10. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGESBeta

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
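
    The acquisition step described above can be summarized in a few lines. The sketch below implements only the forward model, under the assumption that the codes are binary per-pixel masks: T fast sub-frames are modulated by known masks and integrated into one detector frame, and a sparsity-regularized CS solver (not shown) would then recover the sub-frames from that single measurement. Array shapes and names are illustrative.

        import numpy as np

        def coded_exposure_frame(subframes, codes):
            """Forward model of coded-aperture temporal compressive sensing.
            subframes: (T, H, W) fast scene dynamics; codes: (T, H, W) binary
            per-pixel masks applied during the exposure. The camera integrates
            the masked sub-frames into a single readout frame."""
            return np.sum(codes * subframes, axis=0)

        # Toy usage: 8 sub-frames coded into one 64 x 64 measurement.
        rng = np.random.default_rng(0)
        subframes = rng.random((8, 64, 64))
        codes = rng.integers(0, 2, size=(8, 64, 64))
        measurement = coded_exposure_frame(subframes, codes)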

  11. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    SciTech Connect

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  12. High-speed observations of Transient Luminous Events and Lightning (The 2008/2009 Ebro Valley campaign)

    NASA Astrophysics Data System (ADS)

    Montanyà, Joan; van der Velde, Oscar; Soula, Serge; Romero, David; Pineda, Nicolau; Solà, Glòria; March, Víctor

    2010-05-01

    The future ASIM mission will provide X- and gamma-ray detections from space to investigate the origin of Terrestrial Gamma-ray Flashes and their possible relation to transient luminous events (TLEs). In order to support the future space observations we are setting up ground infrastructure in the Ebro Valley region (northeast of Spain). At the end of 2008 and during 2009 we carried out our first observation campaign in order to gain the experience needed to support the future ASIM mission. From January 2008 to February 2009 we focused on the observation of TLEs with our intensified high-speed camera system. We recorded 14 sprites, 19 elves and, in three of the sprites, we also observed halos (Montanyà et al. 2009). Unfortunately, no high-speed records of TLEs were obtained within the range of the VHF network (XDDE). However, we have recorded several tens of TLEs at normal frame rate (25 fps) which are in the XDDE range (Van der Velde et al., 2009). Additionally, in August 2009 we installed our first camera for TLE observation in the Caribbean. The camera is located on San Andrés Island (Colombia). From June 2009 to October 2009 we focused all of our efforts on recording lightning at high speed (10,000 fps), vertical close electric fields and X-ray emissions from lightning. We recorded around 60 lightning flashes, but high-energy detections were clearly evidenced in only one flash. The detections were produced during the leader phase of a cloud-to-ground flash. The leader signature on the recorded electric field was very short (around 1 ms) and, during this period, a burst of high-energy emissions was detected. A few detections were then produced just after the return stroke. The experience of this preliminary campaign has given us the basis for future campaigns, in which we plan to operate two high-speed cameras and a Lightning Mapping Array. References: Montanyà et al. (2009). High-Speed Intensified Video Recordings of Sprites and Elves over the Western Mediterranean Sea

  13. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low-power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photodiode electronically, without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps videos. PMID:27137331

  14. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system that realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via the Internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from a change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the two sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have shown that the proposed system is useful for Internet telepresence.
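
    The third step, view-dependent perspective image generation, amounts to mapping each pixel of the desired view through the user's viewing rotation into panorama coordinates. The sketch below assumes the omnidirectional stream is stored as an equirectangular image (an assumption; the abstract does not specify the projection) and uses nearest-neighbour lookup for brevity.

        import numpy as np

        def perspective_from_equirect(pano, yaw, pitch, fov_deg=60.0, out_size=(240, 320)):
            """Render a view-dependent perspective image from an equirectangular
            panorama `pano` (H x W, grayscale). yaw/pitch give the viewing direction
            in radians; fov_deg is the horizontal field of view."""
            H, W = pano.shape[:2]
            h, w = out_size
            f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)      # focal length, pixels
            xs, ys = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
            d = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)   # pixel rays
            d /= np.linalg.norm(d, axis=-1, keepdims=True)
            cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
            Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # pitch rotation
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # yaw rotation
            d = d @ (Ry @ Rx).T                                    # rotate the rays
            lon = np.arctan2(d[..., 0], d[..., 2])                 # [-pi, pi]
            lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))         # [-pi/2, pi/2]
            u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
            v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
            return pano[v, u]                                      # nearest-neighbour lookup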

  15. Real-time intraoperative high-speed imaging during phacoemulsification.

    PubMed

    Srivastava, Samaresh; Vasavada, Abhay R; Vasavada, Vaishali A; Vasavada, Viraj A

    2012-09-01

    We describe the use of high-speed imaging during phacoemulsification in a clinical scenario. Images captured during surgery at high frame rates are converted into a slow-motion film to view and analyze various surgical steps. This technique highlights events that are not captured in a normal-speed video recording. It has obvious applications for understanding surgical techniques and technology. PMID:22841426

  16. ADVANCED HIGH SPEED PROGRAMMABLE PREFORMING

    SciTech Connect

    Norris Jr, Robert E; Lomax, Ronny D; Xiong, Fue; Dahl, Jeffrey S; Blanchard, Patrick J

    2010-01-01

    Polymer-matrix composites offer greater stiffness and strength per unit weight than conventional materials, resulting in new opportunities for lightweighting of automotive and heavy vehicles. Other benefits include design flexibility, less corrosion susceptibility, and the ability to tailor properties to specific load requirements. However, widespread implementation of structural composites requires lower-cost manufacturing processes than those that are currently available. Advanced, directed-fiber preforming processes have demonstrated exceptional value for rapid preforming of large, glass-reinforced, automotive composite structures. This is due to process flexibility and an inherently low material scrap rate. Hence, directed-fiber preforming processes offer a low-cost manufacturing methodology for producing preforms for a variety of structural automotive components. This paper describes work conducted at the Oak Ridge National Laboratory (ORNL), focused on the development and demonstration of a high-speed chopper gun to enhance throughput capabilities. ORNL and the Automotive Composites Consortium (ACC) revised the design of a standard chopper gun to expand the operational envelope, enabling delivery of up to 20 kg/min. A prototype unit was fabricated and used to demonstrate continuous chopping of multiple rovings at high output over extended periods. In addition, fiber-handling system modifications were completed to sustain the high output that the modified chopper affords. These hardware upgrades are documented along with the results of process characterization and capabilities assessment.

  17. High-speed pressure clamp.

    PubMed

    Besch, Stephen R; Suchyna, Thomas; Sachs, Frederick

    2002-10-01

    We built a high-speed, pneumatic pressure clamp to stimulate patch-clamped membranes mechanically. The key control element is a newly designed differential valve that uses a single, nickel-plated piezoelectric bending element to control both pressure and vacuum. To minimize response time, the valve body was designed with minimum dead volume. The result is improved response time and stability with a threefold decrease in actuation latency. Tight valve clearances minimize the steady-state air flow, permitting us to use small resonant-piston pumps to supply pressure and vacuum. To protect the valve from water contamination in the event of a broken pipette, an optical sensor detects water entering the valve and increases pressure rapidly to clear the system. The open-loop time constant for pressure is 2.5 ms for a 100-mmHg step, and the closed-loop settling time is 500-600 μs. Valve actuation latency is 120 μs. The system performance is illustrated for mechanically induced changes in patch capacitance. PMID:12397401

  18. Quiet High-Speed Fan

    NASA Technical Reports Server (NTRS)

    Lieber, Lysbeth; Repp, Russ; Weir, Donald S.

    1996-01-01

    A calibration of the acoustic and aerodynamic prediction methods was performed and a baseline fan definition was established and evaluated to support the quiet high speed fan program. A computational fluid dynamic analysis of the NASA QF-12 Fan rotor, using the DAWES flow simulation program was performed to demonstrate and verify the causes of the relatively poor aerodynamic performance observed during the fan test. In addition, the rotor flowfield characteristics were qualitatively compared to the acoustic measurements to identify the key acoustic characteristics of the flow. The V072 turbofan source noise prediction code was used to generate noise predictions for the TFE731-60 fan at three operating conditions and compared to experimental data. V072 results were also used in the Acoustic Radiation Code to generate far field noise for the TFE731-60 nacelle at three speed points for the blade passage tone. A full 3-D viscous flow simulation of the current production TFE731-60 fan rotor was performed with the DAWES flow analysis program. The DAWES analysis was used to estimate the onset of multiple pure tone noise, based on predictions of inlet shock position as a function of the rotor tip speed. Finally, the TFE731-60 fan rotor wake structure predicted by the DAWES program was used to define a redesigned stator with the leading edge configured to minimize the acoustic effects of rotor wake / stator interaction, without appreciably degrading performance.

  19. Clinical diagnostic of pleural effusions using a high-speed viscosity measurement method

    NASA Astrophysics Data System (ADS)

    Hurth, Cedric; Klein, Katherine; van Nimwegen, Lena; Korn, Ronald; Vijayaraghavan, Krishnaswami; Zenhausern, Frederic

    2011-08-01

    We present a novel bio-analytical method to discriminate between transudative and exudative pleural effusions based on a high-speed video analysis of a solid glass sphere impacting a liquid. Since the result depends on the solution viscosity, it can ultimately replace the battery of biochemical assays currently used. We present results obtained on a series of 7 pleural effusions obtained from consenting patients by analyzing both the splash observed after the glass impactor hits the liquid surface, and in a configuration reminiscent of the drop ball viscometer with added sensitivity and throughput provided by the high-speed camera. The results demonstrate distinction between the pleural effusions and good correlation with the fluid chemistry analysis to accurately differentiate exudates and transudates for clinical purpose. The exudative effusions display a viscosity around 1.39 ± 0.08 cP whereas the transudative effusion was measured at 0.89 ± 0.09 cP, in good agreement with previous reports.
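
    For the drop-ball (falling-ball) configuration mentioned above, the viscosity can in principle be read off from the sphere's terminal settling velocity via Stokes' law, provided the Reynolds number stays low. The sketch below shows that relation; the numbers are purely illustrative and are not taken from the study.

        def stokes_viscosity(radius_m, v_terminal_m_s, rho_sphere, rho_fluid, g=9.81):
            """Dynamic viscosity (Pa*s) from the terminal settling velocity of a
            small sphere, using Stokes' law (low-Reynolds-number regime only).
            All inputs in SI units."""
            return 2.0 * radius_m ** 2 * (rho_sphere - rho_fluid) * g / (9.0 * v_terminal_m_s)

        # Illustrative only: a 100 um glass sphere (radius 50 um, 2500 kg/m^3)
        # settling at 5.8 mm/s in a fluid of density 1010 kg/m^3 implies ~1.4 cP.
        eta = stokes_viscosity(50e-6, 5.8e-3, 2500.0, 1010.0)
        print(f"{eta * 1e3:.2f} cP")    # -> 1.40 cP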

  20. High-speed quantitative phase imaging of dynamic thermal deformation in laser irradiated films

    NASA Astrophysics Data System (ADS)

    Taylor, Lucas N.; Brown, Andrew K.; Olson, Kyle D.; Talghader, Joseph J.

    2015-11-01

    We present a technique for high-speed imaging of the dynamic thermal deformation of transparent substrates under high-power laser irradiation. Traditional thermal sensor arrays are not fast enough to capture thermal decay events. Our system adapts a Mach-Zehnder interferometer, along with a high-speed camera, to capture phase images on sub-millisecond time-scales. These phase images are related to temperature by thermal expansion effects and by the change of refractive index with temperature. High power continuous-wave and long-pulse laser damage often hinges on thermal phenomena rather than the field-induced effects of ultra-short pulse lasers. Our system was able to measure such phenomena. We were able to record 2D videos of 1 ms thermal deformation waves, with 6 frames per wave, from a 100 ns, 10 mJ Q-switched Nd:YAG laser incident on a yttria-coated glass slide. We recorded thermal deformation waves with peak temperatures on the order of 100 degrees Celsius during non-destructive testing.
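
    The relation between the measured phase and temperature mentioned above can be written explicitly. Assuming a single pass through a substrate of thickness t, the phase change combines the thermo-optic coefficient dn/dT and the thermal expansion coefficient alpha as dphi = (2*pi/lambda) * t * (dn/dT + (n - 1)*alpha) * dT. The sketch below simply inverts this relation; the material parameters in the example are generic glass values, not the paper's.

        import numpy as np

        def temperature_from_phase(dphi_rad, wavelength_m, thickness_m, dn_dT, alpha, n=1.5):
            """Temperature change from an interferometric phase change, assuming a
            single pass and an optical path dominated by the thermo-optic effect
            plus thermal expansion: dphi = (2*pi/lambda)*t*(dn/dT + (n-1)*alpha)*dT."""
            return dphi_rad * wavelength_m / (
                2.0 * np.pi * thickness_m * (dn_dT + (n - 1.0) * alpha))

        # Generic example: 1 rad of phase at 633 nm through a 1 mm slide with
        # dn/dT ~ 1e-5 /K and alpha ~ 8e-6 /K corresponds to roughly 7 K.
        dT = temperature_from_phase(1.0, 633e-9, 1e-3, 1e-5, 8e-6)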

  1. Three-dimensional measurement of CFRP deformation during high-speed impact loading

    NASA Astrophysics Data System (ADS)

    Yamada, Masayoshi; Tanabe, Yasuhiro; Yoshimura, Akinori; Ogasawara, Toshio

    2011-08-01

    The deformation of carbon fiber reinforced plastics (CFRPs) caused by projectile impact governs the absorption or dissipation of the kinetic energy of the projectile. However, three-dimensional (3D) numerical information about the CFRP deformations caused by the projectile impact is not yet available. Therefore, a 3D measurement was conducted to evaluate the deformation process and deformation behavior of the CFRPs under high-velocity projectile impact, and to subsequently evaluate the performance of the CFRPs. CFRPs having two different stacking sequences were used as the specimens. For measuring the deformation, a high-speed stereovision system comprising two high-speed video cameras was adopted. An SUJ-2 sphere projectile was impacted against a specimen plate using a light-gas accelerator at an impact velocity of approximately 175 m/s, and the deformation was recorded by synchronously capturing the images using this system. The captured images were converted to stereo images by a 3D correlation method. The stereo images clearly revealed numerical differences in the deformation of the CFRPs having different stacking sequences. The accuracy of the 3D measurement was verified by comparing its results with direct measurements. Moreover, the stereo images corresponded to the results of a numerical simulation of the CFRP deformations, which both qualitatively and quantitatively confirms the validity of the simulation. This 3D measurement method is a powerful and useful tool for evaluating the performance of CFRPs during high-velocity projectile impact.
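
    At the core of any two-camera stereovision measurement of this kind is the triangulation of a surface point from its pixel coordinates in both views. The sketch below shows the standard linear (DLT) triangulation step with hypothetical projection matrices; the 3D correlation software used in the study wraps this step with subset matching and calibration.

        import numpy as np

        def triangulate_point(P1, P2, uv1, uv2):
            """Linear (DLT) triangulation of a 3-D point from its pixel coordinates
            in two calibrated cameras. P1, P2 are 3x4 projection matrices; uv1, uv2
            are (u, v) coordinates of the same surface point in each view."""
            u1, v1 = uv1
            u2, v2 = uv2
            A = np.array([u1 * P1[2] - P1[0],
                          v1 * P1[2] - P1[1],
                          u2 * P2[2] - P2[0],
                          v2 * P2[2] - P2[1]])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]                      # homogeneous solution (null space of A)
            return X[:3] / X[3]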

  2. Visual surveys can reveal rather different 'pictures' of fish densities: Comparison of trawl and video camera surveys in the Rockall Bank, NE Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    McIntyre, F. D.; Neat, F.; Collie, N.; Stewart, M.; Fernandes, P. G.

    2015-01-01

    Visual surveys allow non-invasive sampling of organisms in the marine environment which is of particular importance in deep-sea habitats that are vulnerable to damage caused by destructive sampling devices such as bottom trawls. To enable visual surveying at depths greater than 200 m we used a deep towed video camera system, to survey large areas around the Rockall Bank in the North East Atlantic. The area of seabed sampled was similar to that sampled by a bottom trawl, enabling samples from the towed video camera system to be compared with trawl sampling to quantitatively assess the numerical density of deep-water fish populations. The two survey methods provided different results for certain fish taxa and comparable results for others. Fish that exhibited a detectable avoidance behaviour to the towed video camera system, such as the Chimaeridae, resulted in mean density estimates that were significantly lower (121 fish/km2) than those determined by trawl sampling (839 fish/km2). On the other hand, skates and rays showed no reaction to the lights in the towed body of the camera system, and mean density estimates of these were an order of magnitude higher (64 fish/km2) than the trawl (5 fish/km2). This is probably because these fish can pass under the footrope of the trawl due to their flat body shape lying close to the seabed but are easily detected by the benign towed video camera system. For other species, such as Molva sp, estimates of mean density were comparable between the two survey methods (towed camera, 62 fish/km2; trawl, 73 fish/km2). The towed video camera system presented here can be used as an alternative benign method for providing indices of abundance for species such as ling in areas closed to trawling, or for those fish that are poorly monitored by trawl surveying in any area, such as the skates and rays.
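
    Both gears yield a swept-area density index: the number of fish counted divided by the area covered, i.e. tow length times effective swath width. The sketch below shows this calculation with hypothetical numbers chosen only to be of the same order as the ling estimates quoted above.

        def density_per_km2(n_fish, tow_length_km, swath_width_m):
            """Swept-area density index (fish per square km): counts divided by the
            area covered, i.e. tow length times the effective swath width of the gear."""
            area_km2 = tow_length_km * (swath_width_m / 1000.0)
            return n_fish / area_km2

        # Hypothetical example: 5 ling seen over 50 km of video tows with a 1.5 m
        # field of view gives ~67 fish/km2, the same order as the estimates above.
        print(round(density_per_km2(5, 50.0, 1.5)))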

  3. Lights! Camera! Action! Producing Library Instruction Video Tutorials Using Camtasia Studio

    ERIC Educational Resources Information Center

    Charnigo, Laurie

    2009-01-01

    From Web guides to online tutorials, academic librarians are increasingly experimenting with many different technologies in order to meet the needs of today's growing distance education populations. In this article, the author discusses one librarian's experience using Camtasia Studio to create subject specific video tutorials. Benefits, as well…

  4. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  5. "Lights, Camera, Reflection": Using Peer Video to Promote Reflective Dialogue among Student Teachers

    ERIC Educational Resources Information Center

    Harford, Judith; MacRuairc, Gerry; McCartan, Dermot

    2010-01-01

    This paper examines the use of peer-videoing in the classroom as a means of promoting reflection among student teachers. Ten pre-service teachers participating in a teacher education programme in a university in the Republic of Ireland and ten pre-service teachers participating in a teacher education programme in a university in the North of…

  6. Making the Most of Your Video Camera. Technology in Language Learning Series.

    ERIC Educational Resources Information Center

    Lonergan, Jack

    A practical guide for language teachers illustrates the different ways in which cameras can be employed in language work, with suggestions and advice taken from current experience. Teachers can be involved by making their own language training videotapes and focusing on an area of language, literature, or thematic interest directly applicable to…

  7. High-speed stereoscopy of aurora

    NASA Astrophysics Data System (ADS)

    Kataoka, R.; Fukuda, Y.; Uchida, H. A.; Yamada, H.; Miyoshi, Y.; Ebihara, Y.; Dahlgren, H.; Hampton, D.

    2016-01-01

    We performed 100 fps stereoscopic imaging of aurora for the first time. Two identical sCMOS cameras equipped with narrow field-of-view lenses (15° by 15°) were directed at magnetic zenith with a north-south baseline of 8.1 km. Here we show the best example, in which a rapidly pulsating diffuse patch and a streaming discrete arc were observed at the same time with different parallaxes, and the emission altitudes were estimated as 85-95 km and > 100 km, respectively. The estimated emission altitudes are consistent with those estimated in previous studies, and it is suggested that high-speed stereoscopy is useful for directly measuring the emission altitudes of various types of rapidly varying aurora. It is also found that the variation of emission altitude is gradual (e.g., a 10 km increase over 5 s) for pulsating patches and fast (e.g., a 10 km increase within 0.5 s) for streaming arcs.
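
    For a feature observed near the local zenith from both stations, the emission altitude follows from the angular parallax by simple triangulation, h ~ baseline / parallax in the small-angle approximation. The sketch below is a geometric illustration only, not the authors' analysis pipeline; the example parallax value is assumed.

        import numpy as np

        def altitude_from_parallax(baseline_km, parallax_deg):
            """Approximate emission altitude of a near-zenith feature from the
            angular parallax between two stations: h ~ baseline / parallax (rad)."""
            return baseline_km / np.radians(parallax_deg)

        # With the 8.1 km baseline, a parallax of about 5.2 degrees corresponds
        # to roughly 90 km, in the range reported for the pulsating patch.
        print(round(altitude_from_parallax(8.1, 5.2)))   # -> 89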

  8. Video-based realtime IMU-camera calibration for robot navigation

    NASA Astrophysics Data System (ADS)

    Petersen, Arne; Koch, Reinhard

    2012-06-01

    This paper introduces a new method for fast calibration of inertial measurement units (IMUs) that are rigidly coupled to cameras. That is, the relative rotation and translation between the IMU and the camera are estimated, allowing for the transfer of IMU data to the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e. SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in real time, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity to optimize the trajectory online, increasing the quality and minimizing the time needed for calibration. Except for a marker pattern used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time. Depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows for calibrating the system at startup. In this way, deviations in the calibration due to transport and storage can be compensated. The estimation quality and consistency are evaluated in dependence on the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. It is analyzed how different types of visual markers, i.e. 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on the applicability to robot systems. The algorithm is implemented using a modular software framework, such that it can be adapted to altered conditions easily.

  9. Real-time multi-camera video acquisition and processing platform for ADAS

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assistance Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.

  10. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    Visual surveillance systems for law enforcement or police case investigation differ from traditional applications, because they are designed to monitor pedestrians, vehicles or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and risk entropy is introduced to model the requirements of police surveillance tasks on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of a PTZ camera.

  11. Measurement and processing of signatures in the visible range using a calibrated video camera and the CAMDET software package

    NASA Astrophysics Data System (ADS)

    Sheffer, Dan

    1997-06-01

    A procedure for calibration of a color video camera has been developed at EORD. The RGB values of standard samples, together with the spectral radiance values of the samples, are used to calculate a transformation matrix between the RGB and CIEXYZ color spaces. The transformation matrix is then used to calculate the XYZ color coordinates of distant objects imaged in the field. These, in turn, are used to calculate the CIELAB color coordinates of the objects. Good agreement between the calculated coordinates and those obtained from spectroradiometric data is achieved. Processing of the RGB values of pixels in the digital image of a scene using the CAMDET software package, which was developed at EORD, results in 'Painting Maps' in which the true apparent CIELAB color coordinates are used. The paper discusses the calibration procedure, its advantages and shortcomings, and suggests a definition for the visible signature of objects. The CAMDET software package is described and some examples are given.
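
    The calibration step described above reduces to fitting a 3 x 3 matrix that maps camera RGB to CIE XYZ from samples with known tristimulus values, after which CIELAB coordinates follow from the standard formulas. The sketch below is a generic least-squares version of that idea, not the EORD procedure itself; variable names are illustrative.

        import numpy as np

        def fit_rgb_to_xyz(rgb_samples, xyz_samples):
            """Least-squares fit of a 3x3 matrix M with XYZ ~ M @ RGB, from (N, 3)
            arrays of calibration samples with known tristimulus values."""
            X, *_ = np.linalg.lstsq(rgb_samples, xyz_samples, rcond=None)
            return X.T                      # so that xyz = M @ rgb per pixel

        def xyz_to_lab(xyz, white):
            """CIE 1976 L*a*b* from XYZ, relative to a reference white (length-3)."""
            def f(t):
                d = 6.0 / 29.0
                return np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)
            fx, fy, fz = f(np.asarray(xyz, float) / np.asarray(white, float))
            return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])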

  12. High-speed cineradiography using electronic imaging

    NASA Astrophysics Data System (ADS)

    Lucero, Jacob P.; Fry, David A.; Gaskill, William E.; Henderson, R. L.; Crawford, Ted R.; Carey, N. E.

    1993-01-01

    The Los Alamos National Laboratory has constructed and is now operating a cineradiography system for imaging and evaluation of ballistic interaction events at the 1200 meter range of the Terminal Effects Research and Analysis (TERA) Group at the New Mexico Institute of Mining and Technology. Cineradiography is part of a complete firing, tracking, and analysis system at the range. The cine system consists of flash x-ray sources illuminating a one-half meter by two meter fast phosphor screen which is viewed by gated-intensified high resolution still video cameras via turning mirrors. The entire system is armored to protect against events containing up to 13.5 kg of high explosive. Digital images are available for immediate display and processing. The system is capable of frame rates up to 10^5/sec for up to five total images.

  13. High speed cineradiography using electronic imaging

    NASA Astrophysics Data System (ADS)

    Lucero, J. P.; Fry, D. A.; Gaskill, W. E.; Henderson, R. L.; Crawford, T. R.; Carey, N. E.

    1992-12-01

    The Los Alamos National Laboratory has constructed and is now operating a cineradiography system for imaging and evaluation of ballistic interaction events at the 1200 meter range of the Terminal Effects Research and Analysis (TERA) Group at the New Mexico Institute of Mining and Technology. Cineradiography is part of a complete firing, tracking, and analysis system at the range. The cine system consists of flash x-ray sources illuminating a one-half meter by two meter fast phosphor screen which is viewed by gated-intensified high resolution still video cameras via turning mirrors. The entire system is armored to protect against events containing up to 13.5 kg of high explosive. Digital images are available for immediate display and processing. The system is capable of frame rates up to 10^5/sec for up to five total images.

  14. Review of high speed communications photomultiplier detectors

    NASA Technical Reports Server (NTRS)

    Enck, R. S.; Abraham, W. G.

    1978-01-01

    Four types of newly developed high speed photomultipliers are discussed: all electrostatic; static crossed field; dynamic crossed field; and hybrid (EBS). Design, construction, and performance parameters of each class are presented along with limitations of each class of device and prognosis for its future in high speed light detection. The particular advantage of these devices lies in high speed applications using low photon flux, large cathode areas, and broadband optical detection.

  15. High speed, real-time, camera bandwidth converter

    DOEpatents

    Bower, Dan E; Bloom, David A; Curry, James R

    2014-10-21

    Image data from a CMOS sensor with 10 bit resolution is reformatted in real time to allow the data to stream through communications equipment that is designed to transport data with 8 bit resolution. The incoming image data has 10 bit resolution. The communication equipment can transport image data with 8 bit resolution. Image data with 10 bit resolution is transmitted in real-time, without a frame delay, through the communication equipment by reformatting the image data.
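
    The patent abstract does not spell out the reformatting scheme, but one generic way to carry 10-bit data over an 8-bit transport, shown below purely as an illustration, is to pack four 10-bit pixels (40 bits) into five bytes and unpack them at the receiver.

        def pack_10bit_to_bytes(pixels):
            """Pack 10-bit pixel values into an 8-bit byte stream by grouping four
            pixels (40 bits) into five bytes. A generic illustration only; it is
            not necessarily the patented reformatting scheme."""
            assert len(pixels) % 4 == 0, "pad to a multiple of four pixels"
            out = bytearray()
            for i in range(0, len(pixels), 4):
                word = 0
                for p in pixels[i:i + 4]:
                    word = (word << 10) | (p & 0x3FF)      # build a 40-bit group
                out += word.to_bytes(5, "big")
            return bytes(out)

        def unpack_bytes_to_10bit(data):
            """Inverse of pack_10bit_to_bytes."""
            pixels = []
            for i in range(0, len(data), 5):
                word = int.from_bytes(data[i:i + 5], "big")
                pixels.extend([(word >> s) & 0x3FF for s in (30, 20, 10, 0)])
            return pixels

        # Round-trip check on a few sample values.
        assert unpack_bytes_to_10bit(pack_10bit_to_bytes([0, 1023, 512, 7])) == [0, 1023, 512, 7]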

  16. Liquid-crystal-display projector-based modulation transfer function measurements of charge-coupled-device video camera systems.

    PubMed

    Teipen, B T; MacFarlane, D L

    2000-02-01

    We demonstrate the ability to measure the system modulation transfer function (MTF) of both color and monochrome charge-coupled-device (CCD) video camera systems with a liquid-crystal-display (LCD) projector. Test matrices programmed to the LCD projector were chosen primarily to have a flat power spectral density (PSD) when averaged along one dimension. We explored several matrices and present results for a matrix produced with a random-number generator, a matrix of sequency-ordered Walsh functions, a pseudorandom Hadamard matrix, and a pseudorandom uniformly redundant array. All results are in agreement with expected filtering. The Walsh matrix and the Hadamard matrix show excellent agreement with the matrix from the random-number generator. We show that shift-variant effects between the LCD array and the CCD array can be kept small. This projector test method offers convenient measurement of the MTF of a low-cost video system. Such characterization is useful for an increasing number of machine vision applications and metrology applications. PMID:18337921
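
    The appeal of a test matrix with a flat power spectral density is that the one-dimensional system MTF can be estimated directly as the square root of the ratio of output to input PSD, averaged along one dimension. The sketch below captures that idea; it ignores the resampling between projector and sensor grids, which a real measurement must handle, and the array names are assumptions.

        import numpy as np

        def mtf_from_flat_psd_target(captured, displayed):
            """1-D MTF estimate from a flat-PSD test matrix: MTF(f) is proportional
            to sqrt(PSD_out(f) / PSD_in(f)), averaged over rows and normalized at
            the lowest nonzero frequency. Both inputs are 2-D arrays of equal shape."""
            def psd_rows(img):
                img = img - img.mean(axis=1, keepdims=True)
                return np.mean(np.abs(np.fft.rfft(img, axis=1)) ** 2, axis=0)[1:]  # drop DC
            ratio = np.sqrt(psd_rows(captured) / psd_rows(displayed))
            return ratio / ratio[0]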

  17. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    NASA Astrophysics Data System (ADS)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a moving camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving-camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body parts and the hand-blob changing factor. A Condensation algorithm and a PCA-based algorithm were used to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand-area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes of the hand blob. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that better recognition performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.

  18. Design and analysis of filter-based optical systems for spectral responsivity estimation of digital video cameras

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Jian, Hong-Da; Yeh, Zong-Mu; Cheng, Chin-Pao

    2004-02-01

    For estimating the spectral responsivities of digital video cameras, a filter-based optical system is designed with sophisticated filter selection in this paper. The choice of filters in the presence of noise is central to the optical system design, since the spectral filters primarily prescribe the structure of the perturbed system. A theoretical basis is presented to confirm that sophisticated filter selection can make this system as insensitive to noise as possible. Also, we propose a filter selection method based on the orthogonal-triangular (QR) decomposition with column pivoting (QRCP). To investigate the noise effects, we assess the estimation errors between the actual and estimated spectral responsivities at different signal-to-noise ratio (SNR) levels of an eight-bit/channel camera. Simulation results indicate that the proposed method yields satisfactory estimation accuracy. That is, the filter-based optical system with spectral filters selected by the QRCP-based method is much less sensitive to noise than those with other filters from different selections.
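
    The QRCP idea can be sketched compactly: treat the candidate filter transmittances as columns of a matrix and let QR with column pivoting order them so that the leading pivots are as linearly independent as possible, which tends to keep the resulting measurement matrix well conditioned. This is a sketch of the general technique under assumed array shapes, not a reproduction of the paper's exact selection criterion.

        import numpy as np
        from scipy.linalg import qr

        def select_filters_qrcp(candidate_spectra, k):
            """Pick k filters from a library using QR with column pivoting.
            candidate_spectra: (n_wavelengths, n_candidates) array whose columns
            are candidate filter transmittances. Returns the chosen column indices
            and the condition number of the selected sub-matrix."""
            _, _, piv = qr(candidate_spectra, pivoting=True, mode='economic')
            chosen = piv[:k]
            cond = np.linalg.cond(candidate_spectra[:, chosen])
            return chosen, cond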

  19. High-Speed Ring Bus

    NASA Technical Reports Server (NTRS)

    Wysocky, Terry; Kopf, Edward, Jr.; Katanyoutananti, Sunant; Steiner, Carl; Balian, Harry

    2010-01-01

    The high-speed ring bus at the Jet Propulsion Laboratory (JPL) allows for future growth trends in spacecraft seen with future scientific missions. This innovation constitutes an enhancement of the 1393 bus as documented in the Institute of Electrical and Electronics Engineers (IEEE) 1393-1999 standard for a spaceborne fiber-optic data bus. It allows for high-bandwidth and time synchronization of all nodes on the ring. The JPL ring bus allows for interconnection of active units with autonomous operation and increased fault handling at high bandwidths. It minimizes the flight software interface with an intelligent physical layer design that has few states to manage as well as simplified testability. The design will soon be documented in the AS-1393 standard (Serial Hi-Rel Ring Network for Aerospace Applications). The framework is designed for "Class A" spacecraft operation and provides redundant data paths. It is based on "fault containment regions" and "redundant functional regions (RFR)" and has a method for allocating cables that completely supports the redundancy in spacecraft design, allowing for a complete RFR to fail. This design reduces the mass of the bus by incorporating both the Control Unit and the Data Unit in the same hardware. The standard uses ATM (asynchronous transfer mode) packets, standardized by ITU-T, ANSI, ETSI, and the ATM Forum. The IEEE-1393 standard uses the UNI form of the packet and provides no protection for the data portion of the cell. The JPL design adds optional formatting to this data portion. This design extends fault protection beyond that of the interconnect. This includes adding protection to the data portion that is contained within the Bus Interface Units (BIUs) and by adding to the signal interface between the Data Host and the JPL 1393 Ring Bus. Data transfer on the ring bus does not involve a master or initiator. Following bus protocol, any BIU may transmit data on the ring whenever it has data received from its host. There

  20. Imaging of high-speed dust particle trajectories on NSTX

    SciTech Connect

    Roquemore, A. L.; Davis, W.; Kaita, R.; Skinner, C. H.; Maqueda, R.; Nishino, N.

    2006-10-15

    Imaging of high-speed incandescent dust particle trajectories in a tokamak plasma has been accomplished on NSTX using up to three high-speed cameras each viewing the same plasma volume from different locations and operating at speeds up to 68 000 frames/s with exposure times varying from 2 to 300 μs. The dynamics of the dust trajectories can be quite complex exhibiting a large variation in both speed (10-200 m/s) and direction. Simulations of these trajectories will be utilized to ascertain the role dust may play in future machines such as ITER where significant dust production from wall erosion is expected. NSTX has numerous view ports including both tangential as well as radial views in both the midplane and lower divertors. Several vertical ports are also available so that a few specific regions in NSTX may be viewed simultaneously from several different camera positions. The cameras can be operated in the full visible spectrum but near-infrared filters can be utilized to enhance the observation of incandescent particles against a bright background. A description of the cameras and required optics is presented.

  1. Observation and analysis of high-speed human motion with frequent occlusion in a large area

    NASA Astrophysics Data System (ADS)

    Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng

    2009-12-01

    The use of computer vision technology to collect and analyze statistics during sports matches or training sessions is expected to provide valuable information for improving tactics. However, the measurements published in the literature so far are either too unreliable to be used in training planning, due to their limitations, or unsuitable for studying high-speed motion over a large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of a moving camera, taking short-track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point-feature detector (probability density grads detector) and a global-parallax-based matching-point filter are used to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a joint color model constrained by the relationship between two regions and a Markov chain Monte Carlo based joint particle filter are emphasized, dividing the human body into two key regions. Several field tests are performed to assess measurement errors, including comparison with popular algorithms. The system presented obtains position data on a 30 m × 60 m rink with root-mean-square error better than 0.3975 m, and velocity and acceleration data with absolute errors better than 1.2579 m s-1 and 0.1494 m s-2, respectively.

  2. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    PubMed Central

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

  3. High-speed pulse-shape generator, pulse multiplexer

    DOEpatents

    Burkhart, Scott C.

    2002-01-01

    The invention combines arbitrary amplitude high-speed pulses for precision pulse shaping for the National Ignition Facility (NIF). The circuitry combines arbitrary height pulses which are generated by replicating scaled versions of a trigger pulse and summing them delayed in time on a pulse line. The combined electrical pulses are connected to an electro-optic modulator which modulates a laser beam. The circuit can also be adapted to combine multiple channels of high speed data into a single train of electrical pulses which generates the optical pulses for very high speed optical communication. The invention has application in laser pulse shaping for inertial confinement fusion, in optical data links for computers, telecommunications, and in laser pulse shaping for atomic excitation studies. The invention can be used to effect at least a 10× increase in all fiber communication lines. It allows a greatly increased data transfer rate between high-performance computers. The invention is inexpensive enough to bring high-speed video and data services to homes through a super modem.
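
    The summing idea is easy to mirror numerically: replicate a trigger pulse, scale each replica, delay it, and add the replicas on a common line. The sketch below is an illustration of that principle with arbitrary sample units, not a model of the NIF hardware.

        import numpy as np

        def shaped_pulse(trigger, weights, delay_samples):
            """Synthesize a shaped pulse by summing scaled, time-delayed replicas of
            a trigger pulse. trigger: 1-D array; weights: per-replica amplitudes;
            delay_samples: spacing between replicas (all values illustrative)."""
            n = len(trigger) + delay_samples * (len(weights) - 1)
            out = np.zeros(n)
            for i, w in enumerate(weights):
                start = i * delay_samples
                out[start:start + len(trigger)] += w * trigger
            return out

        # e.g. a stepped envelope built from a short Gaussian trigger pulse
        t = np.arange(64)
        trigger = np.exp(-0.5 * ((t - 32) / 6.0) ** 2)
        pulse = shaped_pulse(trigger, weights=[0.2, 0.5, 0.8, 1.0], delay_samples=40)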

  4. Research on frame capture of high speed and image storage

    NASA Astrophysics Data System (ADS)

    Hao, Dong; Ju, Huo

    2007-01-01

    Conflicts among high speed, high data rate, and long recording time remain problems in the field of frame grabbing. They have been partly and temporarily addressed by the development of RAID technology. A frame-grabbing system with high speed and large storage was built using RAID technology and Fibre Channel. It is able to keep recording frames at high speed for a long time without reducing resolution. The system has been set up successfully, and its recording and display processes can be fully controlled. The problem of showing stable live video in real time while recording has been solved. The hardware composition of the system is given in the paper, and the principle of its operation is described. To record at high speed without dropping frames, and to ensure image quality while synchronizing with external signals generated by an external circuit, several synchronization schemes are discussed and compared. The most suitable scheme was chosen through theoretical analysis and verified by experiment.

  5. High-speed imaging of blood splatter patterns

    SciTech Connect

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F.

    1993-05-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  6. High-speed imaging of blood splatter patterns

    SciTech Connect

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F. (Bureau of Forensic Services)

    1993-01-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  7. High-Speed Photography with Computer Control.

    ERIC Educational Resources Information Center

    Winters, Loren M.

    1991-01-01

    Describes the use of a microcomputer as an intervalometer for the control and timing of several flash units to photograph high-speed events. Applies this technology to study the oscillations of a stretched rubber band, the deceleration of high-speed projectiles in water, the splashes of milk drops, and the bursts of popcorn kernels. (MDH)

  8. Reducing Heating In High-Speed Cinematography

    NASA Technical Reports Server (NTRS)

    Slater, Howard A.

    1989-01-01

    Infrared-absorbing and infrared-reflecting glass filters provide a simple and effective means of reducing the rise in temperature during high-speed motion-picture photography. "Hot-mirror" and "cold-mirror" configurations, employed in projection of images, help prevent excessive heating of scenes by the powerful lamps used in high-speed photography.

  9. Record And Analysis Of High-Speed Photomicrography On Rheology Of Red Blood Cells In Vivo

    NASA Astrophysics Data System (ADS)

    Jian, Zhang; Yuju, Lin; Jizong, Wu; Qiang, Wang; Guishan, Li; Ni, Liang

    1989-06-01

    Microcirculation is the basic functional unit of blood circulation in the human body. Oxygen delivery and carbon dioxide removal are accomplished through the flow and deformation of red blood cells (RBCs) in capillaries. RBC rheology plays an important role in maintaining normal blood perfusion and nutritional metabolism. Quantitative study of RBC rheology in the capillaries of live animals is therefore of great significance for understanding blood perfusion, RBC dynamics, blood cell microrheology, the laws of microcirculation and the causes of disease. In recent years, Tianjin University, cooperating with the Institute of Hematology, used high-speed photomicrography to record the flow states of RBCs in the capillaries of the hamster cheek pouch and the frog web. Several systems were assembled based on studies of luminous energy transmission, illumination and optical matching. These systems included a micro high-speed camera system, a micro high-speed video recorder system, and a micro high-speed camera system combined with an image-intensifier tube. Useful results were obtained by photographing the flow states of RBCs, followed by film analysis and data processing. These results provide valuable data on the dynamic mechanism by which RBCs are deformed by different blood flow fields.

  10. High-Speed Imaging of Shock-Wave Motion in Aviation Security Research

    NASA Astrophysics Data System (ADS)

    Anderson, B. W.; Settles, G. S.; Miller, J. D.; Keane, B. T.; Gatto, J. A.

    2001-11-01

    A high-speed drum camera is used in conjunction with Penn State's Full-Scale Schlieren Facility to capture blast wave phenomena in aviation security scenarios. Several hundred photographic frames at a rate of 45k frames/sec allow the imaging of entire explosive events typical of blasts inside an aircraft fuselage. The large (2.3 x 2.9 m) schlieren field-of-view further allows these experiments to be done at or near full-scale. Shock waves up to Mach 1.3 are produced by detonating small balloons filled with an oxygen-acetylene gas mixture. Blasts underneath actual aircraft seats occupied by mannequins reveal shock motion inside a passenger cabin. Blasts were also imaged within the luggage container of a 3/5-scale aircraft fuselage, including hull-holing, as occurred in the Pan Am 103 incident. Drum-camera frames are assembled into digital video clips of several seconds duration, which will be shown in the presentation. These brief movies provide the first-ever visualization of shock motion due to explosions onboard aircraft. They also illustrate the importance of shock imaging in aircraft-hardening experiments, and they provide data to validate numerical predictions of such events. Supported by FAA Grant 99-G-032.

  11. A simple, inexpensive video camera setup for the study of avian nest activity

    USGS Publications Warehouse

    Sabine, J.B.; Meyers, J.M.; Schweitzer, Sara H.

    2005-01-01

    Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus), American Crow (Corvus brachyrhynchos), and ghost crab (Ocypode quadrata) predation on oystercatcher nests. Other detected causes of nest failure included tidal overwash, horse trampling, abandonment, and human destruction. System failure rates were comparable with commercially available units. Our system's efficacy and low cost (<$800) provided useful data for the management and conservation of the American Oystercatcher.

  12. High-Speed Schlieren Movies of Decelerators at Supersonic Speeds

    NASA Technical Reports Server (NTRS)

    1960-01-01

    High-Speed Schlieren Movies of Decelerators at Supersonic Speeds. Tests were conducted on several types of porous parachutes, a paraglider, and a simulated retrorocket. Mach numbers ranged from 1.8-3.0, porosity from 20-80 percent, and camera speeds from 1680-3000 frames per second (fps) in trials with porous parachutes. Trials of reefed parachutes were conducted at Mach number 2.0 and reefing of 12-33 percent at camera speeds of 600 fps. A flexible parachute with an inflatable ring in the periphery of the canopy was tested at a Reynolds number of 750,000 per foot, Mach number 2.85, porosity of 28 percent, and camera speed of 3600 fps. A vortex-ring parachute was tested at Mach number 2.2 and camera speed of 3000 fps. The paraglider, with a sweepback of 45 degrees at an angle of attack of 45 degrees, was tested at Mach number 2.65, drag coefficient of 0.200, and lift coefficient of 0.278 at a camera speed of 600 fps. A cold air jet exhausting upstream from the center of a bluff body was used to simulate a retrorocket. The free-stream Mach number was 2.0, free-stream dynamic pressure was 620 lb/sq ft, jet-exit static pressure ratio was 10.9, and camera speed was 600 fps. [Entire movie available on DVD from CASI as Doc ID 20070030973. Contact help@sti.nasa.gov.]

  13. Jellyfish Support High Energy Intake of Leatherback Sea Turtles (Dermochelys coriacea): Video Evidence from Animal-Borne Cameras

    PubMed Central

    Heaslip, Susan G.; Iverson, Sara J.; Bowen, W. Don; James, Michael C.

    2012-01-01

    The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08–3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83–100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ•d−1 but were as high as 167,797 kJ•d−1 corresponding to turtles consuming an average of 330 kg wet mass•d−1 (up to 840 kg•d−1) or approximately 261 (up to 664) jellyfish•d-1. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass•d−1 equating to an average energy intake of 3–7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to
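
    The figures quoted are internally consistent, as a quick back-of-the-envelope check shows; the per-jellyfish mass and wet-mass energy density below are inferred from the abstract's own averages, not independently measured values.

        # Averages quoted in the abstract
        energy_per_day_kj = 66_018      # kJ per day
        wet_mass_per_day_kg = 330       # kg jellyfish per day
        jellyfish_per_day = 261
        body_mass_kg = 455              # assumed average turtle body mass

        print(energy_per_day_kj / wet_mass_per_day_kg)   # ~200 kJ per kg wet mass
        print(wet_mass_per_day_kg / jellyfish_per_day)   # ~1.26 kg per jellyfish
        print(100 * wet_mass_per_day_kg / body_mass_kg)  # ~73% of body mass per day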

  14. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    PubMed

    Heaslip, Susan G; Iverson, Sara J; Bowen, W Don; James, Michael C

    2012-01-01

    The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ • d(-1) but were as high as 167,797 kJ • d(-1) corresponding to turtles consuming an average of 330 kg wet mass • d(-1) (up to 840 kg • d(-1)) or approximately 261 (up to 664) jellyfish • d(-1). Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass • d(-1) equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to

  15. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  16. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have several advantages over traditional CCD sensors, and imaging systems based on CMOS have become a focus of research and development. To achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SL75T, and a Camera Link interface and the AM41V4 CMOS image sensor are used to transmit and acquire image data. The AM41V4 is a 4-megapixel, 500 frames-per-second CMOS image sensor with a global shutter and a 4/3" optical format; it uses column-parallel A/D converters to digitize the images. The Camera Link interface is built around the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The light reflected from the object is captured by the CMOS detector, which converts it to electronic signals and sends them to the FPGA. The FPGA processes the received data and transmits it through the Camera Link interface, configured in full mode, to a host computer equipped with acquisition cards, where the images are stored, visualized, and processed. The paper explains the structure and principle of the system and describes its hardware and software design. The FPGA provides the drive clock for the CMOS sensor; the sensor data are converted to LVDS signals and transmitted to the data acquisition cards. A simulated row-transfer timing sequence for the CMOS sensor is presented. The system achieves real-time image acquisition and external control.
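
    Since the DS90CR287 carries 28 parallel bits on four LVDS streams, each pixel clock serializes 7 bits per lane. The sketch below only illustrates that 7:1 packing ratio in the abstract; the bit ordering of the real device differs and is not reproduced here.

        def channel_link_pack(word28):
            """Schematically split a 28-bit word into four 7-bit serial lanes (7:1 serialization).
            The real DS90CR287 bit assignment differs; this only illustrates the ratio."""
            assert 0 <= word28 < (1 << 28)
            lanes = []
            for lane in range(4):
                lanes.append((word28 >> (7 * lane)) & 0x7F)   # 7 bits per lane
            return lanes

        print([f"{v:07b}" for v in channel_link_pack(0x0ABCDEF)])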

  17. Stereoscopic high-speed imaging using additive colors

    NASA Astrophysics Data System (ADS)

    Sankin, Georgy N.; Piech, David; Zhong, Pei

    2012-04-01

    An experimental system for digital stereoscopic imaging produced by using a high-speed color camera is described. Two bright-field image projections of a three-dimensional object are captured utilizing additive-color backlighting (blue and red). The two images are simultaneously combined on a two-dimensional image sensor using a set of dichromatic mirrors, and stored for off-line separation of each projection. This method has been demonstrated in analyzing cavitation bubble dynamics near boundaries. This technique may be useful for flow visualization and in machine vision applications.
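
    Because the two projections are encoded in complementary backlight colors on a single color camera, they can be separated off-line simply by splitting the color channels. A minimal sketch with NumPy, assuming the recovered frames are ordinary RGB arrays; the calibration and registration steps of the actual system are not shown.

        import numpy as np

        def split_projections(rgb_frame):
            """Separate the red- and blue-backlit projections from one color frame.
            rgb_frame: H x W x 3 array with channels ordered R, G, B."""
            red_view = rgb_frame[:, :, 0]    # projection illuminated by the red backlight
            blue_view = rgb_frame[:, :, 2]   # projection illuminated by the blue backlight
            return red_view, blue_view

        frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame
        left, right = split_projections(frame)
        print(left.shape, right.shape)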

  18. High speed lasercomm data transfer in Seahawk 2007 exercise

    NASA Astrophysics Data System (ADS)

    Burris, H. R.; Moore, C. I.; Waterman, J. R.; Suite, M. R.; Vilardebo, K.; Wasiczko, L. M.; Rabinovich, W. S.; Mahon, R.; Ferraro, M. S.; Sainte Georges, E.; Uecke, S.; Poirier, P.; Lovern, M.; Hanson, F.

    2008-04-01

    The U.S. Naval Research Laboratory (NRL) established a one-way Gigabit Ethernet lasercomm link during the Seahawk exercise in August, 2007 to transfer data ~8 miles across the inlet of San Diego Bay from Point Loma to the Imperial Beach base camp. The data transferred over the link was from an NRL developed, wide field of view (90 degrees), high resolution, mid-wave infrared camera operating at 30 frames per second. Details of the high speed link will be presented as well as packet error rate data and atmospheric propagation data taken during the two week long exercise.

  19. Synchronizing Photography For High-Speed-Engine Research

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1989-01-01

    Light flashes when shaft reaches predetermined angle. Synchronization system facilitates visualization of flow in high-speed internal-combustion engines. Designed for cinematography and holographic interferometry, system synchronizes camera and light source with predetermined rotational angle of engine shaft. 10-bit output of absolute optical shaft encoder adapted, and all 2^10 combinations of the 10-bit binary data converted to corresponding angle values. Precomputed angle values programmed into EPROM's (erasable programmable read-only memories) for use as angle lookup table. Resolves shaft angle to within 0.35 degree at rotational speeds up to 73,240 revolutions per minute.
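
    A 10-bit absolute encoder gives 2^10 = 1024 codes per revolution, i.e. 360/1024 = 0.352 degree per count, which matches the quoted 0.35-degree resolution. The sketch below builds that kind of precomputed lookup table; the actual EPROM contents are not given in the abstract.

        # Pre-compute shaft angle (degrees) for every 10-bit encoder code.
        RESOLUTION_BITS = 10
        angle_lut = [code * 360.0 / (1 << RESOLUTION_BITS) for code in range(1 << RESOLUTION_BITS)]

        print(angle_lut[1] - angle_lut[0])   # ~0.3516 degree per count
        print(angle_lut[512])                # 180.0 degrees at half scale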

  20. Damping Bearings In High-Speed Turbomachines

    NASA Technical Reports Server (NTRS)

    Von Pragenau, George L.

    1994-01-01

    Paper presents comparison of damping bearings with traditional ball, roller, and hydrostatic bearings in high-speed cryogenic turbopumps. Concept of damping bearings described in "Damping Seals and Bearings for a Turbomachine" (MFS-28345).

  1. Study of high speed photography measuring instrument

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijun; Sun, Jiyu; Wu, Keyong

    2007-01-01

    High-speed photography measuring instruments are mainly used to measure and track exterior ballistics, recording the position of a missile in the initial phase of flight and along its trajectory. This paper presents a new high-speed photography measuring instrument. The system records the object's parameters in real time and then derives the position and trajectory data of the missile in the initial phase of flight. The maximum detection distance is more than 4.5 km and the minimum detection distance is 450 m; under conditions of well-balanced angular velocity and angular acceleration, the programmed pilot tracking error is less than 5'. The instrument can also measure and record the flight path and trajectory parameters of air-launched naval missiles.

  2. Lubrication and cooling for high speed gears

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.

    1985-01-01

    The problems and failures occurring in the operation of high-speed gears are discussed. The gearing losses associated with high-speed gearing, such as tooth mesh friction, bearing friction, churning, and windage, are discussed, along with various ways to reduce these losses and thereby improve efficiency. Several different methods of oil jet lubrication for high-speed gearing are described, including into-mesh, out-of-mesh, and radial jet lubrication. The experimental and analytical results for the various methods of oil jet lubrication are presented, and the strengths and weaknesses of each method are discussed. The analytical and experimental results of gear lubrication and cooling at various test conditions are presented. These results show a definite need for improved methods of gear cooling at high-speed, high-load conditions.

  3. Bird-Borne Video-Cameras Show That Seabird Movement Patterns Relate to Previously Unrevealed Proximate Environment, Not Prey

    PubMed Central

    Tremblay, Yann; Thiebault, Andréa; Mullers, Ralf; Pistorius, Pierre

    2014-01-01

    The study of ecological and behavioral processes has been revolutionized in the last two decades with the rapid development of biologging-science. Recently, using image-capturing devices, some pilot studies demonstrated the potential of understanding marine vertebrate movement patterns in relation to their proximate, as opposed to remote sensed environmental contexts. Here, using miniaturized video cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape gannet, is fundamental in understanding the origins of its movement patterns. We found that movement patterns were related to specific stimuli which were mostly other predators such as gannets, dolphins or fishing boats. Contrary to a widely accepted idea, our data suggest that foraging seabirds are not directly looking for prey. Instead, they search for indicators of the presence of prey, the latter being targeted at the very last moment and at a very small scale. We demonstrate that movement patterns of foraging seabirds can be heavily driven by processes unobservable with conventional methodology. Except perhaps for large scale processes, local-enhancement seems to be the only ruling mechanism; this has profound implications for ecosystem-based management of marine areas. PMID:24523892

  4. Assessing the application of an airborne intensified multispectral video camera to measure chlorophyll a in three Florida estuaries

    SciTech Connect

    Dierberg, F.E.; Zaitzeff, J.

    1997-08-01

    After absolute and spectral calibration, an airborne intensified, multispectral video camera was field tested for water quality assessments over three Florida estuaries (Tampa Bay, Indian River Lagoon, and the St. Lucie River Estuary). Univariate regression analysis of upwelling spectral energy vs. ground-truthed uncorrected chlorophyll a (Chl a) for each estuary yielded lower coefficients of determination (R{sup 2}) with increasing concentrations of Gelbstoff within an estuary. More predictive relationships were established by adding true color as a second independent variable in a bivariate linear regression model. These regressions successfully explained most of the variation in upwelling light energy (R{sup 2}=0.94, 0.82 and 0.74 for the Tampa Bay, Indian River Lagoon, and St. Lucie estuaries, respectively). Ratioed wavelength bands within the 625-710 nm range produced the highest correlations with ground-truthed uncorrected Chl a, and were similar to those reported as being the most predictive for Chl a in Tennessee reservoirs. However, the ratioed wavebands producing the best predictive algorithms for Chl a differed among the three estuaries due to the effects of varying concentrations of Gelbstoff on upwelling spectral signatures, which precluded combining the data into a common data set for analysis.
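
    The bivariate model described is an ordinary least-squares fit of the upwelling band-ratio energy against Chl a and true color. A generic sketch with NumPy follows; the numbers are synthetic placeholders, and the band ratios, units, and fitted coefficients of the actual study differ by estuary and are not reproduced here.

        import numpy as np

        # Synthetic example data (arbitrary values, for illustration only)
        chl_a = np.array([2.0, 5.5, 8.1, 12.3, 20.4])         # ground-truthed Chl a
        color = np.array([15.0, 22.0, 30.0, 41.0, 55.0])      # true color
        upwelling = np.array([0.11, 0.18, 0.24, 0.35, 0.52])  # band-ratio energy

        X = np.column_stack([np.ones_like(chl_a), chl_a, color])   # intercept + 2 predictors
        coef, *_ = np.linalg.lstsq(X, upwelling, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((upwelling - pred) ** 2) / np.sum((upwelling - upwelling.mean()) ** 2)
        print(coef, r2)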

  5. Thermographic measurements of high-speed metal cutting

    NASA Astrophysics Data System (ADS)

    Mueller, Bernhard; Renz, Ulrich

    2002-03-01

    Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To obtain images without motion blur, the integration times were reduced to a few microseconds. Because high tool wear influences the measured temperatures, a set-up was realized that allows small cutting lengths. Only single images were recorded, because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera used. To expose the camera when the rotating tool is in the middle of the image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns was realized; this enables very precise triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary, which was obtained with a special close-up lens allowing a resolution of approximately 45 microns. The experimental set-up will be described and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.
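
    The need for microsecond integration times follows directly from the cutting speed: the blur during exposure is simply v times the integration time, so at 42 m/s even 1 microsecond smears the scene by roughly one 45-micron resolution element. A quick check with assumed candidate integration times:

        cutting_speed = 42.0          # m/s, fastest speed quoted
        pixel_resolution = 45e-6      # m, spatial resolution of the close-up optics

        for t_int in (1e-6, 2e-6, 5e-6):       # candidate integration times, s (assumed values)
            blur = cutting_speed * t_int        # metres of motion during the exposure
            print(f"{t_int*1e6:.0f} us -> blur {blur*1e6:.0f} um "
                  f"({blur/pixel_resolution:.1f} resolution elements)")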

  6. A real time data compactor (sparsifier) and 8 megabyte high speed FIFO for HEP

    SciTech Connect

    Baumbaugh, A.E.; Knickerbocker, K.L.; Wegner, C.R.; Ruchti, R.

    1986-02-01

    A Video-Data-Acquisition-System (VDAS) has been developed to record image data from a scintillating glass fiber-optic target developed for High Energy Physics. The major components of the VDAS are a flash ADC, a "real time" high speed data compactor, and a high speed 8 megabyte FIFO memory. The data rates through the system are in excess of 30 megabytes/second. The compactor is capable of reducing the amount of data needed to reconstruct typical images by as much as a factor of 20. The FIFO uses only standard NMOS DRAMs and TTL components to achieve its large size and high speed at relatively low power and cost.
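
    The abstract does not spell out the compaction scheme, but a "sparsifier" for imaging data of this kind typically suppresses below-threshold samples and keeps only (address, value) pairs. The sketch below is purely illustrative of that idea, not the actual VDAS hardware logic.

        def sparsify(samples, threshold=0):
            """Keep only above-threshold samples as (index, value) pairs (zero suppression)."""
            return [(i, v) for i, v in enumerate(samples) if v > threshold]

        def expand(pairs, length):
            """Rebuild the full record from the compacted pairs."""
            out = [0] * length
            for i, v in pairs:
                out[i] = v
            return out

        raw = [0, 0, 7, 0, 0, 0, 12, 3, 0, 0, 0, 0]   # toy flash-ADC record
        compact = sparsify(raw)
        assert expand(compact, len(raw)) == raw
        print(len(raw), "->", 2 * len(compact), "stored words")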

  7. Real time data compactor (sparsifier) and 8 megabyte high speed FIFO for HEP

    SciTech Connect

    Baumbaugh, A.E.; Knickerbocker, K.L.; Wegner, C.R.; Baumbaugh, B.W.; Ruchti, R.

    1985-10-01

    A Video-Data-Acquisition-System (VDAS) has been developed to record image data from a scintillating glass fiber-optic target developed for High Energy Physics. The major components of the VDAS are a flash ADC, a "real time" high speed data compactor, and a high speed 8 megabyte FIFO memory. The data rates through the system are in excess of 30 megabytes/second. The compactor is capable of reducing the amount of data needed to reconstruct typical images by as much as a factor of 20. The FIFO uses only standard NMOS DRAMs and TTL components to achieve its large size and high speed at relatively low power and cost.

  8. High speed photography, videography, and photonics V; Proceedings of the Meeting, San Diego, CA, Aug. 17-19, 1987

    NASA Technical Reports Server (NTRS)

    Johnson, Howard C. (Editor)

    1988-01-01

    Recent advances in high-speed optical and electrooptic devices are discussed in reviews and reports. Topics examined include data quantification and related technologies, high-speed photographic applications and instruments, flash and cine radiography, and novel ultrafast methods. Also considered are optical streak technology, high-speed videographic and photographic equipment, and X-ray streak cameras. Extensive diagrams, drawings, graphs, sample images, and tables of numerical data are provided.

  9. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964
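
    The bandwidth demands are easy to see from the numbers given. Assuming 8-bit pixels (a typical digitized-video depth, not stated in the abstract), one uncompressed 512 x 512, 30 frame/s angiographic stream comes to roughly 7.9 MB/s:

        width, height = 512, 512
        bits_per_pixel = 8          # assumed depth; the abstract does not state it
        frames_per_second = 30

        bytes_per_second = width * height * (bits_per_pixel // 8) * frames_per_second
        print(bytes_per_second / 1e6, "MB/s per uncompressed stream")   # ~7.9 MB/s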

  10. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips

    PubMed Central

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L.; Wang, Qianxi X.; Leppinen, David M.; Walmsley, A. Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power while the area and width of the cavitation cloud varies for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation. PMID:26934340

  11. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips.

    PubMed

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L; Wang, Qianxi X; Leppinen, David M; Walmsley, A Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power while the area and width of the cavitation cloud varies for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation. PMID:26934340

  12. Laser illuminated high speed photography of energetic materials

    SciTech Connect

    Dosser, L.R.; Reed, J.W.; Stark, M.A.

    1988-01-01

    The evaluation of the properties of energetic materials, such as burn rate and ignition, is of primary importance in understanding their reactions and how devices containing them perform their function. We have recently applied high speed photography at rates of up to 20,000 images per second to this problem. When a copper vapor laser is synchronized to the high speed camera, laser illuminated images can be recorded that detail the performance of a component in a manner never before possible. The copper vapor laser used for these experiments had an average power of 30 watts, and produced pulses at a rate of up to 10 kHz. The 30 nanosecond pulsewidth of the laser essentially freezes all motion in the functioning component, thus providing stop-action pictures at a rate of up to 10,000 per second. Each laser pulse has a peak power of approximately 170,000 watts which provides ample illumination for the high speed photography. Several energetic materials and components studied include the pyrotechnic Ti/2B, a pyrotechnic torch, laser ignition of high explosives, and a functioning igniter.

  13. Image qualification of high-speed film for crash tests

    NASA Astrophysics Data System (ADS)

    Oleksy, Jerry E.; Choi, James H.

    1994-10-01

    The Department of Transportation, National Highway Traffic Safety Administration (NHTSA) desires to develop qualifications for the film received from independent organizations that perform automobile collision performance tests. All crash tests are recorded on high-speed film. Running 18 cameras at frame rates up to 1000 frames per second is not uncommon. Some of the films acquired during the 150 to 200 ms time-frame of the actual crash are used for computation of both human and vehicle kinematics. Detailed recommendations for performing the tests are outlined by NHTSA in SAE documents. However, no specifications for film quality are defined. Problems arise when unclear and/or incorrectly exposed films result in images unsuitable for analysis. Aspects of the optical data channel that are evaluated in this paper include lighting, lenses, cameras, film, film processing, and timing. Recommendations for reliable data acquisition as well as a set of criteria are also developed.

  14. LISS-4 camera for Resourcesat

    NASA Astrophysics Data System (ADS)

    Paul, Sandip; Dave, Himanshu; Dewan, Chirag; Kumar, Pradeep; Sansowa, Satwinder Singh; Dave, Amit; Sharma, B. N.; Verma, Anurag

    2006-12-01

    The Indian Remote Sensing Satellites use indigenously developed high-resolution cameras to generate data on vegetation, landform/geomorphic features, and geological boundaries. Data from this camera are used to produce maps at 1:12,500 scale for national-level policy development in town planning, vegetation monitoring, and similar applications. The LISS-4 camera was launched onboard the Resourcesat-1 satellite by ISRO in 2003. LISS-4 is a high-resolution multispectral camera with three spectral bands, a resolution of 5.8 m, and a swath of 23 km from an altitude of 817 km. The panchromatic mode provides a swath of 70 km and a 5-day revisit. This paper briefly discusses the configuration of the LISS-4 camera of Resourcesat-1, its onboard performance, and the changes in the camera being developed for Resourcesat-2. The LISS-4 camera images the earth in push-broom mode. It is designed around a three-mirror unobscured telescope, three linear 12-K CCDs, and associated electronics for each band. The three spectral bands are realized by splitting the focal plane in the along-track direction using an isosceles prism. High-speed camera electronics with 12-bit digitization and digital double sampling of the video are designed for each detector. Seven-bit data, selected from the 10 MSBs by telecommand, are transmitted. The total dynamic range of the sensor covers up to 100% albedo. The camera structure has heritage from IRS-1C/D. The optical elements are precisely glued to specially designed flexure mounts. The camera is assembled on a rotating deck on the spacecraft to facilitate +/- 26° steering in the pitch-yaw plane and is held in a stowed condition before deployment. The excellent imagery from the LISS-4 camera onboard Resourcesat-1 is routinely used worldwide. A second such camera, with similar performance, is being developed for the Resourcesat-2 launch in 2007. Its camera electronics are optimized and miniaturized: the size and weight are reduced to one third, and the power to one half, of the Resourcesat-1 values.

  15. Evaluating the Effects of Camera Perspective in Video Modeling for Children with Autism: Point of View versus Scene Modeling

    ERIC Educational Resources Information Center

    Cotter, Courtney

    2010-01-01

    Video modeling has been used effectively to teach a variety of skills to children with autism. This body of literature is characterized by a variety of procedural variations including the characteristics of the video model (e.g., self vs. other, adult vs. peer). Traditionally, most video models have been filmed using third person perspective…

  16. Aerodynamics of High-Speed Trains

    NASA Astrophysics Data System (ADS)

    Schetz, Joseph A.

    This review highlights the differences between the aerodynamics of high-speed trains and other types of transportation vehicles. The emphasis is on modern, high-speed trains, including magnetic levitation (Maglev) trains. Some of the key differences are derived from the fact that trains operate near the ground or a track, have much greater length-to-diameter ratios than other vehicles, pass close to each other and to trackside structures, are more subject to crosswinds, and operate in tunnels with entry and exit events. The coverage includes experimental techniques and results and analytical and numerical methods, concentrating on the most recent information available.

  17. Small, high-speed dataflow processor

    SciTech Connect

    Leler, W.

    1983-01-01

    Dataflow processors show much promise for high-speed computation at reasonable cost, but they are not without problems. The author discusses a processor design which combines ideas from dynamic dataflow architecture with those from reduced instruction set computers and proven large computers with parallel internal structures. The resulting processor includes a number of innovations, including operand destinations, killer tokens, I/O streams and closed-loop computation, which result in a small, relatively inexpensive processor capable of high-speed computation. The expected application areas of the processor include interactive computer graphics, signal processing, and artificial intelligence. 6 references.

  18. Effects of chamber temperature and pressure on the characteristics of high speed diesel jets

    NASA Astrophysics Data System (ADS)

    Sittiwong, W.; Pianthong, K.; Seehanam, W.; Milton, B. E.; Takayama, K.

    2012-05-01

    This study investigates the effects of temperature and pressure within a test chamber on the dynamic characteristics of injected supersonic diesel fuel jets. The jets were generated by the impact of a projectile driven by a horizontal single-stage powder gun, and a high speed video camera and a shadowgraph optical system were used to capture their dynamic characteristics. The test chamber provided controlled air conditions of temperature and pressure up to 150 °C and 8.2 bar, respectively. It was found experimentally that, at the highest temperature, a maximum jet velocity of around 1,500 m/s was obtained; at this temperature a narrow, pointed jet appeared, while at the highest pressure a thick, blunt-headed jet was obtained. Strong shock waves were generated at the jet head in both cases. For analytical prediction, equations for jet tip velocity and penetration from the work of Dent and of Hiroyasu were employed to describe the dynamic characteristics of the experiments at a standard condition of 1 bar and 30 °C. These analytical predictions show reasonable agreement with the experimental results, although the experimental trend differs in slope because of the effects of pressure, density fluctuations during injection, and the shock wave phenomena occurring during jet generation.
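
    The Hiroyasu-type penetration relations referred to are commonly quoted in the form below; the coefficients are those usually cited in the diesel-spray literature, and the exact variants used by the authors may differ. In the notation used here, \Delta P is the pressure drop across the nozzle, \rho_l and \rho_a the liquid and ambient gas densities, d_0 the nozzle hole diameter, and t_b the breakup time:

        S(t) = 0.39 \sqrt{\frac{2\,\Delta P}{\rho_l}}\, t,                        t < t_b
        S(t) = 2.95 \left(\frac{\Delta P}{\rho_a}\right)^{1/4} \sqrt{d_0\, t},    t \ge t_b
        t_b  = \frac{28.65\,\rho_l\, d_0}{\sqrt{\rho_a\,\Delta P}}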

  19. CCD high-speed videography system with new concepts and techniques

    NASA Astrophysics Data System (ADS)

    Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

    1997-05-01

    A novel CCD high-speed videography system with new concepts and techniques has recently been developed at Zhejiang University. The system sends a series of short flash pulses to the moving object; all of the parameters, such as the number of flashes, their durations, intervals, intensities, and colors, can be controlled by the computer as needed. The series of moving-object images frozen by the flash pulses, carrying information about the object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be captured, recognized, and processed with dedicated hardware and software. The parameters obtained can be displayed, output as remote-control signals, or written to CD. The highest videography rate is 30,000 images per second, and the shortest image-freezing time is several microseconds. The system has been applied in many fields, including energy, chemistry, medicine, biological engineering, aerodynamics, explosions, multi-phase flow, mechanics, vibration, athletic training, weapons development, and national defense engineering. It can also be used on production lines for online, real-time monitoring and control.

  20. Compressive high speed flow microscopy with motion contrast (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bosworth, Bryan; Stroud, Jasper R.; Tran, Dung N.; Tran, Trac D.; Chin, Sang; Foster, Mark A.

    2016-03-01

    High-speed continuous imaging systems are constrained by analog-to-digital conversion, storage, and transmission. However, real video signals of objects such as microscopic cells and particles require only a few percent or less of the full video bandwidth for high fidelity representation by modern compression algorithms. Compressed Sensing (CS) is a recent influential paradigm in signal processing that builds real-time compression into the acquisition step by computing inner products between the signal of interest and known random waveforms and then applying a nonlinear reconstruction algorithm. Here, we extend the continuous high-rate photonically-enabled compressed sensing (CHiRP-CS) framework to acquire motion contrast video of microscopic flowing objects. We employ chirp processing in optical fiber and high-speed electro-optic modulation to produce ultrashort pulses each with a unique pseudorandom binary sequence (PRBS) spectral pattern with 325 features per pulse at the full laser repetition rate (90 MHz). These PRBS-patterned pulses serve as random structured illumination inside a one-dimensional (1D) spatial disperser. By multiplexing the PRBS patterns with a user-defined repetition period, the difference signal y_i = phi_i (x_i - x_{i-tau}) can be computed optically with balanced detection, where x is the image signal, phi_i is the PRBS pattern, and tau is the repetition period of the patterns. Two-dimensional (2D) image reconstruction via iterative alternating minimization to find the best locally-sparse representation yields an image of the edges in the flow direction, corresponding to the spatial and temporal 1D derivative. This provides both a favorable representation for image segmentation and a sparser representation for many objects that can improve image compression.
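
    The differential measurement described is just an inner product between each pseudorandom pattern and the frame-to-frame difference. A minimal numerical sketch of that acquisition model follows, using synthetic 1D signals; it does not represent the CHiRP-CS optics or its reconstruction solver.

        import numpy as np

        rng = np.random.default_rng(0)
        n_features = 325                      # spectral features per PRBS-patterned pulse
        x_prev = rng.random(n_features)       # line image at time t - tau
        x_curr = x_prev.copy()
        x_curr[100:110] += 0.8                # a small moving object changes a few pixels

        phi = rng.integers(0, 2, size=(64, n_features))   # 64 pseudorandom binary patterns
        y = phi @ (x_curr - x_prev)           # balanced-detection difference measurements
        print(y.shape, np.count_nonzero(x_curr - x_prev))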

  1. Italian High-speed Airplane Engines

    NASA Technical Reports Server (NTRS)

    Bona, C F

    1940-01-01

    This paper presents an account of Italian high-speed engine designs. The tests were performed on the Fiat AS6 engine, and all components of that engine are discussed from cylinders to superchargers as well as the test set-up. The results of the bench tests are given along with the performance of the engines in various races.

  2. Aerodynamic design on high-speed trains

    NASA Astrophysics Data System (ADS)

    Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li

    2016-01-01

    Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes conception design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.

  3. High speed hydrogen/graphite interaction

    NASA Technical Reports Server (NTRS)

    Kelly, A. J.; Hamman, R.; Sharma, O. P.; Harrje, D. T.

    1974-01-01

    Various aspects of a research program on high speed hydrogen/graphite interaction are presented. Major areas discussed are: (1) theoretical predictions of hydrogen/graphite erosion rates; (2) high temperature, nonequilibrium hydrogen flow in a nozzle; and (3) molecular beam studies of hydrogen/graphite erosion.

  4. High-speed fiber grating pressure sensors

    NASA Astrophysics Data System (ADS)

    Udd, Eric; Rodriguez, George; Sandberg, Richard L.

    2014-06-01

    Fiber grating pressure sensors have been used to support pressure measurements associated with burn, deflagration and detonation of energetic materials. This paper provides an overview of this technology and serves as a companion paper to the application of this technology to measuring pressure during high speed impacts.

  5. Aerodynamic design on high-speed trains

    NASA Astrophysics Data System (ADS)

    Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li

    2016-04-01

    Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes conception design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.

  6. High-speed data word monitor

    NASA Technical Reports Server (NTRS)

    Wirth, M. N.

    1975-01-01

    Small, portable, self-contained device provides high-speed display of bit pattern or any selected portion of transmission, can suppress filler patterns so that display is not updated, and can freeze display so that specific event may be observed in detail.

  7. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble

    PubMed Central

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size—a few microns in diameter—and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera were developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation with much lower costs than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  8. High-speed measurement of nozzle swing angle of rocket engine based on monocular vision

    NASA Astrophysics Data System (ADS)

    Qu, Yufu; Yang, Haijuan

    2015-02-01

    A nozzle angle measurement system based on monocular vision is proposed to achieve high-speed, non-contact angle measurement of a rocket engine nozzle. The measurement system consists of two illumination sources, a lens, a target board with spots, a high-speed camera, an image acquisition card, and a PC. The target board is fixed to the end of the rocket engine nozzle, so its image moves with the nozzle swing; this image is captured by the high-speed camera and transferred to the PC through the image acquisition card. A data-processing algorithm is then used to compute the swing angle of the engine nozzle. Experiments show that the accuracy of the swing angle measurement is 0.2° and the measurement rate is up to 500 Hz.
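
    With known spot positions on the target board and a calibrated camera, the board pose, and hence a swing angle, can be recovered from a single view by solving the perspective-n-point problem. The OpenCV sketch below is only a rough illustration of that idea; the spot layout, image coordinates, and camera matrix are placeholders rather than the authors' calibration or algorithm.

        import cv2
        import numpy as np

        # 3-D spot coordinates on the target board (metres, board frame) -- placeholder layout
        object_pts = np.array([[0, 0, 0], [0.05, 0, 0], [0.05, 0.05, 0], [0, 0.05, 0]],
                              dtype=np.float64)
        # Corresponding spot centroids detected in the image (pixels) -- placeholder values
        image_pts = np.array([[312, 240], [402, 238], [404, 330], [314, 332]], dtype=np.float64)
        # Assumed pinhole intrinsics (fx, fy, cx, cy) with no lens distortion
        K = np.array([[1200, 0, 320], [0, 1200, 240], [0, 0, 1]], dtype=np.float64)

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        R, _ = cv2.Rodrigues(rvec)
        swing_deg = np.degrees(np.arctan2(R[2, 0], R[0, 0]))   # one Euler-type angle of the board
        print(ok, swing_deg)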

  9. A DSP Based POD Implementation for High Speed Multimedia Communications

    NASA Astrophysics Data System (ADS)

    Zhang, Chang Nian; Li, Hua; Zhang, Nuannuan; Xie, Jiesheng

    2002-12-01

    In cable network services, audio/video entertainment content should be protected from unauthorized copying, interception, and tampering. The point-of-deployment (POD) security module, proposed by [InlineEquation not available: see fulltext.], allows viewers to receive secure cable services such as premium subscription channels, impulse pay-per-view, and video-on-demand, as well as other interactive services. In this paper, we present a digital signal processor (DSP) (TMS320C6211) based POD implementation for real-time applications, which include the elliptic curve digital signature algorithm (ECDSA), elliptic curve Diffie-Hellman (ECDH) key exchange, the elliptic curve key derivation function (ECKDF), cellular automata (CA) cryptography, the communication processes between POD and Host, and Host authentication. To provide different security levels and different rates of encryption/decryption, a CA-based symmetric key cryptography algorithm is used whose encryption/decryption rate can be up to [InlineEquation not available: see fulltext.]. The experimental results indicate that the DSP-based POD implementation provides high speed and flexibility, and satisfies the requirements of real-time video data transmission.
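
    As a generic illustration of the ECDH step in such a POD/Host handshake, the sketch below uses the Python cryptography package; the curve and KDF choices here are arbitrary examples, not the ones mandated by the POD specification or used in the authors' DSP firmware.

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        # Each side generates an ephemeral EC key pair
        pod_key = ec.generate_private_key(ec.SECP256R1())
        host_key = ec.generate_private_key(ec.SECP256R1())

        # Exchange public keys and derive the same shared secret on both sides
        pod_shared = pod_key.exchange(ec.ECDH(), host_key.public_key())
        host_shared = host_key.exchange(ec.ECDH(), pod_key.public_key())
        assert pod_shared == host_shared

        # Derive a symmetric session key from the shared secret (stand-in for a KDF step)
        session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                           info=b"pod-host-demo").derive(pod_shared)
        print(session_key.hex())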

  10. Propulsion concepts for high speed aircraft

    NASA Technical Reports Server (NTRS)

    Stull, F. D.; Jones, R. A.; Zima, W. P.

    1975-01-01

    A wide variety of potentially useful and effective airbreathing aircraft have been postulated to operate at speeds in excess of Mach 3.0 by NASA and the USAF. These systems include hydrogen-fueled transports of interest for very long ranges and airbreathing launch vehicles which are aircraft-type first stage candidates for future space shuttle systems. Other high speed airbreathing systems for possible future military applications include advanced reconnaissance and fighter/interceptor type aircraft and strategic systems. This paper presents (1) a chronology of Air Force technical activity on future propulsion concepts, (2) a status report on NASA research on scramjet technology for future systems which may require speeds above Mach 5, and (3) a description of a research vehicle by which advanced propulsion technology and other technologies related to high speed can be demonstrated.

  11. Safety issues in high speed machining

    NASA Astrophysics Data System (ADS)

    1994-05-01

    There are several risks related to high-speed milling, but they have not yet been systematically identified or studied. Increased loads caused by high centrifugal forces may create serious hazards: flying tools, or fragments of a tool, carrying high kinetic energy may injure nearby people and damage machines and devices. In the project, the mechanical risks were evaluated, theoretical values for the kinetic energies of rotating tools were calculated, the possible damage from flying objects was determined, and measures to eliminate the risks were considered. The noise levels of the High-Speed Machining center owned by the Helsinki University of Technology (HUT) and the Technical Research Center of Finland (VTT) were measured in practical machining situations, and the results were compared with those obtained after basic preventive measures were taken.
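
    The hazard scales with the kinetic energy of an ejected fragment, E = (1/2) m v^2, where the velocity is roughly the tool tip speed. A quick illustrative calculation; the tool diameter, spindle speed, and fragment mass below are assumed example values, not figures from the study.

        import math

        tool_diameter = 0.050      # m (assumed 50 mm cutter)
        spindle_speed = 30_000     # rev/min (assumed)
        fragment_mass = 0.020      # kg (assumed 20 g insert fragment)

        tip_speed = math.pi * tool_diameter * spindle_speed / 60.0   # m/s
        kinetic_energy = 0.5 * fragment_mass * tip_speed ** 2        # joules
        print(f"tip speed {tip_speed:.0f} m/s, fragment energy {kinetic_energy:.0f} J")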

  12. High Speed Research Program Sonic Fatigue

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A. (Technical Monitor); Beier, Theodor H.; Heaton, Paul

    2005-01-01

    The objective of this sonic fatigue summary is to provide major findings and technical results of studies, initiated in 1994, to assess sonic fatigue behavior of structure that is being considered for the High Speed Civil Transport (HSCT). High Speed Research (HSR) program objectives in the area of sonic fatigue were to predict inlet, exhaust and boundary layer acoustic loads; measure high cycle fatigue data for materials developed during the HSR program; develop advanced sonic fatigue calculation methods to reduce required conservatism in airframe designs; develop damping techniques for sonic fatigue reduction where weight effective; develop wing and fuselage sonic fatigue design requirements; and perform sonic fatigue analyses on HSCT structural concepts to provide guidance to design teams. All goals were partially achieved, but none were completed due to the premature conclusion of the HSR program. A summary of major program findings and recommendations for continued effort are included in the report.

  13. Pulse Detonation Engines for High Speed Flight

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    2002-01-01

    Revolutionary concepts in propulsion are required in order to achieve high-speed cruise capability in the atmosphere and for low cost reliable systems for earth to orbit missions. One of the advanced concepts under study is the air-breathing pulse detonation engine. Additional work remains in order to establish the role and performance of a PDE in flight applications, either as a stand-alone device or as part of a combined cycle system. In this paper, we shall offer a few remarks on some of these remaining issues, i.e., combined cycle systems, nozzles and exhaust systems and thrust per unit frontal area limitations. Currently, an intensive experimental and numerical effort is underway in order to quantify the propulsion performance characteristics of this device. In this paper, we shall highlight our recent efforts to elucidate the propulsion potential of pulse detonation engines and their possible application to high-speed or hypersonic systems.

  14. DAC 22 High Speed Civil Transport Model

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Between tests, NASA research engineer Dave Hahne inspects a tenth-scale model of a supersonic transport in the 30- by 60-Foot Tunnel at NASA Langley Research Center, Hampton, Virginia. The model is being used in support of NASA's High-Speed Research (HSR) program. Langley researchers are applying advanced aerodynamic design methods to develop a wing leading-edge flap system which significantly improves low-speed fuel efficiency and reduces noise generated during takeoff operation. Langley is NASA's lead center for the agency's HSR program, aimed at developing technology to help U.S. industry compete in the rapidly expanding trans-oceanic transport market. A U.S. high-speed civil transport is expected to fly in about the year 2010. As envisioned, it would fly 300 passengers across the Pacific in about four hours at Mach 2.4 (approximately 1,600 mph/2,575 kph) for a modest increase over business class fares.

  15. High-speed tensile test instrument

    NASA Astrophysics Data System (ADS)

    Mott, P. H.; Twigg, J. N.; Roland, D. F.; Schrader, H. S.; Pathak, J. A.; Roland, C. M.

    2007-04-01

    A novel high-speed tensile test instrument is described, capable of measuring the mechanical response of elastomers at strain rates ranging from 10 to 1600 s⁻¹ for strains through failure. The device employs a drop weight that engages levers to stretch a sample on a horizontal track. To improve dynamic equilibrium, a common problem in high speed testing, equal and opposite loading was applied to each end of the sample. Demonstrative results are reported for two elastomers at strain rates to 588 s⁻¹ with maximum strains of 4.3. At the higher strain rates, there is a substantial inertial contribution to the measured force, an effect unaccounted for in prior works using the drop weight technique. The strain rates were essentially constant over most of the strain range and fill a three-decade gap in the data from existing methods.

  16. High-speed massively parallel scanning

    DOEpatents

    Decker, Derek E.

    2010-07-06

    A new technique for recording a series of images of a high-speed event (such as ballistics, explosives, or laser-induced changes in materials) is presented. The technique uses a lenslet array to take image picture elements (pixels) and concentrate the light from each pixel into a spot that is much smaller than the pixel. This array of spots illuminates a detector region (e.g., film, in one embodiment) which is scanned transverse to the light, creating tracks of exposed regions. Each track is a time history of the light intensity for a single pixel. By appropriately configuring the array of concentrated spots with respect to the scanning direction of the detector material, the tracks fit between pixels and can be made long enough to be of interest in several high-speed imaging applications.

  17. High speed printing with polygon scan heads

    NASA Astrophysics Data System (ADS)

    Stutz, Glenn

    2016-03-01

    To reduce, and in many cases eliminate, the costs associated with high-volume printing of consumer and industrial products, this paper investigates and validates the use of the new generation of high-speed pulse-on-demand (POD) lasers in concert with high-speed (HS) polygon scan heads (PSH). The associated costs include consumables such as printing ink and nozzles, provisioning labor, and maintenance and repair expense, as well as the reduction in the number of printing lines made possible by the higher throughput. Applicable targets investigated include direct printing on plastics, printing on paper and cardboard, and printing on labels. Market segments include consumer products (CPG), medical and pharmaceutical products, universal ID (UID), and industrial products. The POD lasers employed operate at UV (355 nm), green (532 nm), and IR (1064 nm) wavelengths within a repetition-rate range of 180 to 250 kHz.

  18. Turbulence modeling for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Chandra, Suresh

    1993-01-01

    The following grant objectives were delineated in the proposal to NASA: to offer course work in computational fluid dynamics (CFD) and related areas to enable mechanical engineering students at North Carolina A&T State University (N.C. A&TSU) to pursue M.S. studies in CFD, and to enable students and faculty to engage in research in high speed compressible flows. Since no CFD-related activity existed at N.C. A&TSU before the start of the NASA grant period, training of students in the CFD area and initiation of research in high speed compressible flows were proposed as the key aspects of the project. To that end, graduate level courses in CFD, boundary layer theory, and fluid dynamics were offered. This effort included initiating a CFD course for graduate students. Also, research work was performed on studying compressibility effects in high speed flows. Specifically, a modified compressible dissipation model, which included a fourth order turbulent Mach number term, was incorporated into the SPARK code and verified for the air-air mixing layer case. The results obtained for this case were compared with a wide variety of experimental data to discern the trends in the mixing layer growth rates with varying convective Mach numbers. Comparison of the predictions of the study with the results of several analytical models was also carried out. The details of the research study are described in the publication entitled 'Compressibility Effects in Modeling Turbulent High Speed Mixing Layers,' which is attached to this report.

  19. Turbulence modeling for high speed compressible flows

    NASA Astrophysics Data System (ADS)

    Chandra, Suresh

    1993-08-01

    The following grant objectives were delineated in the proposal to NASA: to offer course work in computational fluid dynamics (CFD) and related areas to enable mechanical engineering students at North Carolina A&T State University (N.C. A&TSU) to pursue M.S. studies in CFD, and to enable students and faculty to engage in research in high speed compressible flows. Since no CFD-related activity existed at N.C. A&TSU before the start of the NASA grant period, training of students in the CFD area and initiation of research in high speed compressible flows were proposed as the key aspects of the project. To that end, graduate level courses in CFD, boundary layer theory, and fluid dynamics were offered. This effort included initiating a CFD course for graduate students. Also, research work was performed on studying compressibility effects in high speed flows. Specifically, a modified compressible dissipation model, which included a fourth order turbulent Mach number term, was incorporated into the SPARK code and verified for the air-air mixing layer case. The results obtained for this case were compared with a wide variety of experimental data to discern the trends in the mixing layer growth rates with varying convective Mach numbers. Comparison of the predictions of the study with the results of several analytical models was also carried out. The details of the research study are described in the publication entitled 'Compressibility Effects in Modeling Turbulent High Speed Mixing Layers,' which is attached to this report.

  20. Data Capture Technique for High Speed Signaling

    DOEpatents

    Barrett, Wayne Melvin; Chen, Dong; Coteus, Paul William; Gara, Alan Gene; Jackson, Rory; Kopcsay, Gerard Vincent; Nathanson, Ben Jesse; Vranas, Pavlos Michael; Takken, Todd E.

    2008-08-26

    A data capture technique for high speed signaling to allow for optimal sampling of an asynchronous data stream. This technique allows for extremely high data rates and does not require that a clock be sent with the data as is done in source synchronous systems. The present invention also provides a hardware mechanism for automatically adjusting transmission delays for optimal two-bit simultaneous bi-directional (SiBiDi) signaling.