Science.gov

Sample records for high-speed video camera

  1. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and interviews, and (2) on the current availability of cameras of this sort by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and it will, hopefully, be fabricated in the near future. Epoch-making cameras by others in the history of high-speed video camera development are also briefly reviewed.

  2. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in a wide range of fields in science and engineering. Although high-speed camera systems have been improved to high performance, most of their applications are only to obtain high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels with 256-level (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
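    The displacement-adjustment technique is only summarized in the abstract; as an illustration of the general idea (not the authors' algorithm), an integer-pixel offset between two sensor images can be estimated by FFT-based cross-correlation, for example in Python/numpy. The array names in the usage lines are hypothetical.

        import numpy as np

        def displacement(ref, img):
            """Integer-pixel shift of img relative to ref via the cross-correlation
            theorem; the peak of ifft2(F * conj(G)) sits at the relative offset."""
            F = np.fft.fft2(ref - ref.mean())
            G = np.fft.fft2(img - img.mean())
            corr = np.fft.ifft2(F * np.conj(G)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Peaks past the midpoint correspond to negative (wrap-around) shifts.
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

        dy, dx = displacement(sensor_a_image, sensor_b_image)   # hypothetical sensor images
        aligned = np.roll(sensor_b_image, shift=(dy, dx), axis=(0, 1))

    The sub-0.2-pixel accuracy reported in the abstract would require an additional sub-pixel refinement of the correlation peak (e.g., local interpolation), which is omitted from this sketch.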

  3. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby made possible.

  4. HDR ¹⁹²Ir source speed measurements using a high speed video camera

    SciTech Connect

    Fonseca, Gabriel P.; Rubo, Rodrigo A.; Sales, Camila P. de; Verhaegen, Frank

    2015-01-15

    Purpose: The dose delivered with a HDR ¹⁹²Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a ¹⁹²Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which are within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
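    The abstract reports the resulting speed profile but not the extraction step itself; a minimal sketch of how a speed profile could be computed once the source position has been tracked frame by frame (positions in cm, frame rate in Hz) is given below. The numbers in the example are illustrative, not taken from the study.

        import numpy as np

        def speed_profile(positions_cm, frame_rate_hz):
            """Central-difference speed estimate (cm/s) from per-frame positions
            of the source along the catheter."""
            dt = 1.0 / frame_rate_hz
            return np.gradient(np.asarray(positions_cm, dtype=float), dt)

        # Illustrative example: ~33 cm/s average over a 5 cm interdwell distance,
        # then a dwell, sampled at an assumed 1000 frames per second.
        t = np.arange(0.0, 0.2, 1e-3)
        x = np.clip(33.0 * t, 0.0, 5.0)
        v = speed_profile(x, 1000.0)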

  5. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  6. Introducing Contactless Blood Pressure Assessment Using a High Speed Video Camera.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2016-04-01

    Recent studies demonstrated that blood pressure (BP) can be estimated using pulse transit time (PTT). For PTT calculation, a photoplethysmogram (PPG) is usually used to detect a time lag in pulse wave propagation which is correlated with BP. Until now, PTT and PPG were registered using a set of body-worn sensors. In this study a new methodology is introduced allowing contactless registration of PTT and PPG using a high speed camera, resulting in corresponding image-based PTT (iPTT) and image-based PPG (iPPG) generation. The iPTT value can potentially be utilized for blood pressure estimation; however, the extent of correlation between iPTT and BP is unknown. The goal of this preliminary feasibility study was to introduce the methodology for contactless generation of iPPG and iPTT and to make an initial estimation of the extent of correlation between iPTT and BP "in vivo." A short cycling exercise was used to generate BP changes in healthy adult volunteers in three consecutive visits. BP was measured by a verified BP monitor simultaneously with iPTT registration at three exercise points: rest, exercise peak, and recovery. iPPG was simultaneously registered at two body locations during the exercise using a high speed camera at 420 frames per second. iPTT was calculated as the time lag between pulse waves obtained as two iPPGs registered from simultaneous recording of the head and palm areas. The average inter-person correlation between PTT and iPTT was 0.85 ± 0.08. The range of inter-person correlations between PTT and iPTT was from 0.70 to 0.95 (p < 0.05). The average inter-person coefficient of correlation between SBP and iPTT was -0.80 ± 0.12. The range of correlations between systolic BP and iPTT was from 0.632 to 0.960 with p < 0.05 for most of the participants. Preliminary data indicated that a high speed camera can potentially be utilized for unobtrusive contactless monitoring of abrupt blood pressure changes in a variety of settings. The initial prototype system was able to
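    iPTT is defined above as the time lag between the two iPPG waveforms registered from the head and palm. One standard way to estimate such a lag (not necessarily the exact method used in the study) is the peak of the cross-correlation between the two waveforms:

        import numpy as np

        def iptt_seconds(ippg_head, ippg_palm, fps=420.0):
            """Pulse transit time as the lag (s) maximizing the cross-correlation
            of two equally long, simultaneously recorded iPPG waveforms."""
            a = np.asarray(ippg_head, dtype=float) - np.mean(ippg_head)
            b = np.asarray(ippg_palm, dtype=float) - np.mean(ippg_palm)
            corr = np.correlate(b, a, mode="full")
            lag = np.argmax(corr) - (len(a) - 1)   # samples by which the palm lags the head
            return lag / fps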

  7. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  8. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  9. High-speed pulse camera

    NASA Technical Reports Server (NTRS)

    Lawson, J. R.

    1968-01-01

    Miniaturized, 16 mm high speed pulse camera takes spectral photometric photographs upon instantaneous command. The design includes a low-friction, low-inertia film transport, a very thin beryllium shutter driven by a low-inertia stepper motor for minimum actuation time after a pulse command, and a binary encoder.

  10. High Speed Video for Airborne Instrumentation Application

    NASA Technical Reports Server (NTRS)

    Tseng, Ting; Reaves, Matthew; Mauldin, Kendall

    2006-01-01

    A flight-worthy high speed color video system has been developed. Extensive system development and ground and environmental testing has yielded a flight-qualified High Speed Video System (HSVS). This HSVS was initially used on the F-15B #836 for the Lifting Insulating Foam Trajectory (LIFT) project.

  11. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Kondo, Yasushi

    2008-11-01

    Visualization of explosion phenomena is very important and essential for evaluating the performance of explosive effects. The phenomena, however, generate blast waves and fragments from the cases, so we must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in a severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken by the camera with the lens directly coupled to the camera head. It could be confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  12. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  13. Using high-speed video in ballistic experiments with crossbows

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Dofferhoff, Gerard; Visser, Rob

    1997-02-01

    In a short period of two weeks, experiments had to be done for a court case. The order was to investigate the effects of ballpoints shot transorbitally by a crossbow. The use of a high speed video camera turned out to be valuable for detailed observation of the ballpoint during launching and penetration of a gelatine model, and for demonstration of the results in court.

  14. High Speed and Slow Motion: The Technology of Modern High Speed Cameras

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2011-01-01

    The enormous progress in the fields of microsystem technology, microelectronics and computer science has led to the development of powerful high speed cameras. Recently a number of such cameras became available as low cost consumer products which can also be used for the teaching of physics. The technology of high speed cameras is discussed,…

  15. High-speed cameras at Los Alamos

    NASA Astrophysics Data System (ADS)

    Brixner, Berlyn

    1997-05-01

    In 1943, there was no camera with the microsecond resolution needed for research in Atomic Bomb development. We had the Mitchell camera (100 fps), the Fastax (10 000), the Marley (100 000), the drum streak (moving slit image) with 10⁻⁵ s resolution, and electro-optical shutters for 10⁻⁶ s. Julian Mack invented a rotating-mirror camera for 10⁻⁷ s, which was in use by 1944. Small rotating mirror changes secured a resolution of 10⁻⁸ s. Photography of oscilloscope traces soon recorded 10⁻⁶ s resolution, which was later improved to 10⁻⁸ s. Mack also invented two time resolving spectrographs for studying the radiation of the first atomic explosion. Much later, he made a large aperture spectrograph for shock wave spectra. An image dissecting drum camera running at 10⁷ frames per second (fps) was used for studying high velocity jets. Brixner invented a simple streak camera which gave 10⁻⁸ s resolution. Using a moving film camera, an interferometer pressure gauge was developed for measuring shock-front pressures up to 100 000 psi. An existing Bowen 76-lens frame camera was speeded up by our turbine driven mirror to make 1 500 000 fps. Several streak cameras were made with writing arms from 4 1/2 to 40 in. and apertures from f/2.5 to f/20. We made framing cameras with top speeds of 50 000, 1 000 000, 3 500 000, and 14 000 000 fps.

  16. Design of high speed camera based on CMOS technology

    NASA Astrophysics Data System (ADS)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capability of a high speed camera for taking high speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely, CCD and CMOS sensors. A CMOS sensor consumes less power than a CCD sensor and can take images more rapidly. High speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag controls, golf training aids, and bullet direction measurement in the military. The high speed camera system made in this study has the following components: a CMOS image sensor that can take about 500 frames per second at a resolution of 1280 × 1024; an FPGA and DDR2 memory that control the image sensor and save images; a Camera Link module that transmits saved data to a PC; and an RS-422 communication function that enables control of the camera from a PC.

  17. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s⁻¹ and Tracker Video Analysis (Tracker) software. We present empirical data for…

  18. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains videos showing the results of hypervelocity impact. One shows a projectile impact on a Kevlar-wrapped aluminum bottle containing 3000 psi gaseous oxygen. Another video shows animations of a two-stage light gas gun.

  19. High-speed camera characterization of voluntary eye blinking kinematics.

    PubMed

    Kwon, Kyung-Ah; Shipley, Rebecca J; Edirisinghe, Mohan; Ezra, Daniel G; Rose, Geoff; Best, Serena M; Cameron, Ruth E

    2013-08-01

    Blinking is vital to maintain the integrity of the ocular surface, and its characteristics, such as blink duration and speed, can vary significantly depending on the health of the eyes. The blink is so rapid that special techniques are required to characterize it. In this study, a high-speed camera was used to record and characterize voluntary blinking. The blinking motion of 25 healthy volunteers was recorded at 600 frames per second. Master curves for the palpebral aperture and blinking speed were constructed using palpebral aperture versus time data taken from the high-speed camera recordings, which show that one blink can be divided into four phases: closing, closed, early opening and late opening. Analysis of data from the high-speed camera images was used to calculate the palpebral aperture, peak blinking speed, average blinking speed and duration of voluntary blinking and compare it with data generated by other methods previously used to evaluate voluntary blinking. The advantages of the high-speed camera method over the others are discussed, thereby supporting the high potential usefulness of the method in clinical research.

  20. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and personal computer would perform the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
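    The detection step described above (a dip, or negative peak, in the line-scan intensity profile) reduces to locating the minimum of a 1-D array. The flight hardware does this in dedicated analog/digital circuitry, so the fragment below is only a conceptual sketch of the same idea:

        import numpy as np

        def shock_pixel(profile, smooth=5):
            """Pixel index of the deepest dip (shock shadowgraph) in a line-scan
            intensity profile, after a simple moving-average smoothing."""
            p = np.asarray(profile, dtype=float)
            if smooth > 1:
                p = np.convolve(p, np.ones(smooth) / smooth, mode="same")
            return int(np.argmin(p))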

  1. The development of high-speed 100 fps CCD camera

    NASA Astrophysics Data System (ADS)

    Hoffberg, Michael; Laird, Robert; Lenkzsus, Frank; Liu, Chuande; Rodricks, Brian; Gelbart, Asher

    1997-02-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512 × 512 pixel CCD as its sensor, which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized into 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to the PC disk storage. The uncooled CCD can be used either with lenses for visible light imaging or with a phosphor screen for X-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic face plate for high-resolution, high-speed X-ray imaging. The camera is controlled through a custom event-driven user-friendly Windows package. The pixel clock speed can be changed from 1 to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper will describe the electronics, software, and characterizations that have been performed using both visible and X-ray photons.
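    The quoted readout figures are self-consistent; a quick back-of-the-envelope check (assuming the 12-bit samples are stored in 16-bit words, which is an assumption rather than a statement from the abstract):

        pixels_per_frame = 512 * 512      # Thomson CCD
        outputs = 2                       # parallel readout ports
        pixel_rate_hz = 15e6              # per output
        bytes_per_pixel = 2               # 12-bit data in 16-bit words (assumed)

        frame_rate = outputs * pixel_rate_hz / pixels_per_frame             # ~114 frames/s peak
        throughput_mb_s = outputs * pixel_rate_hz * bytes_per_pixel / 1e6   # 60 MB/s
        print(round(frame_rate), throughput_mb_s)

    This matches the stated 60 MB/s and is consistent with the nominal 100 fps operation once readout overheads are allowed for.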

  2. In a Hurry to Work with High-Speed Video at School?

    ERIC Educational Resources Information Center

    Heck, Andre; Uylings, Peter

    2010-01-01

    Casio Computer Co., Ltd., brought high-speed video to the consumer level in 2008 with the release of the EXILIM Pro EX-F1 and the EX-FH20 digital cameras. The EX-F1 point-and-shoot camera can shoot up to 60 six-megapixel photos per second and capture movies at up to 1200 frames per second. All this, for a price of about US $1000 at the time of…

  3. High Speed Video Applications In The Pharmaceutical Industry

    NASA Astrophysics Data System (ADS)

    Stapley, David

    1985-02-01

    The pursuit of quality is essential in the development and production of drugs. The pursuit of excellence is relentless, a never-ending search. In the pharmaceutical industry, we all know and apply wide-ranging techniques to assure quality production. We all know that in reality none of these techniques is perfect for all situations. We have all experienced the damaged foil, blister or tube, the missing leaflet, the 'hard to read' batch code. We are all aware of the need to supplement the traditional techniques of fault finding. This paper shows how high speed video systems can be applied to fully automated filling and packaging operations as a tool to aid the company's drive for high quality and productivity. The range of products involved totals some 350 in approximately 3,000 pack variants, encompassing creams, ointments, lotions, capsules, tablets, parenteral and sterile antibiotics. Pharmaceutical production demands diligence at all stages, with optimum use of the techniques offered by the latest technology. Figure 1 shows typical stages of pharmaceutical production in which quality must be assured, and highlights those stages where the use of high speed video systems has proved of value to date. The use of high speed video systems begins with the very first use of machine and materials: commissioning and validation (the term used for determining that a process is capable of consistently producing the requisite quality), and continues to support in-process monitoring throughout the life of the plant. The activity of validation in the packaging environment is particularly in need of a tool to see the nature of high speed faults, no matter how infrequently they occur, so that informed changes can be made precisely and rapidly. The prime use of this tool is to ensure that machines are less sensitive to minor variations in component characteristics.

  4. In a Hurry to Work with High-Speed Video at School?

    NASA Astrophysics Data System (ADS)

    Heck, André; Uylings, Peter

    2010-03-01

    Casio Computer Co., Ltd., brought high-speed video to the consumer level in 2008 with the release of the EXILIM Pro EX-F1 and the EX-FH20 digital cameras. The EX-F1 point-and-shoot camera can shoot up to 60 six-megapixel photos per second and capture movies at up to 1200 frames per second. All this, for a price of about US $1000 at the time of introduction and with an ease of operation that allows high school students to start working with the camera within 10 minutes. The EX-FH20 is a more compact, more user-friendly, and cheaper high-speed camera that can still shoot up to 40 photos per second and capture up to 1000 fps. Yearly, new camera models appear, and prices have gone down to about US $250-300 for a decent high-speed camera. For more details we refer to Casio's website.

  5. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    ERIC Educational Resources Information Center

    Lee, Victor R.

    2015-01-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…

  6. High-speed optical shutter coupled to fast-readout CCD camera

    NASA Astrophysics Data System (ADS)

    Yates, George J.; Pena, Claudine R.; McDonald, Thomas E., Jr.; Gallegos, Robert A.; Numkena, Dustin M.; Turko, Bojan T.; Ziska, George; Millaud, Jacques E.; Diaz, Rick; Buckley, John; Anthony, Glen; Araki, Takae; Larson, Eric D.

    1999-04-01

    A high frame rate optically shuttered CCD camera for radiometric imaging of transient optical phenomena has been designed and several prototypes fabricated, which are now in the evaluation phase. The camera design incorporates stripline-geometry image intensifiers for ultrafast image shutters capable of 200 ps exposures. The intensifiers are fiber-optically coupled to a multiport CCD capable of 75 MHz pixel clocking to achieve a 4 kHz frame rate for 512 × 512 pixels from simultaneous readout of 16 individual segments of the CCD array. The intensifier, a Philips XX1412MH/E03, is generically a Generation II proximity-focused microchannel plate intensifier (MCPII) redesigned for high speed gating by Los Alamos National Laboratory and manufactured by Philips Components. The CCD is a Reticon HSO512 split storage with bi-directional vertical readout architecture. The camera main frame is designed utilizing a multilayer motherboard for transporting CCD video signals and clocks via embedded stripline buses designed for 100 MHz operation. The MCPII gate duration and gain variables are controlled and measured in real time and updated for data logging each frame, with 10-bit resolution, selectable either locally or by computer. The camera provides both analog and 10-bit digital video. The camera's architecture, salient design characteristics, and current test data depicting resolution, dynamic range, shutter sequences, and image reconstruction will be presented and discussed.

  7. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  8. Motion Analysis Of An Object Onto Fine Plastic Beads Using High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Sato, Minoru

    2010-07-01

    Fine spherical polystyrene beads (NaRiKa, D20-1406-01, industrial materials of styrene form) are useful for frictionless demonstrations of dynamics and kinematics. Sawamoto et al. have developed a method of demonstration using the plastic beads on a glass board. These fine beads (average diameter 280 μm, standard deviation 56 μm) function as ball bearings to reduce the friction between a moving object, a glass Petri dish, and the surface of the glass board. The charged beads stick onto the glass board by static electricity and arrange themselves at intervals. The movement characteristics of a Petri dish moving on the fine polystyrene beads adhering to the glass board are shown by video analysis using a USB camera and a high-speed camera (CASIO, EX-F1). The movement of the Petri dish on the fine polystyrene beads on the glass board shows good linearity, but the friction of the beads is not negligibly small. The high-speed video showed that only a small number of beads behind the bottom of the Petri dish supported the Petri dish. The friction associated with the small number of beads supporting the Petri dish corresponds to a coefficient of about 0.14.

  9. High-speed video recording with the TDAS

    NASA Astrophysics Data System (ADS)

    Liu, Daniel W.; Griesheimer, Eric D.; Kesler, Lynn O.

    1990-08-01

    The Tracker Data Acquisition System (TDAS) is a system architecture for a high speed data recording and analysis system. The device utilizes dual Direct Memory Access (DMA), parallel Small Computer System Interface (SCSI) channels and multiple SCSI hard drives. Video-rate data capture and storage is accomplished on 16-bit digital data at video rates up to 15 Megahertz. The average data rate is approximately 1 Megabyte per second to the current hard disk drives, with instantaneous rates up to 5 Megabytes per second. Message protocol enables symbology and frame data to be stored concurrently with the windowed image data. Dual parallel image buffers store 512 Kilobytes of raw image data for each frame and pass windowed data to the storage drives via the SCSI interfaces. Microcomputer control of the DMA, Counter Input/Output, Serial Communications Controller and FIFOs is accomplished with a 16-bit processor which efficiently stores the video and ancillary data. Off-line storage is accomplished on 60 Megabyte streaming tape units for image and data dumps. Current applications include real-time multimode tracker performance recording as well as statistical post processing of system parameters. Data retrieval is driven by a separate microcomputer, providing laboratory frame-by-frame analysis of the video images and symbology. The TDAS can support 80 Megabytes of on-line storage presently, but can be simply expanded to 400 Megabytes. Phase 2 of the TDAS will include real-time playback of video images to recreate recorded scenarios. This paper describes the system architecture and implementation of the Tracker Data Acquisition System (TDAS), with current applications.

  10. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10⁻⁵ s is used with a pixelated phase-mask of 1024 × 1024 elements. The light source used is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface it is possible to make the reference and object beams, with orthogonal polarization states, coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/sec.
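    A pixelated phase-mask sensor interleaves four phase-shifted interferograms (nominally 0°, 90°, 180° and 270°) over the pixels of each 2 × 2 super-pixel, so the phase follows from the standard four-step formula. The sketch below assumes one common mask layout; the actual layout of this camera may differ.

        import numpy as np

        def phase_from_pixelated_mask(frame, wavelength_nm=532.0):
            """Four-step phase (and optical path difference) from a raw frame,
            assuming the 2x2 mask layout  I0  I90 / I270 I180."""
            f = np.asarray(frame, dtype=float)
            i0, i90 = f[0::2, 0::2], f[0::2, 1::2]
            i270, i180 = f[1::2, 0::2], f[1::2, 1::2]
            phase = np.arctan2(i90 - i270, i0 - i180)       # radians, wrapped
            opd = phase * wavelength_nm / (2.0 * np.pi)     # nm, single pass
            return phase, opd

    For a Fizeau cavity the light traverses the gap twice, so height changes correspond to half the optical path difference, and phase unwrapping between frames is needed for large dynamic changes.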

  11. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  12. Synthetic streak images (x-t diagrams) from high-speed digital video records

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2013-11-01

    Modern digital video cameras have entirely replaced the older photographic drum and rotating-mirror cameras for recording high-speed physics phenomena. They are superior in almost every regard, except that, at speeds approaching one million frames/s, sensor segmentation results in a severely reduced frame size, especially in height. However, if the principal direction of subject motion is arranged to be along the frame length, a simple Matlab code can extract a row of pixels from each frame and stack them to produce a pseudo-streak image or x-t diagram. Such a 2-D image can convey the essence of the large volume of information contained in a high-speed video sequence, and can be the basis for the extraction of quantitative velocity data. Examples include streak shadowgrams of explosions and gunshots, streak schlieren images of supersonic cavity-flow oscillations, and direct streak images of shock-wave motion in polyurea samples struck by gas-gun projectiles, from which the shock Hugoniot curve of the polymer is measured. This approach is especially useful, since commercial streak cameras remain very expensive and rooted in 20th-century technology.
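    The row-extraction step is described above as a simple Matlab code; an equivalent Python/numpy sketch (frames assumed to be an array of grayscale frames with the motion aligned along the row direction) is:

        import numpy as np

        def synthetic_streak(frames, row):
            """x-t diagram: stack one pixel row (constant y) from every frame.
            frames: array-like of shape (num_frames, height, width)."""
            return np.stack([np.asarray(f)[row, :] for f in frames], axis=0)

    A feature moving along the row then appears as a sloped trace in the x-t image, and its velocity follows from slope × pixel scale × frame rate, which is the basis of the quantitative velocity extraction mentioned above.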

  13. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, and hence converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic to stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The code can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not comparable to real time, is faster than the other available 2D to 3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, hence it directly gives the 3D video output without human intervention and, with the above-mentioned features, becomes an ideal choice for efficient monoscopic to stereoscopic video conversion.
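    The depth cue described above (the y coordinate of the bottom-most pixel of each foreground object) can be illustrated with a short sketch. The segmentation that produces the labelled foreground objects and the depth-image-based rendering that would follow are omitted, and the linear depth mapping used here is only an assumption, not the paper's exact formula.

        import numpy as np

        def depth_from_bottom_pixels(labels, height):
            """Per-object depth map (0 = near, 255 = far) from the y coordinate of
            each labelled foreground object's bottom-most pixel."""
            depth = np.zeros(labels.shape, dtype=np.uint8)
            for obj_id in np.unique(labels):
                if obj_id == 0:                      # 0 = background
                    continue
                ys, _ = np.nonzero(labels == obj_id)
                bottom = ys.max()                    # larger y = lower in the frame = nearer
                depth[labels == obj_id] = np.uint8(255 * (1.0 - bottom / (height - 1)))
            return depth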

  14. Development of a High Speed Camera Network to Monitor and Study Lightning (Project RAMMER)

    NASA Astrophysics Data System (ADS)

    Saraiva, A. V.; Pinto, O.; Santos, H. H.; Saba, M. M.

    2010-12-01

    This work proposes the development and applications of a network of high speed cameras for observation and study of lightning flashes. Four high-speed cameras are being acquired to be part of the RAMMER network. They are capable of recording high-resolution videos of up to 1632 × 1200 pixels at 1000 frames per second. A robust system is being assembled to ensure the safe operation of the cameras in adverse weather conditions and enable the recording of a large number of lightning flashes per storm, larger than the values reported to date. As the amount of physical memory needed to record only 1 second of data is something like 3 - 4 GBytes, there is no way to make long recordings of thunderstorms, so a triggering system was conceived to address this problem and record 2 seconds of data automatically for each lightning flash. The triggering system is an optical/electromagnetic system that has been under test since September 2010, and the whole system is still being tested. The lightning information from the video recordings will be correlated with data from the sensors of the Brazilian Lightning Detection Network (BrasilDAT), from a network of fast electric field antennas, slow electric field antennas and field mills, as well as with data from the LMA (Lightning Mapping Array) to be installed in 2011 in the cities of Sao Paulo and Sao Jose dos Campos. The following objectives are envisaged: a) make the first three-dimensional reconstructions of the lightning channel with high speed cameras and verify their dependence on the physical conditions associated with each storm; b) observe almost all CG lightning flashes of a single storm cloud in order to compare the physical characteristics of the CG lightning flashes for different storms and their dependence on the physical conditions associated with each storm; c) evaluate the performance of the new sensors of the BrasilDAT network in different localities and simultaneously. The schematics of the sensors will be shown here, with

  15. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an MHub synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the MHub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2

  16. High-speed radiometric imaging with a gated, intensified, digitally controlled camera

    NASA Astrophysics Data System (ADS)

    Ross, Charles C.; Sturz, Richard A.

    1997-05-01

    The development of an advanced instrument for real-time radiometric imaging of high-speed events is described. The Intensified Digitally-Controlled Gated (IDG) camera is a microprocessor-controlled instrument based on an intensified CCD that is specifically designed to provide radiometric optical data. The IDG supports a variety of camera-synchronous and camera-asynchronous imaging tasks in both passive imaging and active laser range-gated applications. It features both automatic and manual modes of operation, digital precision and repeatability, and ease of use. The IDG produces radiometric imagery by digitally controlling the instrument's optical gain and exposure duration, and by encoding and annotating the parameters necessary for radiometric analysis onto the resultant video signal. Additional inputs, such as date, time, GPS, IRIG-B timing, and other data can also be encoded and annotated. The IDG optical sensitivity can be readily calibrated, with calibration data tables stored in the camera's nonvolatile flash memory. The microprocessor then uses this data to provide a linear, calibrated output. The IDG possesses both synchronous and asynchronous imaging modes in order to allow internal or external control of exposure, timing, and direct interface to external equipment such as event triggers and frame grabbers. Support for laser range-gating is implemented by providing precise asynchronous CCD operation and nanosecond resolution of the intensifier photocathode gate duration and timing. Innovative methods used to control the CCD for asynchronous image capture, as well as other sensor and system considerations relevant to high-speed imaging are discussed in this paper.

  17. The Calibration of High-Speed Camera Imaging System for ELMs Observation on EAST Tokamak

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Zhong, Fangchuan; Hu, Liqun; Yang, Jianhua; Yang, Zhendong; Gan, Kaifu; Zhang, Bin; East Team

    2016-09-01

    A tangential fast visible camera has been set up in the EAST tokamak for the study of edge MHD instabilities such as ELMs. To determine 3-D information from the CCD images, Tsai's two-stage technique was utilized to calibrate the high-speed camera imaging system for ELM study. By using tiles of the passive stabilizers in the tokamak device as the calibration pattern, the transformation parameters from a 3-D world coordinate system to a 2-D image coordinate system were obtained, including the rotation matrix, the translation vector, the focal length and the lens distortion. The calibration errors were estimated and the results indicate the reliability of the method used for the camera imaging system. Through the calibration, information about ELM filaments, such as positions and velocities, was obtained from images of H-mode CCD videos. Supported by the National Natural Science Foundation of China (No. 11275047) and the National Magnetic Confinement Fusion Science Program of China (No. 2013GB102000).
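    Tsai's two-stage method itself is not reproduced in the abstract; what it recovers is the parameter set of a pinhole-plus-distortion camera model. A minimal sketch of that forward model (projecting a point from the tokamak frame into pixel coordinates; variable names and the single radial distortion term are illustrative assumptions) is:

        import numpy as np

        def project(world_point, R, t, f_px, c, k1=0.0):
            """Project a 3-D point (world/tokamak frame) to pixel coordinates using
            rotation R (3x3), translation t (3,), focal length f_px (pixels),
            principal point c = (cx, cy) and first-order radial distortion k1."""
            Xc = R @ np.asarray(world_point, dtype=float) + np.asarray(t, dtype=float)
            x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]                # ideal image-plane coordinates
            r2 = x * x + y * y
            x, y = x * (1.0 + k1 * r2), y * (1.0 + k1 * r2)    # radial lens distortion
            return np.array([f_px * x + c[0], f_px * y + c[1]])

    Calibration then amounts to choosing R, t, f_px and k1 so that the known tile corners project onto their measured image positions.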

  18. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  19. Development of High Speed Digital Camera: EXILIM EX-F1

    NASA Astrophysics Data System (ADS)

    Nojima, Osamu

    The EX-F1 is a high speed digital camera featuring a revolutionary improvement in burst shooting speed that is expected to create entirely new markets. This model incorporates a high speed CMOS sensor and a high speed LSI processor. With this model, CASIO has achieved an ultra-high-speed 60 frames per second (fps) burst rate for still images, together with 1,200 fps high speed movies that capture movements which cannot even be seen by the human eye. Moreover, this model can record movies in full high definition. After its launch into the market, it received many high appraisals as an innovative camera. We will introduce the concept, features and technologies of the EX-F1.

  20. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for the evaluation of the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between the cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
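    In its simplest two-camera form, the visual triangulation amounts to intersecting the two viewing rays in a least-squares sense. A hedged sketch of that step (camera positions and unit viewing directions toward the return stroke assumed already known from the calibration; this is not necessarily the authors' exact procedure) is:

        import numpy as np

        def triangulate(p1, d1, p2, d2):
            """Point minimizing the summed squared distance to two viewing rays
            p1 + s*d1 and p2 + t*d2 (p: camera positions, d: viewing directions)."""
            d1 = np.asarray(d1, dtype=float) / np.linalg.norm(d1)
            d2 = np.asarray(d2, dtype=float) / np.linalg.norm(d2)
            A1 = np.eye(3) - np.outer(d1, d1)   # projector onto plane normal to d1
            A2 = np.eye(3) - np.outer(d2, d2)
            b = A1 @ np.asarray(p1, dtype=float) + A2 @ np.asarray(p2, dtype=float)
            return np.linalg.solve(A1 + A2, b)  # fails only if the rays are parallel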

  1. High speed single charge coupled device Cranz-Schardin camera

    NASA Astrophysics Data System (ADS)

    Deblock, Y.; Ducloux, O.; Derbesse, L.; Merlen, A.; Pernod, P.

    2007-03-01

    This article describes an ultrahigh speed visualization system based on a miniaturization of the Cranz-Schardin principle. It uses a set of high power light emitting diodes (LEDs) (Golden Dragon) as the light source and a highly sensitive charge coupled device (CCD) camera for reception. Each LED is fired in sequence and images the refraction index variation between two relay lenses, on a partial region of a CCD image sensor. The originality of this system consists in achieving several images on a single CCD during a frame time. The number of images is 4. The time interval between successive firings determines the speed of the imaging system. This time lies in the range from 100 ns to 10 μs. The light pulse duration lies in the range from 100 ns to 10 μs. The principle and the optical and electronic parts of such a system are described. As an example, some images of acoustic waves propagating in water are presented.

  2. New measuring concepts using integrated online analysis of color and monochrome digital high-speed camera sequences

    NASA Astrophysics Data System (ADS)

    Renz, Harald

    1997-05-01

    High speed sequences allow a subjective assessment of very fast processes and serve as an important basis for the quantitative analysis of movements. Computer systems help to acquire, handle, display and store digital image sequences as well as to perform measurement tasks automatically. High speed cameras have been used for several years for safety tests, material testing or production optimization. To achieve the very high speed of 1000 or more images per second, mainly 16 mm film cameras have been used, which could provide excellent image resolution and the required time resolution. But up to now, most results have been judged only by viewing. For some special applications, like safety tests using crash or high-g sled tests in the automobile industry, image analysis techniques have been used to also measure the characteristics of moving objects in the images. High speed films, shot during the short impact, allow judgement of the dynamic scene. Additionally they serve as an important basis for the quantitative analysis of the very fast movements. Thus exact values of the velocity and acceleration that the dummies or vehicles are exposed to can be derived. For analysis of the sequences, the positions of signalized points -- mostly markers, which are fixed by the test engineers before a test -- have to be measured frame by frame. The trajectories show the temporal sequence of the test objects and are the basis for calibrated diagrams of distance, velocity and acceleration. Today, 16 mm film cameras are being replaced more and more by electronic high speed cameras. The development of high-speed recording systems is very far advanced, and the prices of these systems are increasingly comparable to those of traditional film cameras. The resolution has also been increased greatly. The new cameras are 'crashproof' and can be used for similar tasks as the 16 mm film cameras at similar sizes. High speed video cameras now offer an easy setup and direct access to
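    Once the marker positions have been measured frame by frame and calibrated to physical units, the velocity and acceleration diagrams mentioned above follow from numerical differentiation; a minimal sketch (positions in metres, frame rate in Hz; variable names are illustrative) is:

        import numpy as np

        def kinematics(positions_m, frame_rate_hz):
            """Velocity and acceleration of a tracked marker from per-frame positions
            (array of shape (num_frames, 2) for x/y in metres)."""
            dt = 1.0 / frame_rate_hz
            p = np.asarray(positions_m, dtype=float)
            v = np.gradient(p, dt, axis=0)   # m/s
            a = np.gradient(v, dt, axis=0)   # m/s^2
            return v, a

    In practice the tracked positions are usually low-pass filtered before differentiating, since differentiation amplifies measurement noise.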

  3. Investigation of a Plasma Ball using a High Speed Camera

    NASA Astrophysics Data System (ADS)

    Laird, James; Zweben, Stewart; Raitses, Yevgeny; Zwicker, Andrew; Kaganovich, Igor

    2008-11-01

    The physics of how a plasma ball works is not clearly understood. A plasma ball is a commercial "toy" in which a center electrode is charged to a high voltage and lightning-like discharges fill the ball with many plasma filaments. The ball uses high voltage applied on the center electrode (~5 kV) which is covered with glass and capacitively coupled to the plasma filaments. This voltage oscillates at a frequency of ~26 kHz. A Nebula plasma ball from Edmund Scientific was filmed with a Phantom v7.3 camera, which can operate at speeds up to 150,000 frames per second (fps) with a limit of >=2 μs exposure per frame. At 100,000 fps the filaments were only visible for ~5 μs every ~40 μs. When the plasma ball is first switched on, the filaments formed only after ~800 μs and initially had a much larger diameter with more chaotic behavior than when the ball reached its final plasma filament state at ~30 ms. Measurements are also being made of the final filament diameter, the speed of the filament propagation, and the effect of thermal gradients on the filament density. An attempt will be made to explain these results from plasma theory and movies of these filaments will be shown. Possible theoretical models include streamer-like formation, thermal condensation instability, and dielectric barrier discharge instability.

  4. High Speed Intensified Video Observations of TLEs in Support of PhOCAL

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.; Nelson, Thomas E.; Cummer, Steven A.; Lang, Timothy; Miller, Steven; Beavis, Nick; Yue, Jia; Samaras, Tim; Warner, Tom A.

    2013-01-01

    The third observing season of PhOCAL (Physical Origins of Coupling to the upper Atmosphere by Lightning) was conducted over the U.S. High Plains during the late spring and summer of 2013. The goal was to capture, using an intensified high-speed camera, a transient luminous event (TLE), especially a sprite, as well as its parent cloud-to-ground (SP+CG) lightning discharge, preferably within the domain of a 3-D lightning mapping array (LMA). The co-capture of a sprite and its SP+CG was achieved within useful range of an interferometer operating near Rapid City. Other high-speed sprite video sequences were captured above the West Texas LMA. On several occasions the large mesoscale convective complexes (MCSs) producing the TLE-class lightning were also generating vertically propagating convectively generated gravity waves (CGGWs) at the mesopause, which were easily visible using NIR-sensitive color cameras. These were captured concurrently with sprites. These observations were follow-ons to a case on 15 April 2012 in which CGGWs were also imaged by the new Day/Night Band on the Suomi NPP satellite system. The relationship between the CGGWs and sprite initiation is being investigated. The past year was notable for a large number of elve+halo+sprite sequences generated by the same parent CG. On several occasions there appear to be prominent banded modulations of the elves' luminosity imaged at >3000 ips. These stripes appear coincident with the banded CGGW structure, and presumably its density variations. Several elves and a sprite from negative CGs were also noted. New color imaging systems have been tested and found capable of capturing sprites. Two cases of sprites with an aurora as a backdrop were also recorded. High speed imaging was also provided in support of the UPLIGHTS program near Rapid City, SD and the USAFA SPRITES II airborne campaign over the Great Plains.

  5. High-Speed Video Analysis in a Conceptual Physics Class

    ERIC Educational Resources Information Center

    Desbien, Dwain M.

    2011-01-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  6. High-Speed Color Video System For Data Acquisition At 200 Fields Per Second

    NASA Astrophysics Data System (ADS)

    Holzapfel, C.

    1982-02-01

    Nac Incorporated has recently introduced a new high speed color video system which employs a standard VHS color video cassette. Playback can be accomplished on either the HSV-200 or on a standard VHS video recorder/playback unit, such as manufactured by JVC or Panasonic.

  7. The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop

    ERIC Educational Resources Information Center

    Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye

    2012-01-01

    We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…

  8. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…

  9. Digital synchroballistic schlieren camera for high-speed photography of bullets and rocket sleds

    NASA Astrophysics Data System (ADS)

    Buckner, Benjamin D.; L'Esperance, Drew

    2013-08-01

    A high-speed digital streak camera designed for simultaneous high-resolution color photography and focusing schlieren imaging is described. The camera uses a computer-controlled galvanometer scanner to achieve synchroballistic imaging through a narrow slit. Full color 20 megapixel images of a rocket sled moving at 480 m/s and of projectiles fired at around 400 m/s were captured, with high-resolution schlieren imaging in the latter cases, using conventional photographic flash illumination. The streak camera can achieve a line rate for streak imaging of up to 2.4 million lines/s.
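    For synchroballistic (streak) imaging, the slit readout must advance one line in the time the projected image moves one line. A quick feasibility check under an assumed imaging scale (the 0.5 mm-of-object-per-line figure below is illustrative, not from the paper):

```python
def required_line_rate(object_speed_m_s, object_size_per_line_m):
    """Line rate (lines/s) at which the image advances exactly one line per readout line."""
    return object_speed_m_s / object_size_per_line_m

rate = required_line_rate(480.0, 0.5e-3)          # 480 m/s sled, 0.5 mm of object per line
print(f"{rate:,.0f} lines/s")                     # 960,000 lines/s
print("within the quoted 2.4 Mlines/s:", rate <= 2.4e6)
```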

  10. Perfect Optical Compensator With 1:1 Shutter Ratio Used For High Speed Camera

    NASA Astrophysics Data System (ADS)

    Zhihong, Rong

    1983-03-01

    An optical compensator for use in a high-speed camera is described, covering the method of compensation, an analysis of imaging quality, and experimental results. The compensator consists of pairs of parallel mirrors. It can perform perfect compensation even at a 1:1 shutter ratio. Using this compensator, a high-speed camera can be operated with no shutter and can obtain the same image sharpness as that of an intermittent camera. The advantages of this compensator are summarized as follows: (1) while compensating, the aberration correction of the objective is not degraded; (2) there is no displacement or defocusing between the scanning image and the film at the frame center during compensation, and increasing the exposure angle does not reduce the resolving power; (3) the compensator can also be used in a projector in place of the intermittent mechanism to achieve continuous (non-intermittent) projection without a shutter.

  11. High-speed two-camera imaging pyrometer for mapping fireball temperatures.

    PubMed

    Densmore, John M; Homan, Barrie E; Biss, Matthew M; McNesby, Kevin L

    2011-11-20

    A high-speed imaging pyrometer was developed to investigate the behavior of flames and explosive events. The instrument consists of two monochrome high-speed Phantom v7.3 cameras made by Vision Research Inc., arranged so that one lens assembly collects light for both cameras. The cameras are filtered at 700 or 900 nm with a 10 nm bandpass. The high irradiance produced by blackbody emission, combined with a variable shutter time and f-stop, produces properly exposed images. The wavelengths were chosen with the expected temperatures in mind, and also to avoid any molecular or atomic gas-phase emission. Temperatures of exploded TNT charges measured using this pyrometer are presented. PMID:22108886
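    The essence of such a two-band pyrometer is ratio (two-color) pyrometry: under a graybody assumption, the 700/900 nm intensity ratio fixes the temperature. A minimal sketch using the Wien approximation to Planck's law is below; the real instrument calibration (filter bandwidths, sensor response, emissivity) is omitted, so this shows only the underlying formula.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def ratio_temperature(i1, i2, lam1=700e-9, lam2=900e-9):
    """Graybody temperature (K) from the intensity ratio i1/i2 at two wavelengths,
    using the Wien approximation (equal emissivity assumed in both bands)."""
    r = np.asarray(i1, dtype=float) / np.asarray(i2, dtype=float)
    c2 = H * C / KB                              # second radiation constant, ~1.44e-2 m K
    return c2 * (1 / lam1 - 1 / lam2) / (5 * np.log(lam2 / lam1) - np.log(r))

# Sanity check with synthetic Wien-law intensities at 2500 K:
def wien(lam, T): return lam**-5 * np.exp(-H * C / (lam * KB * T))
print(ratio_temperature(wien(700e-9, 2500.0), wien(900e-9, 2500.0)))   # ~2500
```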

  12. Correlated High-speed Video and Multi-frequency Electromagnetic Observations of Lightning

    NASA Astrophysics Data System (ADS)

    Stolzenburg, M.; Marshall, T. C.; Warner, T. A.; Orville, R. E.; Betz, H.; Gebauer, R.; Karunarathne, S.; Vickers, L.

    2010-12-01

    In July 2010, time-correlated data for ten natural cloud-to-ground lightning flashes were obtained near Kennedy Space Center, Florida, with two high-speed video cameras, five flat-plate “fast” antennas, a seven-station LINET, and the nine-station LDAR2 system. The optical images were obtained at 54000 and 7200 frames per s, while the fast electric field changes were sampled at 1 MHz with antenna decay time of 1 s. The LINET system that we deployed for Jun-Aug 2010 uses time-of-arrival of the magnetic field change in the VLF/LF (5-200 kHz) to detect and locate in-cloud and ground strokes during lightning flashes. At KSC, the LDAR2 (also called 4DLSS) lightning mapping system detects and locates impulsive radio sources in the VHF (60-66 MHz). In this presentation we will show the available data from these various sensors during leader development between cloud and ground before first and subsequent return strokes. Apparent failed downward leaders, upward leaders, and K-changes are also visible in some of the data, although all the lightning details are not present in the video images because the flashes were 20-30 km distant and occurred at 1400-1600 Local Time. We will also discuss the implications for lightning propagation revealed within this set of observations.

  13. Full-field dynamic deformation and strain measurements using high-speed digital cameras

    NASA Astrophysics Data System (ADS)

    Schmidt, Timothy E.; Tyson, John; Galanulis, Konstantin; Revilock, Duane M.; Melis, Matthew E.

    2005-03-01

    Digital cameras are rapidly supplanting film, even for very high speed and ultra high-speed applications. The benefits of these cameras, particularly CMOS versions, are well appreciated. This paper describes how a pair of synchronized digital high-speed cameras can provide full-field dynamic deformation, shape and strain information, through a process known as 3D image correlation photogrammetry. The data is equivalent to thousands of non-contact x-y-z extensometers and strain rosettes, as well as instant non-contact CMM shape measurement. A typical data acquisition rate is 27,000 frames per second, with displacement accuracy on the order of 25-50 microns, and strain accuracy of 250-500 microstrain. High-speed 3D image correlation is being used extensively at the NASA Glenn Ballistic Impact Research Lab, in support of Return to Flight activities. This leading edge work is playing an important role in validating and iterating LS-DYNA models of foam impact on reinforced carbon-carbon, including orbiter wing panel tests. The technique has also been applied to air blast effect studies and Kevlar ballistic impact testing. In these cases, full-field and time history analysis revealed the complexity of the dynamic buckling, including multiple lobes of out-of-plane and in-plane displacements, strain maxima shifts, and damping over time.

  14. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role that the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  15. High speed video analysis study of elastic and inelastic collisions

    NASA Astrophysics Data System (ADS)

    Baker, Andrew; Beckey, Jacob; Aravind, Vasudeva; Clarion Team

    We study inelastic and elastic collisions with high-frame-rate video capture to examine the process of deformation and other energy transformations during collision. Snapshots are acquired before and after collision, and the dynamics of the collision are analyzed using Tracker software. By observing the rapid changes (over a few milliseconds) and slower changes (over a few seconds) in momentum and kinetic energy during the collision, we study the loss of momentum and kinetic energy over time. Using these data, it may be possible to design experiments that reduce the error involved, helping students build better and more robust models to understand the physical world. We thank the Clarion University undergraduate student grant for financial support of this project.
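    A minimal post-processing sketch of the momentum and kinetic-energy bookkeeping described above, assuming position-time data exported from Tracker; the masses, trajectories, and collision time in the example are made up for illustration:

```python
import numpy as np

def momentum_and_ke(times, x1, x2, m1, m2):
    """Per-frame total momentum and kinetic energy of two bodies from tracked
    1-D positions (central differences, a common choice for video data)."""
    v1 = np.gradient(x1, times)
    v2 = np.gradient(x2, times)
    p = m1 * v1 + m2 * v2
    ke = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    return p, ke

# Hypothetical tracked data around an inelastic collision at t ~ 0.05 s:
t = np.linspace(0, 0.1, 11)
xa = np.where(t < 0.05, 2.0 * t, 0.1 + 0.5 * (t - 0.05))    # cart a slows after impact
xb = np.where(t < 0.05, 0.2, 0.2 + 1.5 * (t - 0.05))        # cart b starts at rest
p, ke = momentum_and_ke(t, xa, xb, m1=0.5, m2=0.5)          # p ~ constant, ke drops at impact
```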

  16. Dynamics at the Holuhraun eruption based on high speed video data analysis

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2016-04-01

    The 2014/2015 Holuhraun eruption was a gas-rich fissure eruption with high fountains. The magma was transported by a horizontal dyke over a distance of 45 km. On the first day the fountains occurred over a distance of 1.5 km and focused at isolated vents during the following day. Based on video analysis of the fountains we obtained a detailed view of the eruption velocities, the propagation path of magma, communication between vents, and complexities in the magma paths. We collected videos of the Holuhraun eruption with two high-speed cameras and one DSLR camera from 31 August to 4 September 2014, for several hours each day. The fountains at adjacent vents visually seemed to be related on all days. Hence, we calculated the fountain height as a function of time from the video data. All fountains show a pulsating regime with apparent and sporadic alternations from meters to several tens of meters in height. Using a time-dependent cross-correlation approach developed within the FUTUREVOLC project, we compared the pulses in height at adjacent vents and find that in most cases there is a time lag between them. From the calculated time lags and the distances between the correlated vents, we calculate the apparent speed of the magma pulses. The frequencies of the fountains and the rest times between fountains are quite similar, suggesting a connecting and controlling process in the feeder below. At the Holuhraun eruption 2014/2015 (Iceland) we find a significant time shift between the individual pulses of adjacent vents on all days. The mean velocity over all days is 30-40 km/hr, which can be interpreted as a magma flow velocity along the dike at depth. Comparison of the velocities derived from the video data to the magma flow velocity in the dike inferred from seismic data shows very good agreement, implying that surface expressions of pulsating vents provide an insight into the
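    The time-lag cross-correlation step can be sketched as follows. This is a generic implementation, not the FUTUREVOLC code; the sampling rate, vent spacing, and synthetic height series are placeholders chosen so the recovered speed lands near the quoted 30-40 km/hr.

```python
import numpy as np

def lag_and_speed(h1, h2, dt, vent_distance_m):
    """Time lag between two fountain-height series via cross-correlation, plus the
    implied apparent propagation speed. The sign of the lag follows numpy's
    correlation convention; only its magnitude enters the speed."""
    a, b = h1 - np.mean(h1), h2 - np.mean(h2)
    corr = np.correlate(a, b, mode="full")
    lag_s = (np.argmax(corr) - (len(b) - 1)) * dt
    speed = vent_distance_m / abs(lag_s) if lag_s != 0 else float("inf")
    return lag_s, speed

# Placeholder example: smooth pulsating signal sampled at 25 Hz, second vent delayed by 10 s
rng = np.random.default_rng(0)
raw = np.convolve(rng.standard_normal(4000), np.ones(50) / 50, mode="same")
h1, h2 = raw[250:3250], raw[:3000]                      # h2 is h1 delayed by 250 samples
print(lag_and_speed(h1, h2, dt=0.04, vent_distance_m=100.0))   # |lag| ~10 s -> ~10 m/s (~36 km/hr)
```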

  17. Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
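    For context, the row-wise digital binning and the precision metric described above can be sketched as below. This is a schematic reconstruction, not the authors' code; the pixel scale and the centroid-based line locator are assumptions.

```python
import numpy as np

def binned_centroid(image, bin_rows=8):
    """Row-wise digital binning in post-processing: sum groups of `bin_rows` adjacent
    rows, then return the intensity-weighted column centroid of the brightest binned row."""
    h = (image.shape[0] // bin_rows) * bin_rows
    binned = image[:h].reshape(-1, bin_rows, image.shape[1]).sum(axis=1)
    row = binned[np.argmax(binned.sum(axis=1))]
    cols = np.arange(row.size)
    return np.sum(cols * row) / np.sum(row)

def fleet_velocities(frame_pairs, dt_s=65e-6, m_per_pixel=50e-6):
    """Single-shot velocities from (first_exposure, second_exposure) image pairs;
    dt_s is the inter-frame delay quoted in the abstract, m_per_pixel an assumed scale."""
    shifts = np.array([binned_centroid(b) - binned_centroid(a) for a, b in frame_pairs])
    return shifts * m_per_pixel / dt_s

# Precision, as defined in the paper, is the standard deviation of many single shots:
# precision = np.std(fleet_velocities(pairs))
```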

  18. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 micro sec, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.

  19. High-Speed Video Observations of a Natural Lightning Stepped Leader

    NASA Astrophysics Data System (ADS)

    Jordan, D. M.; Hill, J. D.; Uman, M. A.; Yoshida, S.; Kawasaki, Z.

    2010-12-01

    High-speed video images of one branch of a natural negative lightning stepped leader were obtained at a frame rate of 300 kfps (3.33 μs exposure) on June 18th, 2010 at the International Center for Lightning Research and Testing (ICLRT) located on the Camp Blanding Army National Guard Base in north-central Florida. The images were acquired using a 20 mm Nikon lens mounted on a Photron SA1.1 high-speed camera. A total of 225 frames (about 0.75 ms) of the downward stepped leader were captured, followed by 45 frames of the leader channel re-illumination by the return stroke and subsequent decay following the ground attachment of the primary leader channel. Luminous characteristics of dart-stepped leader propagation in triggered lightning obtained by Biagi et al. [2009, 2010] and of long laboratory spark formation [e.g., Bazelyan and Raizer, 1998; Gallimberti et al., 2002] are evident in the frames of the natural lightning stepped leader. Space stems/leaders are imaged in twelve different frames at various distances in front of the descending leader tip, which branches into two distinct components 125 frames after the channel enters the field of view. In each case, the space stem/leader appears to connect to the leader tip above in the subsequent frame, forming a new step. Each connection is associated with significant isolated brightening of the channel at the connection point followed by typically three or four frames of upward propagating re-illumination of the existing leader channel. In total, at least 80 individual steps were imaged.

  20. CCD video camera and airborne applications

    NASA Astrophysics Data System (ADS)

    Sturz, Richard A.

    2000-11-01

    The human need to see for oneself, and to do so remotely, has given rise to video camera applications never before imagined and growing constantly. The instant understanding and verification offered by video lends its applications to every facet of life. Once an entertainment medium, video is now ever-present in our daily life. The application to the aircraft platform is one aspect of the video camera's versatility. Integrating the video camera into the aircraft platform is yet another story. The typical video camera, when applied to more standard scene imaging, poses less demanding parameters and considerations. This paper explores the video camera as applied to the more complicated airborne environment.

  1. High-speed camera analysis for nanoparticles produced by using a pulsed wire-discharge method

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hwan; Kim, Dae Sung; Ryu, Bong Ki; Suematsu, Hisayuki; Tanaka, Kenta

    2016-07-01

    We investigated the performance of a high-speed camera and the nanoparticle size distribution to quantify the mechanism of nanoparticle formation in a pulsed wire discharge (PWD) experiment. The Sn-58Bi alloy wire was 0.5 mm in diameter and 32 mm long; it was placed in the PWD chamber, and the evaporation-explosion process was observed by using a high-speed camera. In order to vary the conditions and analyze the mechanisms of nanoparticle synthesis in the PWD, we changed the pressure of the N2 gas in the chamber from 25 to 75 kPa. To synthesize nanoparticles on a nano-scale, we fixed the charging voltage at 6 kV, and the high-speed camera captured pictures at 22,500 frames per second. The experimental results show that the electrical explosion process at different N2 gas pressures can be characterized by the explosion's duration and intensity. The experiments at the lowest pressure exhibited a longer explosion duration and a greater intensity. Also, at low pressure, very small nanoparticles with good dispersion were produced.

  2. Development of a high-speed CT imaging system using EMCCD camera

    NASA Astrophysics Data System (ADS)

    Thacker, Samta C.; Yang, Kai; Packard, Nathan; Gaysinskiy, Valeriy; Burkett, George; Miller, Stuart; Boone, John M.; Nagarkar, Vivek

    2009-02-01

    The limitations of current CCD-based microCT X-ray imaging systems arise from two important factors. First, readout speeds are curtailed in order to minimize system read noise, which increases significantly with increasing readout rates. Second, the afterglow associated with commercial scintillator films can introduce image lag, leading to substantial artifacts in reconstructed images, especially when the detector is operated at several hundred frames/second (fps). For high speed imaging systems, high-speed readout electronics and fast scintillator films are required. This paper presents an approach to developing a high-speed CT detector based on a novel, back-thinned electron-multiplying CCD (EMCCD) coupled to various bright, high resolution, low afterglow films. The EMCCD camera, when operated in its binned mode, is capable of acquiring data at up to 300 fps with reduced imaging area. CsI:Tl,Eu and ZnSe:Te films, recently fabricated at RMD, apart from being bright, showed very good afterglow properties, favorable for high-speed imaging. Since ZnSe:Te films were brighter than CsI:Tl,Eu films, for preliminary experiments a ZnSe:Te film was coupled to an EMCCD camera at UC Davis Medical Center. A high-throughput tungsten anode X-ray generator was used, as the X-ray fluence from a mini- or micro-focus source would be insufficient to achieve high-speed imaging. A euthanized mouse held in a glass tube was rotated 360 degrees in less than 3 seconds, while radiographic images were recorded at various readout rates (up to 300 fps); images were reconstructed using a conventional Feldkamp cone-beam reconstruction algorithm. We have found that this system allows volumetric CT imaging of small animals in approximately two seconds at ~110 to 190 μm resolution, compared to several minutes at 160 μm resolution needed for the best current systems.

  3. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    NASA Technical Reports Server (NTRS)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
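    The intensity-tracking step described above (a target area analyzed in the temporal and frequency domains) reduces to something like the following sketch; the ROI coordinates and frame rate in the usage line are placeholders, not values from the test program.

```python
import numpy as np

def cavitation_frequency(frames, roi, fps):
    """Dominant fluctuation frequency (Hz) of cavitating fluid in a target area.

    frames: (n, h, w) grayscale stack; roi: (row0, row1, col0, col1); fps: frame rate.
    The peak of the spectrum of the ROI mean intensity is taken as the cavitation frequency.
    """
    r0, r1, c0, c1 = roi
    signal = frames[:, r0:r1, c0:c1].mean(axis=(1, 2))
    signal = signal - signal.mean()                 # remove DC so the peak is the modulation
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# e.g. cavitation_frequency(video_stack, roi=(100, 140, 200, 260), fps=15700)
```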

  4. Measuring droplet fall speed with a high-speed camera: indoor accuracy and potential outdoor applications

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Ku; Hsieh, Pei-Rong; Yuter, Sandra E.; Cheng, Lin-Wen; Tsai, Chia-Lun; Lin, Che-Yu; Chen, Ying

    2016-04-01

    Acquisition of accurate raindrop fall speed measurements outdoors in natural rain by means of moderate-cost and easy-to-use devices represents a long-standing and challenging issue in the meteorological community. Feasibility experiments were conducted to evaluate the indoor accuracy of fall speed measurements made with a high-speed camera and to evaluate its capability for outdoor applications. An indoor experiment operating in calm conditions showed that the high-speed imaging technique can provide fall speed measurements with a mean error of 4.1-9.7 % compared to Gunn and Kinzer's empirical fall-speed-size relationship for typical sizes of rain and drizzle drops. Results obtained using the same apparatus outside in summer afternoon showers indicated larger positive and negative velocity deviations compared to the indoor measurements. These observed deviations suggest that ambient flow and turbulence play a role in modifying drop fall speeds which can be quantified with future outdoor high-speed camera measurements. Because the fall speed measurements, as presented in this article, are analyzed on the basis of tracking individual, specific raindrops, sampling uncertainties commonly found in the widely adopted optical disdrometers can be significantly mitigated.
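    A minimal version of the per-drop analysis: the fall speed comes from the tracked vertical centroid, and the comparison value is an exponential fit to the Gunn-Kinzer data (the form usually attributed to Atlas et al., 1973). The pixel scale and frame rate in the comment are assumed values.

```python
import numpy as np

def fall_speed(centroids_px, fps, m_per_pixel):
    """Mean fall speed (m/s) of one tracked drop from its per-frame vertical centroid
    in pixels. Assumes the image y-axis points downward, so increasing y means falling."""
    y = np.asarray(centroids_px, dtype=float)
    return np.mean(np.diff(y)) * m_per_pixel * fps

def gunn_kinzer_reference(d_mm):
    """Approximate terminal speed (m/s) for drop diameter d_mm, exponential fit to the
    Gunn-Kinzer data; used only as the comparison value."""
    return 9.65 - 10.3 * np.exp(-0.6 * d_mm)

# err_percent = 100 * (fall_speed(track, fps=1000, m_per_pixel=1e-4)
#                      - gunn_kinzer_reference(1.0)) / gunn_kinzer_reference(1.0)
```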

  5. Distinguishing fall activities from normal activities by angular rate characteristics and high-speed camera characterization.

    PubMed

    Nyan, M N; Tay, F E H; Tan, A W Y; Seah, K H W

    2006-10-01

    Distinguishing sideways and backward falls from normal activities of daily living using angular rate sensors (gyroscopes) was explored in this paper. Gyroscopes were secured on a shirt at the positions of the sternum (S), the front of the waist (FW) and the right underarm (RU) to measure angular rate in the lateral and sagittal planes of the body during falls and normal activities. Moreover, the motions of the fall incidents were captured by a high-speed camera at a frame rate of 250 frames per second (fps) to study the body configuration during the fall. The high-speed camera and the sensor data capture system were activated simultaneously to synchronize the picture frames of the high-speed camera and the sensor data. The threshold level for each sensor was set to distinguish fall activities from normal activities. The lead time of fall activities (the time from when the threshold value is surpassed to when the hip hits the ground) and the relative angle of body configuration (the angle beta between the vertical line and the line from the center point of the foot, or the center point between the two legs, to that of the waist) at the threshold level were studied. For sideways falls, the lead times of the sensors at positions FW and S were about 200-220 ms and 135-182 ms, respectively. The lead time of the slippery backward fall (about 98 ms) from the sensor at position RU was shorter than that of the sideways falls from the sensors at positions FW and S. The relative angles of body configuration at the threshold level for sideways and backward falls were about 40-43 degrees for the sensor at position FW, about 43-52 degrees for the sensor at position S, and about 54 degrees for the sensor at position RU. This is the first study that investigates fall dynamics for detecting a fall before the person hits the ground using angular rate sensors (gyroscopes). PMID:16406739
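    The threshold detector and lead-time definition map onto a few lines of code. The sampling rate, threshold value, and impact index (taken here from the synchronized 250 fps video) are inputs; the specific numbers would come from the experiment.

```python
import numpy as np

def lead_time_ms(angular_rate_dps, impact_index, threshold_dps, fs_hz):
    """Lead time: time from the first sample exceeding the threshold to hip impact.

    angular_rate_dps: 1-D angular-rate signal (deg/s); impact_index: sample index of
    hip impact; threshold_dps, fs_hz: detector threshold and sensor sampling rate.
    Returns the lead time in ms, or None if the threshold is never crossed before impact.
    """
    above = np.flatnonzero(np.abs(angular_rate_dps[:impact_index]) > threshold_dps)
    if above.size == 0:
        return None
    return (impact_index - above[0]) * 1000.0 / fs_hz
```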

  6. Inexpensive range camera operating at video speed.

    PubMed

    Kramer, J; Seitz, P; Baltes, H

    1993-05-01

    An optoelectronic device has been developed and built that acquires and displays the range data of an object surface in space in video real time. The recovery of depth is performed with active triangulation. A galvanometer scanner system sweeps a sheet of light across the object at a video field rate of 50 Hz. High-speed signal processing is achieved through the use of a special optical sensor and hardware implementation of the simple electronic-processing steps. Fifty range maps are generated per second and converted into a European standard video signal where the depth is encoded in gray levels or color. The image resolution currently is 128 x 500 pixels with a depth accuracy of 1.5% of the depth range. The present setup uses a 500-mW diode laser for the generation of the light sheet. A 45-mm imaging lens covers a measurement volume of 93 mm x 61 mm x 63 mm at a medium distance of 250 mm from the camera, but this can easily be adapted to other dimensions. PMID:20820391
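    The active-triangulation geometry behind such a system can be written compactly for a pinhole camera. This is the textbook single-laser-sheet relation, not the authors' exact calibration; the numbers in the usage line are illustrative.

```python
import math

def triangulation_depth(u_px, f_px, baseline_m, laser_angle_rad):
    """Depth from laser-sheet triangulation with a pinhole camera.

    The sheet leaves the source at x = baseline and tilts toward the camera axis,
    so it satisfies x = baseline - z*tan(angle); the camera maps (x, z) to the image
    coordinate u = f*x/z (u measured from the optical center, in pixels).
    Solving gives z = f*baseline / (u + f*tan(angle)).
    """
    return f_px * baseline_m / (u_px + f_px * math.tan(laser_angle_rad))

# e.g. triangulation_depth(u_px=120.0, f_px=2000.0, baseline_m=0.1,
#                          laser_angle_rad=math.radians(25))   # ~0.19 m
```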

  7. A novel compact high speed x-ray streak camera (invited).

    PubMed

    Hares, J D; Dymoke-Bradshaw, A K L

    2008-10-01

    Conventional in-line high speed streak cameras have fundamental issues when their performance is extended below a picosecond. The transit time spread caused by both the spread in the photoelectron (PE) "birth" energy and space charge effects causes significant electron pulse broadening along the axis of the streak camera and limits the time resolution. Also it is difficult to generate a sufficiently large sweep speed. This paper describes a new instrument in which the extraction electrostatic field at the photocathode increases with time, converting time to PE energy. A uniform magnetic field is used to measure the PE energy, and thus time, and also focuses in one dimension. Design calculations are presented for the factors limiting the time resolution. With our design, subpicosecond resolution with high dynamic range is expected. PMID:19044647

  8. A novel compact high speed x-ray streak camera (invited)

    SciTech Connect

    Hares, J. D.; Dymoke-Bradshaw, A. K. L.

    2008-10-15

    Conventional in-line high speed streak cameras have fundamental issues when their performance is extended below a picosecond. The transit time spread caused by both the spread in the photoelectron (PE) "birth" energy and space charge effects causes significant electron pulse broadening along the axis of the streak camera and limits the time resolution. Also it is difficult to generate a sufficiently large sweep speed. This paper describes a new instrument in which the extraction electrostatic field at the photocathode increases with time, converting time to PE energy. A uniform magnetic field is used to measure the PE energy, and thus time, and also focuses in one dimension. Design calculations are presented for the factors limiting the time resolution. With our design, subpicosecond resolution with high dynamic range is expected.

  9. In-Situ Observation of Horizontal Centrifugal Casting using a High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Esaka, Hisao; Kawai, Kohsuke; Kaneko, Hiroshi; Shinozuka, Kei

    2012-07-01

    In order to understand the solidification process in horizontal centrifugal casting, experimental equipment for in-situ observation using a transparent organic substance has been constructed. A succinonitrile-1 mass% water alloy was filled into a round glass cell, and the glass cell was completely sealed. To observe the movement of equiaxed grains more clearly and to understand the effect of the movement of the free surface, a high-speed camera has been installed on the equipment. The most advantageous point of this equipment is that the camera rotates with the mold, so that one can observe the same location of the glass cell. Because the recording rate could be increased up to 250 frames per second, the quality of the movie was dramatically improved, which made it easier and more precise to follow a given equiaxed grain. The amplitude of oscillation of an equiaxed grain (= At) decreased as solidification proceeded.

  10. Estimation of Rotational Velocity of Baseball Using High-Speed Camera Movies

    NASA Astrophysics Data System (ADS)

    Inoue, Takuya; Uematsu, Yuko; Saito, Hideo

    Movies can be used to analyze a player's performance and improve his/her skills. In the case of baseball, the pitching is recorded by using a high-speed camera, and the recorded images are used to improve the pitching skills of the players. In this paper, we present a method for estimating the rotational velocity of a baseball on the basis of movies recorded by high-speed cameras. Unlike previous methods, we consider the original seam pattern of the ball seen in the input movie and identify the corresponding image from a database of images by adopting the parametric eigenspace method. These database images are CG images. The ball's posture can be determined on the basis of the rotational parameters. In the proposed method, the symmetric property of the ball is also taken into consideration, and time continuity is used to determine the ball's posture. In the experiments, we use the proposed method to estimate the rotational velocity of a baseball on the basis of real movies and movies consisting of CG images of the baseball. The results of both experiments prove that our method can be used to estimate the ball's rotation accurately.

  11. Photography of the commutation spark using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Hanazawa, Tamio; Egashira, Torao; Tanaka, Yasuhiro; Egoshi, Jun

    1997-12-01

    In the single-phase AC commutator motor (known as a universal motor), which is widely used in vacuum cleaners and other electrical appliances, commutation sparks cause problems such as brush wear and electrical noise. We have therefore attempted to use a high-speed camera to elucidate the commutation-spark mechanism visually. The high-speed camera that we used is capable of photographing at 5,000 - 20,000,000 frames/s, and it can be triggered either from the operation unit or by an external trigger signal. In this paper, we propose an external trigger method in which a hole several millimeters across is opened in the motor and argon laser light is used, so that the commutator segments can be photographed in position; we then conducted the experiment. This method enabled us to photograph the motor's commutator segments from any position, and we were able to confirm spark generation at every other commutator segment. Furthermore, after confirming the spark-generation position on the commutator segment, we increased the photographing speed to capture the moment of spark generation in more detail; we then prepared our report.

  12. Laser Doppler Perfusion Imaging with a high-speed CMOS-camera

    NASA Astrophysics Data System (ADS)

    Draijer, Matthijs J.; Hondebrink, Erwin; Steenbergen, Wiendelt; van Leeuwen, Ton G.

    2007-07-01

    The technique of Laser Doppler Perfusion Imaging (LDPI) is widely used for determining cerebral blood flow or skin perfusion in the case of burns. The commonly used laser Doppler perfusion imagers are scanning systems, which scan the area under investigation point by point and use a single photodetector to capture the photoelectric current and obtain a perfusion map. In that case the imaging time for a perfusion map of 64 x 64 pixels is around 5 minutes. The disadvantages of a long imaging time for in-vivo imaging are a greater chance of movement artifacts, reduced comfort for the patient, and the inability to follow fast-changing perfusion conditions. We present a laser Doppler perfusion imager which makes use of a high-speed CMOS camera. By illuminating the area under investigation and simultaneously taking images at high speed with the camera, it is possible to obtain a perfusion map of the area under investigation in a shorter time than with the commonly used laser Doppler perfusion imagers.
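    With a full image stack in hand, the perfusion estimate is typically the first moment of each pixel's intensity-fluctuation power spectrum, normalized by its squared mean intensity. The sketch below shows that conventional estimator; the band limits are illustrative, and the abstract does not state exactly which estimator the authors use.

```python
import numpy as np

def perfusion_map(frames, fps, f_low=30.0, f_high=None):
    """Per-pixel laser Doppler perfusion estimate from a high-speed image stack.

    frames: (n_frames, h, w) intensity stack; fps: camera frame rate;
    [f_low, f_high]: Doppler band in Hz. Estimator: first moment of the fluctuation
    power spectrum divided by the squared mean (DC) intensity.
    """
    n = frames.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    if f_high is None:
        f_high = freqs[-1]
    band = (freqs >= f_low) & (freqs <= f_high)
    dc = frames.mean(axis=0)                                   # (h, w) mean intensity
    spec = np.abs(np.fft.rfft(frames - dc, axis=0)) ** 2       # power spectrum per pixel
    first_moment = np.tensordot(freqs[band], spec[band], axes=(0, 0))
    return first_moment / (dc ** 2 + 1e-12)
```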

  13. High-speed video study of laser-induced forward transfer of silver nano-suspensions

    NASA Astrophysics Data System (ADS)

    Mathews, S. A.; Auyeung, R. C. Y.; Kim, H.; Charipar, N. A.; Piqué, A.

    2013-08-01

    High-speed video (100 000 fps) is used to examine the behavior of silver nanoparticle suspensions ejected from a donor substrate during laser-induced forward transfer (LIFT) as a function of viscosity, donor film thickness, and voxel area. Both high-speed video and inspection of the post-transferred material indicate dramatic changes in the behavior of the fluid as the viscosity of the nano-suspensions increases from that of inks (~0.01 Pa·s) to pastes (>100 Pa·s). Over a specific range of viscosities (90-150 Pa·s) and laser fluences (35-65 mJ/cm²), the ejected voxels precisely reproduce the size and shape of the laser spot. This LIFT regime is known as laser decal transfer or LDT. Analysis of the high-speed video indicates that the speeds of the voxels released by the LDT process do not exceed 1 m/s. Such transfer speeds are at least an order of magnitude lower than those associated with other LIFT processes, thus minimizing voxel deformation during flight and upon impact with the receiving substrate. Variation in the threshold fluence for initiating the LDT process is measured as a function of donor film thickness and transfer spot size. Overall, the congruent nature of the silver nanopaste voxels deposited by LDT is unique among non-contact digital printing techniques given its control of the voxel's size and shape, thus allowing partial parallelization of the direct-write process.

  14. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode™." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
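    The 136-year figure is what one gets from a 32-bit whole-seconds field; the actual Geo-TimeCode layout is not given here, so this is only a consistency check.

```python
# A 32-bit seconds counter wraps after 2**32 seconds, slightly more than 136 years.
seconds_per_year = 365.25 * 24 * 3600
print(2**32 / seconds_per_year)   # ~136.1 years
```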

  15. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed, details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  16. Three-Dimensional Optical Reconstruction of Vocal Fold Kinematics Using High-Speed Video With a Laser Projection System.

    PubMed

    Luegmair, Georg; Mehta, Daryush D; Kobler, James B; Döllinger, Michael

    2015-12-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry.

  17. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485

  18. Review of ULTRANAC high-speed camera: applications, results, and techniques

    NASA Astrophysics Data System (ADS)

    Lawrence, Brett R.

    1997-05-01

    The ULTRANAC Ultra-High Speed Framing and Streak Camera System, from Imco Electro-Optics Limited of England was first presented to the market at the 19th ICHSPP held in Cambridge, England, in 1990. It was the world's first fully computerized image converter camera and is capable of remote programming at framing speeds up to 20 million fps and streak speeds up to 1 ns/mm. The delay, exposure, interframe and output trigger times can be independently programmed within any one sequence. Increased spatial resolution is obtained by generating a series of static frames during the exposure period as opposed to the previously utilized sine wave shuttering technique. The first ULTRANAC was supplied to Japan, through the parent company, NAC, in 1991. Since then, more than 40 cameras have been installed world-wide. The range of applications is many and varied covering impact studies, shock wave research, high voltage discharge, ballistics, detonics, laser and plasma effects, combustion and injection research, nuclear and particle studies, crack propagation and ink jet printer development among many others. This paper attempts to present the results obtained from such tests. It will describe the methods of recording the images, both film and electronically, and recent advances in cooled CCD image technology and associated software analysis programs.

  19. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Jacobson, B. O.; Hamrock, B. J.

    1982-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation.

  20. Measuring the kinetic parameters of saltating sand grains using a high-speed digital camera

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Wang, Yuan; Jia, Pan

    2014-06-01

    A high-speed digital camera is used to record the saltation of three sand samples (diameter ranges: 300-500, 200-300 and 100-125 μm). An overlapping particle-tracking algorithm is then used to reconstruct the saltation trajectories, and a differential scheme is used to extract the kinetic parameters of the saltating grains. The velocity results confirm the propagating feature of saltation in maintaining near-surface aeolian sand transport. Moreover, the acceleration of saltating sand grains was obtained directly from the reconstructed trajectory, and the results reveal that the climbing stage of the saltating trajectory represents a critical process of energy transfer while the sand grains travel through air.
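    The differential scheme for extracting kinetic parameters amounts to finite-differencing the reconstructed trajectory. A minimal sketch, using central differences via numpy (one reasonable choice; the authors' exact scheme is not specified):

```python
import numpy as np

def grain_kinematics(x_m, y_m, fps):
    """Velocity and acceleration along a reconstructed saltation trajectory.

    x_m, y_m: per-frame grain position in metres; fps: camera frame rate.
    Central differences give per-frame velocity and acceleration; gravity
    (~ -9.8 m/s^2 in y) is a useful sanity check on the ascending branch.
    """
    dt = 1.0 / fps
    vx, vy = np.gradient(x_m, dt), np.gradient(y_m, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    return vx, vy, ax, ay
```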

  1. Study of cavitation bubble dynamics during Ho:YAG laser lithotripsy by high-speed camera

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yu, Honggang; Devincentis, Dennis

    2016-02-01

    Although laser lithotripsy is now the preferred treatment option for urolithiasis, the mechanism of laser-pulse-induced calculus damage is still not fully understood. This is because the process of laser-pulse-induced calculus damage involves quite a few physical and chemical processes whose time-scales are very short (down to the sub-microsecond level). For laser lithotripsy, the laser-pulse-induced energy flow can be summarized as: photon energy in the laser pulse --> photon absorption generated heat in the water liquid and vapor (superheated water or plasma effect) --> shock wave (bow shock, acoustic wave) --> cavitation bubble dynamics (oscillation, movement of the bubble center, superheated water at collapse, sonoluminescence) --> calculus damage and motion (calculus heat-up, spallation/melting of the stone, breaking of mechanical/chemical bonds, debris ejection, and retropulsion of the remaining calculus body). Cavitation bubble dynamics is the centerpiece of the physical processes that links the whole energy flow chain from laser pulse to calculus damage. In this study, cavitation bubble dynamics was investigated with a high-speed camera and a needle hydrophone. A commercial pulsed Ho:YAG laser at 2.1 μm, StoneLight™ 30, with pulse energy from 0.5 J up to 3.0 J and pulse width from 150 μs up to 800 μs, was used as the laser pulse source. The fiber used in the investigation is a SureFlex™ fiber, Model S-LLF365, with a 365 μm core diameter. A high-speed camera with a frame rate of up to 1 million fps was used in this study. The results revealed the cavitation bubble dynamics (oscillation and movement of the bubble center) for laser pulses at different energy levels and pulse widths. More detailed investigation of bubble dynamics with different types of laser, and of the relationship between cavitation bubble dynamics and calculus damage (fragmentation/dusting), will be conducted as a future study.

  2. Measurement of intracellular ice formation kinetics by high-speed video cryomicroscopy.

    PubMed

    Karlsson, Jens O M

    2015-01-01

    Quantitative information about the kinetics and cumulative probability of intracellular ice formation is necessary to develop minimally damaging freezing procedures for the cryopreservation of cells and tissue. Conventional cryomicroscopic assays, which rely on indirect evidence of intracellular freezing (e.g., opacity changes in the cell cytoplasm), can yield significant errors in the estimated kinetics. In contrast, the formation and growth of intracellular ice crystals can be accurately detected using temporally resolved imaging methods (i.e., video recording at sub-millisecond resolution). Here, detailed methods for the setup and operation of a high-speed video cryomicroscope system are described, including protocols for imaging of intracellular ice crystallization events, and stochastic analysis of the ice formation kinetics in a cell population. Recommendations are provided for temperature profile design, sample preparation, and configuration of the video acquisition parameters. Throughout this chapter, the protocols incorporate best practices that have been drawn from over a decade of experience with high-speed video cryomicroscopy in our laboratory.
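    The stochastic analysis mentioned typically reduces to an empirical cumulative probability of intracellular ice formation (IIF) versus temperature; a minimal sketch of that reduction is below (the exact statistical treatment in the protocol may differ).

```python
import numpy as np

def cumulative_iif(nucleation_temps_c, n_cells_total, temps_eval_c):
    """Empirical cumulative probability of IIF as a function of temperature.

    nucleation_temps_c: IIF temperatures observed for cells that froze intracellularly;
    n_cells_total: number of cells observed (including cells that never froze);
    temps_eval_c: temperatures (decreasing during cooling) at which to evaluate the curve.
    At temperature t, the cumulative probability is the fraction of all cells whose
    nucleation temperature was at or above t (i.e., that had already frozen by then).
    """
    obs = np.asarray(nucleation_temps_c, dtype=float)
    return np.array([(obs >= t).sum() / n_cells_total for t in temps_eval_c])
```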

  3. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.

  4. Video indirect ophthalmoscopy using a hand-held video camera.

    PubMed

    Shanmugam, Mahesh P

    2011-01-01

    Fundus photography in adults and cooperative children is possible with a fundus camera or by using a slit lamp-mounted digital camera. A RetCam™ or a video indirect ophthalmoscope is necessary for fundus imaging in infants and young children under anesthesia. Herein, a technique of converting a digital video camera into a video indirect ophthalmoscope and using it for fundus imaging is described. This device will allow anyone with a hand-held video camera to obtain fundus images. Limitations of this technique involve a learning curve and inability to perform scleral depression.

  5. High-speed holographic correlation system for video identification on the internet

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material on the Internet. By combining a high-speed correlation engine and web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which is capable of easily replacing a digital authorization server in FReCs with optical correlation.

  6. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  7. The Eye, Film, And Video In High-Speed Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hyzer, William G.

    1987-09-01

    The unaided human eye with its inherent limitations serves us well in the examination of most large-scale, slow-moving, natural and man-made phenomena, but constraints imposed by inertial factors in the visual mechanism severely limit our ability to observe fast-moving and short-duration events. The introduction of high-speed photography (c. 1851) and videography (c. 1970) served to stretch the temporal limits of human perception by several orders of magnitude so critical analysis could be performed on a wide range of rapidly occurring events of scientific, technological, industrial, and educational interest. The preferential selection of eye, film, or video imagery in fulfilling particular motion analysis requirements is determined largely by the comparative attributes and limitations of these methods. The choice of either film or video does not necessarily eliminate the eye, because it usually continues as a vital link in the analytical chain. The important characteristics of the eye, film, and video imagery in high-speed motion analysis are discussed with particular reference to fields of application which include biomechanics, ballistics, machine design, mechanics of materials, sports analysis, medicine, production engineering, and industrial trouble-shooting.

  8. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
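    The triangulation step of such a multi-camera reconstruction can be sketched with the standard linear DLT solve. The projection matrices are assumed to come from the photogrammetric calibration, and the ejection-velocity helper simply differences the first triangulated positions; this is a generic outline, not the authors' modified algorithm.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """3-D point from two synchronized views via the linear DLT method.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) pixel coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def ejection_velocity(track1, track2, P1, P2, fps):
    """Ejection speed and launch angle from the first few triangulated bomb positions
    (world z assumed vertical)."""
    pts = np.array([triangulate(P1, P2, a, b) for a, b in zip(track1, track2)])
    v = (pts[1:] - pts[:-1]) * fps                   # m/s between consecutive frames
    speed = np.linalg.norm(v[0])
    angle = np.degrees(np.arcsin(v[0, 2] / speed))   # angle above horizontal
    return speed, angle
```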

  9. Motion analysis of mechanical heart valve prosthesis utilizing high-speed video

    NASA Astrophysics Data System (ADS)

    Adlparvar, Payam; Guo, George; Kingsbury, Chris

    1993-01-01

    The Edwards-Duromedics (ED) mechanical heart valve prosthesis is of a bileaflet design, incorporating unique design features that distinguish its performance with respect to other mechanical valves of similar type. Leaflet motion of mechanical heart valves, particularly during closure, is related to valve durability, valve sounds and the efficiency of the cardiac output. Modifications to the ED valve have resulted in significant improvements with respect to leaflet motion. In this study a high-speed video system was used to monitor the leaflet motion of the valve, and to compare the performance of the Modified Specification to that of the Original Specification using a St. Jude Medical as a control valve.

  10. Time-Correlated High-Speed Video and Lightning Mapping Array Results For Triggered Lightning Flashes

    NASA Astrophysics Data System (ADS)

    Eastvedt, E. M.; Eack, K.; Edens, H. E.; Aulich, G. D.; Hunyady, S.; Winn, W. P.; Murray, C.

    2009-12-01

    Several lightning flashes triggered by the rocket-and-wire technique at Langmuir Laboratory's Kiva facility on South Baldy (approximately 3300 meters above sea level) were captured on high-speed video during the summers of 2008 and 2009. These triggered flashes were also observed with Langmuir Laboratory's Lightning Mapping Array (LMA), a 3-D VHF time-of-arrival system. We analyzed nine flashes (obtained in four different storms) for which the electric field at ground was positive (foul-weather). Each was initiated by an upward positive leader that propagated into the cloud. In all cases observed, the leader exhibited upward branching, and most of the flashes had multiple return strokes.

  11. High-Speed Photography 101

    NASA Astrophysics Data System (ADS)

    Davidhazy, Andrew

    1997-05-01

    This paper describes the contents of a unique introductory, applications oriented, high speed photography course offered to Imaging and Photographic Technology majors at the Rochester Institute of Technology. The course covers the theory and practice of photographic systems designed to permit analysis of events of very short duration. Included are operational characteristics of intermittent and rotating prism cameras, rotating mirror and drum cameras, synchronization systems and timing controls and high speed flash and stroboscopic systems, and high speed video recording. Students gain basic experience not only in the use of fundamental equipment but also in proper planning, set-up and introductory data reduction techniques through a series of practical experiments.

  12. ARINC 818 express for high-speed avionics video and power over coax

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Alexander, Jon

    2012-06-01

    CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3 Gbps (up to 6 Gbps) communication, includes a 21 Mbps return path allowing for bidirectional communication, and provides up to 13 W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus," is a protocol standard developed specifically for high speed, mission critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer from CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2 Gbps, but beyond 2 Gbps, it has been implemented exclusively over fiber optic links. In many rugged applications a copper interface is still desired; implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6 Gbps copper interfaces for ARINC 818. Results of the successful technology demonstration dubbed ARINC 818 Express are presented showing 3 Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.

  13. Measurement of inkjet first-drop behavior using a high-speed camera

    NASA Astrophysics Data System (ADS)

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time is known to possibly differ from the steady-state behavior. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of a first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was determined to be an integer multiple of the jetting frequency, and in this manner, we can directly compare the droplet locations of each drop in order to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity.
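
    The acquisition detail worth noting above is that the frame rate is chosen as an integer multiple of the jetting frequency, so the same frame offset after each ejection always corresponds to the same nominal droplet flight time and positions can be compared drop by drop. A minimal sketch of that bookkeeping, with all numbers hypothetical:

```python
# Hypothetical numbers: 1 kHz jetting, camera running at an integer multiple of it.
jetting_freq_hz = 1_000            # drop-on-demand jetting frequency
frames_per_drop = 20               # integer multiple chosen by the user
frame_rate_hz = jetting_freq_hz * frames_per_drop   # 20,000 fps

def frame_for_drop(drop_index, phase_frame):
    """Index of the frame showing drop `drop_index` at a fixed flight time.

    `phase_frame` is the delay (in frames) after each ejection at which the
    droplet position is compared across drops.
    """
    return drop_index * frames_per_drop + phase_frame

# Compare the first five drops at the same flight time (frame 12 after ejection):
indices = [frame_for_drop(i, 12) for i in range(5)]
# The droplet centroid in these frames can then be compared directly to
# quantify first-drop dissimilarity.
print(indices)
```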

  14. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Jacobson, B. O.

    1983-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation. Previously announced in STAR as N82-20543

  15. Measurement of inkjet first-drop behavior using a high-speed camera.

    PubMed

    Kwon, Kye-Si; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-01

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time is known to possibly differ from the steady-state behavior. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of a first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was determined to be an integer multiple of the jetting frequency, and in this manner, we can directly compare the droplet locations of each drop in order to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity. PMID:27036813

  16. Synchronization of high speed framing camera and intense electron-beam accelerator.

    PubMed

    Cheng, Xin-Bing; Liu, Jin-Liang; Hong, Zhi-Qiang; Qian, Bao-Liang

    2012-06-01

    A new trigger program is proposed to realize the synchronization of a high speed framing camera (HSFC) and an intense electron-beam accelerator (IEBA). The trigger program, which includes acquisition of the light signal radiated from the main switch of the IEBA and a signal processing circuit, provides a trigger signal with a rise time of 17 ns and an amplitude of about 5 V. First, the light signal was collected by an avalanche photodiode (APD) module, and the delay time between the output voltage of the APD and the load voltage of the IEBA was measured; it was about 35 ns. Subsequently, the output voltage of the APD was processed further by the signal processing circuit to obtain the trigger signal. Finally, by combining the trigger program with an IEBA, the trigger program operated stably, and a delay time of 30 ns between the trigger signal of the HSFC and the output voltage of the IEBA was obtained. Meanwhile, when surface flashover occurred at the high density polyethylene sample, the delay time between the trigger signal of the HSFC and the flashover current was up to 150 ns, which satisfied the synchronization requirement of the HSFC and the IEBA. The experimental results show that the trigger program compensates for the trigger signal processing time and the inherent delay time of the HSFC. PMID:22755659

  17. Initial laboratory evaluation of color video cameras

    SciTech Connect

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND--91-2579). It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods, and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  18. Accuracy of two-color pyrometry using color high-speed cameras for measurement of luminous flames

    NASA Astrophysics Data System (ADS)

    Usui, Hiroyuki; Mitsui, Kenji

    2007-01-01

    Owing to recent developments in electronics, including new solid-state image sensors such as area CCD and CMOS sensors, and to progress in image processing techniques, new imaging radiometers have been developed which two-dimensionally acquire image data of objects moving at high speed and at high temperature, and immediately present the temperature distribution over the object graphically. We successfully measured the temperature distribution and the KL factor distribution, a measure of absorption strength, for combustion in diesel engine cylinders and other high-speed luminous flames, using single-sensor color high-speed cameras and applying the two-color pyrometry introduced by H. C. Hottel and F. P. Broughton. The measurement accuracy depends on the color reproducibility of the high-speed camera, which is in effect used as a brightness pyrometer, because two-color pyrometry of luminous flames is based on the brightness temperature in two wavelength bands such as red and green. In this paper, we present a method of maintaining the measurement accuracy of a high-speed camera used as a brightness pyrometer and of the two-color pyrometry developed from it.
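
    Two-color pyrometry of luminous (sooting) flames is commonly formulated with Wien's approximation together with the Hottel-Broughton emissivity model, ε_λ = 1 - exp(-KL/λ^α); the brightness temperatures in the red and green bands then determine the true temperature T and KL simultaneously. The sketch below solves the two resulting equations numerically. It is a generic illustration, not the method of the paper; the wavelengths, brightness temperatures, and α are hypothetical placeholders that would in practice come from camera calibration.

```python
import numpy as np
from scipy.optimize import fsolve

C2 = 1.4388e-2          # second radiation constant, m*K
ALPHA = 1.39            # Hottel-Broughton exponent for soot (typical visible-band value)

def residuals(x, lams, s_bright):
    """Two-color pyrometry residuals under Wien's approximation.

    x = (T, KL); for each wavelength, the soot emissivity
    1 - exp(-KL / lam**ALPHA) must equal exp((C2/lam) * (1/T - 1/S)),
    where S is the calibrated brightness temperature in that band.
    KL is in the units implied by expressing lam in metres.
    """
    T, KL = x
    res = []
    for lam, S in zip(lams, s_bright):
        emissivity = 1.0 - np.exp(-KL / lam**ALPHA)
        res.append(emissivity - np.exp((C2 / lam) * (1.0 / T - 1.0 / S)))
    return res

# Hypothetical brightness temperatures measured in the red and green bands:
lams = (650e-9, 550e-9)            # band-centre wavelengths, m
s_bright = (1850.0, 1900.0)        # calibrated brightness temperatures, K

T, KL = fsolve(residuals, x0=(2000.0, 1e-9), args=(lams, s_bright))
print(f"flame temperature ~ {T:.0f} K, KL ~ {KL:.2e}")   # ~2000 K for these placeholder inputs
```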

  19. Practical use of high-speed cameras for research and development within the automotive industry: yesterday and today

    NASA Astrophysics Data System (ADS)

    Steinmetz, Klaus

    1995-05-01

    Within the automotive industry, especially in the development and improvement of safety systems, we find many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam', and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high-speed cameras.

  20. Experimental study of snow accretion on overhead transmission lines using a wind tunnel and a high-speed camera

    NASA Astrophysics Data System (ADS)

    Yasui, Mitsuru; Kagami, Jun; Ando, Hitoshi; Hamada, Yutaka

    1995-05-01

    The experimental study of snow accretion on overhead power transmission lines was carried out to obtain data on accretion rates using the artificial snow accretion test equipment and a high speed camera. We evaluated the accretion rate relative to temperature and wind velocity under simulated conditions of natural snowing and strong winds.

  1. High-speed light field camera and frequency division multiplexing for fast multi-plane velocity measurements.

    PubMed

    Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen

    2015-09-21

    Non-intrusive fast 3d measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements with high data rates are possible. While scanning is one standard technique to add the third dimension, the volumetric data is not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired with a measurement rate of 0.5 kHz with a single high-speed camera.

  2. An Impact Velocity Device Design for Blood Spatter Pattern Generation with Considerations for High-Speed Video Analysis.

    PubMed

    Stotesbury, Theresa; Illes, Mike; Vreugdenhil, Andrew J

    2016-03-01

    A mechanical device that uses gravitational and spring compression forces to create spatter patterns of known impact velocities is presented and discussed. The custom-made device uses either two or four springs (k1 = 267.8 N/m, k2 = 535.5 N/m) in parallel to create seventeen reproducible impact velocities between 2.1 and 4.0 m/s. The impactor is held at several known spring extensions using an electromagnet. Trigger inputs to the high-speed video camera allow the user to control the magnet's release while capturing video footage simultaneously. A polycarbonate base is used to allow for simultaneous monitoring of the side and bottom views of the impact event. Twenty-four patterns were created across the impact velocity range and analyzed using HemoSpat. Area of origin estimations fell within an acceptable range (ΔXav = -5.5 ± 1.9 cm, ΔYav = -2.6 ± 2.8 cm, ΔZav = +5.5 ± 3.8 cm), supporting distribution analysis for the use in research or bloodstain pattern training. This work provides a framework for those interested in developing a robust impact device.
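
    The impact velocity of such a device follows from a simple energy balance between the gravitational drop and the spring compression, neglecting losses. The sketch below uses the spring constants quoted in the abstract, but the impactor mass, drop height, extension, and spring combination are hypothetical:

```python
import math

def impact_velocity(m_kg, drop_height_m, k_n_per_m, extension_m, g=9.81):
    """Estimate impact velocity from gravitational plus spring potential energy:
    0.5*m*v**2 = m*g*h + 0.5*k*x**2  (friction and losses neglected)."""
    energy = m_kg * g * drop_height_m + 0.5 * k_n_per_m * extension_m**2
    return math.sqrt(2.0 * energy / m_kg)

# Springs in parallel simply add; which springs are paired is an assumption here.
k_two = 2 * 267.8                      # N/m, two of the softer springs
k_four = 2 * 267.8 + 2 * 535.5         # N/m, all four springs

# Hypothetical impactor mass, drop height, and spring extension:
v = impact_velocity(m_kg=0.5, drop_height_m=0.15, k_n_per_m=k_four, extension_m=0.05)
print(f"predicted impact velocity ~ {v:.1f} m/s")   # lands in the 2-4 m/s range
```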

  3. An Impact Velocity Device Design for Blood Spatter Pattern Generation with Considerations for High-Speed Video Analysis.

    PubMed

    Stotesbury, Theresa; Illes, Mike; Vreugdenhil, Andrew J

    2016-03-01

    A mechanical device that uses gravitational and spring compression forces to create spatter patterns of known impact velocities is presented and discussed. The custom-made device uses either two or four springs (k1 = 267.8 N/m, k2 = 535.5 N/m) in parallel to create seventeen reproducible impact velocities between 2.1 and 4.0 m/s. The impactor is held at several known spring extensions using an electromagnet. Trigger inputs to the high-speed video camera allow the user to control the magnet's release while capturing video footage simultaneously. A polycarbonate base is used to allow for simultaneous monitoring of the side and bottom views of the impact event. Twenty-four patterns were created across the impact velocity range and analyzed using HemoSpat. Area of origin estimations fell within an acceptable range (ΔXav = -5.5 ± 1.9 cm, ΔYav = -2.6 ± 2.8 cm, ΔZav = +5.5 ± 3.8 cm), supporting distribution analysis for the use in research or bloodstain pattern training. This work provides a framework for those interested in developing a robust impact device. PMID:27404625

  4. Characterising the dynamics of expirated bloodstain pattern formation using high-speed digital video imaging.

    PubMed

    Donaldson, Andrea E; Walker, Nicole K; Lamont, Iain L; Cordiner, Stephen J; Taylor, Michael C

    2011-11-01

    During forensic investigations, it is often important to be able to distinguish between impact spatter patterns (blood from gunshots, explosives, blunt force trauma and/or machinery accidents) and bloodstain patterns generated by expiration (blood from the mouth, nose or lungs). These patterns can be difficult to distinguish on the basis of the size of the bloodstains. In this study, high-speed digital video imaging has been used to investigate the formation of expirated bloodstain patterns generated by breathing, spitting and coughing mechanisms. Bloodstain patterns from all three expiration mechanisms were dominated by the presence of stains less than 0.5 mm in diameter. Video analysis showed that in the process of coughing blood, high-velocity, very small blood droplets were ejected first. These were followed by lower velocity, larger droplets, strands and plumes of liquid held together in part by saliva. The video images showed the formation of bubble rings and beaded stains, traditional markers for classifying expirated patterns. However, the expulsion mechanism, the distance travelled by the blood droplets, and the type of surface the blood was deposited on were all factors determining whether beaded stains were generated.

  5. Lifetime and structures of TLEs captured by high-speed camera on board aircraft

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sanmiya, Y.; Sato, M.; Kudo, T.; Inoue, T.

    2012-12-01

    Temporal development of sprite streamer is the manifestation of the local electric field and conductivity. Therefore, in order to understand the mechanisms of sprite, which show a large variety in temporal and spatial structures, the detailed analysis of both fine and macro-structures with high time resolution are to be the key approach. However, due to the long distance from the optical equipments to the phenomena and to the contamination by aerosols, it's not easy to get clear images of TLEs on the ground. In the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of NHK Cosmic Shore project, was carried with two jet airplanes under collaboration between NHK, Japan Broadcasting Corporation, and universities. On 8 nights out of 16 standing-by, the jets took off from the airport near Denver, Colorado, and an airborne high speed camera captured over 60 TLE events at a frame rate of 8000-10,000 /sec. Some of them show several tens of streamers in one sprite event, which repeat splitting at the down-going end of streamers or beads. The velocities of the bottom ends and the variations of their brightness are traced carefully. It is found that the top velocity is maintained only for the brightest beads and others become slow just after the splitting. Also the whole luminosity of one sprite event has short time duration with rapid downward motion if the charge moment change of the parent lightning is large. The relationship between diffuse glows such as elves and sprite halos, and subsequent discrete structure of sprite streamers is also examined. In most cases the halo and elves seem to show inhomogenous structures before being accompanied by streamers, which develop to bright spots or streamers with acceleration of the velocity. Those characteristics of velocity and lifetime of TLEs provide key information of their generation mechanism.

  6. Characterization of near-bed sediment transport in air and water by high-speed video

    NASA Astrophysics Data System (ADS)

    Martin, C. S.; Hamm, N. T.; Cushman-Roisin, B.; Dade, W. B.

    2010-12-01

    Near-bed sediment transport comprises a large fraction of the total mass flux of environmental flows, yet is difficult to characterize at fine scales without disturbing the flow. Particle-tracking velocimetry by means of high-speed video has proven to be an effective technique for quantifying particle behavior under this constraint. We present here results of experiments examining: i) the vertical structure of mass and momentum and ii) initial properties of particle trajectories within a layer of sediment transport immediately above a bed of loose grains in channel flows. Observations were conducted in both air and water of test particles with the density of quartz and with median diameters that ranged from 30 µm to 600 µm. Analysis of such a wide range of sediment transport conditions by the same method permits an evaluation of the fundamental structure of the near-bed sediment transport layer, including particle concentration and particle velocity. With appropriate normalization, self-consistent structure is identified for all particles in air, medium sand in water, and fine sand and silt in water. The integral values by which the data were normalized are found to be consistent with the relevant physical properties of the sediment transporting flow. This study advances the use of high-speed videography as a method by which to investigate the detailed mechanics of particle motion in a near-bed boundary layer, which in turn, can provide boundary conditions used for modeling sediment transport in a variety of applications.

  7. High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques.

    PubMed

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras(1). The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in(1-5)). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame(6,7). Binary defocusing DFP methods can achieve even greater speeds(8). Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis(9), facial animation(10), cardiac mechanics studies(11), and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674
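
    The depth computation in a DFP system of this kind typically starts from three fringe images with a 120° phase shift, whose wrapped phase follows from the standard three-step formula; after unwrapping and calibration, depth is proportional to the phase difference from a reference plane. A minimal sketch of the phase-retrieval step on synthetic fringes (not the authors' code):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by -120, 0, and +120 degrees."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic test: build three shifted fringe patterns and recover the phase.
h, w = 4, 256
x = np.linspace(0, 4 * np.pi, w)
phi_true = np.tile(x, (h, 1))                       # true phase ramp
delta = 2.0 * np.pi / 3.0
i1 = 0.5 + 0.5 * np.cos(phi_true - delta)
i2 = 0.5 + 0.5 * np.cos(phi_true)
i3 = 0.5 + 0.5 * np.cos(phi_true + delta)

phi_wrapped = three_step_phase(i1, i2, i3)
phi_unwrapped = np.unwrap(phi_wrapped, axis=1)      # row-wise unwrapping
# After calibration, depth is obtained from the difference between this phase
# and the phase of a flat reference plane.
```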

  8. High-speed video analysis of wing-snapping in two manakin clades (Pipridae: Aves).

    PubMed

    Bostwick, Kimberly S; Prum, Richard O

    2003-10-01

    Basic kinematic and detailed physical mechanisms of avian, non-vocal sound production are both unknown. Here, for the first time, field-generated high-speed video recordings and acoustic analyses are used to test numerous competing hypotheses of the kinematics underlying sonations, or non-vocal communicative sounds, produced by two genera of Pipridae, Manacus and Pipra (Aves). Eleven behaviorally and acoustically distinct sonations are characterized, five of which fall into a specific acoustic class of relatively loud, brief, broad-frequency sound pulses, or snaps. The hypothesis that one kinematic mechanism of snap production is used within and between birds in general, and manakins specifically, is rejected. Instead, it is verified that three of four competing hypotheses of the kinematic mechanisms used for producing snaps, namely: (1). above-the-back wing-against-wing claps, (2). wing-against-body claps and (3). wing-into-air flicks, are employed between these two clades, and a fourth mechanism, (4). wing-against-tail feather interactions, is discovered. The kinematic mechanisms used to produce snaps are invariable within each identified sonation, despite the fact that a diversity of kinematic mechanisms are used among sonations. The other six sonations described are produced by kinematic mechanisms distinct from those used to create snaps, but are difficult to distinguish from each other and from the kinematics of flight. These results provide the first detailed kinematic information on mechanisms of sonation in birds in general, and the Pipridae specifically. Further, these results provide the first evidence that acoustically similar avian sonations, such as brief, broad frequency snaps, can be produced by diverse kinematic means, both among and within species. The use of high-speed video recordings in the field in a comparative manner documents the diversity of kinematic mechanisms used to sonate, and uncovers a hidden, sexually selected radiation of

  9. Laboratory calibration and characterization of video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1990-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.
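
    The lens distortion referred to above is usually modeled as a radial polynomial about the principal point. A small sketch of the common two-coefficient model; the coefficients and coordinates are hypothetical stand-ins for values a calibration-plate or plumb-line fit would produce:

```python
import numpy as np

def apply_radial_distortion(x, y, k1, k2, cx=0.0, cy=0.0):
    """Map undistorted image coordinates to distorted ones using a
    two-term radial model: r_d = r * (1 + k1*r**2 + k2*r**4).
    Coordinates are measured from the principal point (cx, cy)."""
    xc, yc = x - cx, y - cy
    r2 = xc**2 + yc**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return cx + xc * scale, cy + yc * scale

# Hypothetical coefficients (image coordinates in mm from the principal point):
k1, k2 = -2.5e-4, 1.0e-7
x_u, y_u = 3.0, -2.0
x_d, y_d = apply_radial_distortion(x_u, y_u, k1, k2)
print(x_d, y_d)
```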

  10. Laboratory Calibration and Characterization of Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1989-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of non-perpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

  11. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  12. Direct observation of pH-induced coalescence of latex-stabilized bubbles using high-speed video imaging.

    PubMed

    Ata, Seher; Davis, Elizabeth S; Dupin, Damien; Armes, Steven P; Wanless, Erica J

    2010-06-01

    The coalescence of pairs of 2 mm air bubbles grown in a dilute electrolyte solution containing a lightly cross-linked 380 nm diameter PEGMA-stabilized poly(2-vinylpyridine) (P2VP) latex was monitored using a high-speed video camera. The air bubbles were highly stable at pH 10 when coated with this latex, although coalescence could be induced by increasing the bubble volume when in contact. Conversely, coalescence was rapid when the bubbles were equilibrated at pH 2, since the latex undergoes a latex-to-microgel transition and the swollen microgel particles are no longer adsorbed at the air-water interface. Rapid coalescence was also observed for latex-coated bubbles equilibrated at pH 10 and then abruptly adjusted to pH 2. Time-dependent postrupture oscillations in the projected surface area of coalescing P2VP-coated bubble pairs were studied using a high-speed video camera in order to reinvestigate the rapid acid-induced catastrophic foam collapse previously reported [Dupin, D.; et al. J. Mater. Chem. 2008, 18, 545]. At pH 10, the P2VP latex particles adsorbed at the surface of coalescing bubbles reduce the oscillation frequency significantly. This is attributed to a close-packed latex monolayer, which increases the bubble stiffness and hence restricts surface deformation. The swollen P2VP microgel particles that are formed in acid also affected the coalescence dynamics. It was concluded that there was a high concentration of swollen microgel at the air-water interface, which created a localized, viscous surface gel layer that inhibited at least the first period of the surface area oscillation. Close comparison between latex-coated bubbles at pH 10 and those coated with 66 μm spherical glass beads indicated that the former system exhibits more elastic behavior. This was attributed to the compressibility of the latex monolayer on the bubble surface during coalescence. A comparable elastic response was observed for similar sized titania particles, suggesting

  13. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  14. High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques

    PubMed Central

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-01-01

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras1. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in1-5). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame6,7. Binary defocusing DFP methods can achieve even greater speeds8. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis9, facial animation10, cardiac mechanics studies11, and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system. PMID:24326674

  15. New fully adaptive DPCM system for high-speed video applications

    NASA Astrophysics Data System (ADS)

    Thomas, Joseph R.

    1996-11-01

    This paper reports a new fully adaptive DPCM architecture targeted at meeting high-speed video source coding requirements. The presence of two feedback loops in the ADPCM computational scheme, viz., the feedback necessary for the prediction, and that necessary for adapting the prediction, inherently limits the sampling rate that can be supported. In order to systolize/pipeline these computations we delay both the adaptation and prediction computations so that the required algorithmic delays are now available in the feedback loops. We create a 2-slow simulator of the system, use a system clock that is twice as fast as the sample rate, retime register delays to systolize/pipeline the computations, and project the prediction and adaptation computations onto a common set of multiply-accumulate processor modules. This yields pipelined systolic arrays for both the coder and the decoder, using significantly reduced adaptation delay and minimal algorithmic modifications compared with recently proposed architectures, implying improved convergence behavior of the adaptive predictor and higher SNR in the received video frame at high sample rates, in addition to a hardware-efficient, reduced-area implementation.
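
    For readers unfamiliar with the two feedback loops being pipelined, the sketch below implements a plain scalar ADPCM coder whose predictor is adapted with a normalized LMS update inside the prediction loop. It illustrates only the underlying computation, not the systolic architecture of the paper; the predictor order, step size, and quantizer step are hypothetical.

```python
import numpy as np

def adaptive_dpcm_encode(x, order=3, mu=0.05, q_step=4.0):
    """Scalar ADPCM encoder: predict each sample from `order` past reconstructed
    samples, quantize the prediction error, and adapt the predictor with a
    normalized LMS update (the two feedback loops of the abstract)."""
    w = np.zeros(order)                 # predictor coefficients
    hist = np.zeros(order)              # past reconstructed samples
    codes, recon = [], []
    for sample in x:
        pred = w @ hist
        err = sample - pred
        code = int(np.round(err / q_step))          # quantized prediction error
        err_q = code * q_step
        y = pred + err_q                            # decoder-side reconstruction
        # NLMS adaptation using the quantized error (also available at the decoder)
        w += mu * err_q * hist / (hist @ hist + 1e-9)
        hist = np.roll(hist, 1)
        hist[0] = y
        codes.append(code)
        recon.append(y)
    return np.array(codes), np.array(recon)

# Hypothetical test signal: a slowly varying ramp plus a sinusoid.
n = np.arange(200)
x = 50.0 * np.sin(2 * np.pi * n / 40) + 0.2 * n
codes, recon = adaptive_dpcm_encode(x)
print("max reconstruction error:", np.max(np.abs(recon - x)))   # bounded by q_step/2
```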

  16. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible measurements off-site. The second approach is the generation of 3d video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted from video into 3d objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3d modelling and provides promising features for mobile mapping systems.

  17. Eulerian frequency analysis of structural vibrations from high-speed video

    NASA Astrophysics Data System (ADS)

    Venanzoni, Andrea; De Ryck, Laurent; Cuenca, Jacques

    2016-06-01

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale - or level - can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content
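
    The second tool described above essentially reduces to examining each pixel's gray-level time history and mapping its dominant temporal frequency. A bare-bones Eulerian sketch of that map, without the Laplacian-pyramid or angular-filter pre-processing; the frame rate and synthetic video are placeholders:

```python
import numpy as np

def dominant_frequency_map(frames, fps):
    """Per-pixel dominant frequency from a high-speed video.

    frames : ndarray of shape (n_frames, height, width), grayscale intensities.
    Returns a (height, width) map of the strongest non-DC frequency in Hz.
    """
    n = frames.shape[0]
    detrended = frames - frames.mean(axis=0, keepdims=True)    # remove DC per pixel
    spectrum = np.abs(np.fft.rfft(detrended, axis=0))
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    peak_bins = spectrum[1:].argmax(axis=0) + 1                # skip the DC bin
    return freqs[peak_bins]

# Hypothetical example: 2000 fps video of a structure vibrating mostly at 120 Hz.
fps = 2000
t = np.arange(512) / fps
frames = 0.5 + 0.1 * np.sin(2 * np.pi * 120 * t)[:, None, None] * np.ones((1, 32, 32))
freq_map = dominant_frequency_map(frames, fps)
print(freq_map[0, 0])   # ~120 Hz, within FFT bin resolution
```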

  18. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  19. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution by a factor of several, or even hundreds, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera.

  20. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution by a factor of several, or even hundreds, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023

  1. Determining aerodynamic coefficients from high speed video of a free-flying model in a shock tunnel

    NASA Astrophysics Data System (ADS)

    Neely, Andrew J.; West, Ivan; Hruschka, Robert; Park, Gisu; Mudford, Neil R.

    2008-11-01

    This paper describes the application of the free flight technique to determine the aerodynamic coefficients of a model for the flow conditions produced in a shock tunnel. Sting-based force measurement techniques either lack the required temporal response or are restricted to large complex models. Additionally the free flight technique removes the flow interference produced by the sting that is present for these other techniques. Shock tunnel test flows present two major challenges to the practical implementation of the free flight technique. These are the millisecond-order duration of the test flows and the spatial and temporal nonuniformity of these flows. These challenges are overcome by the combination of an ultra-high speed digital video camera to record the trajectory, with spatial and temporal mapping of the test flow conditions. Use of a lightweight model ensures sufficient motion during the test time. The technique is demonstrated using the simple case of drag measurement on a spherical model, free flown in a Mach 10 shock tunnel condition.
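
    For a sphere that barely accelerates relative to the freestream during the millisecond-scale test time, the drag coefficient follows from the measured drag acceleration via C_d = 2·m·a / (ρ·u²·A). A rough sketch of extracting that from a tracked displacement history; the flow condition, model properties, and synthetic trajectory are hypothetical, not the paper's values:

```python
import numpy as np

def drag_coefficient(times_s, displacement_m, m_kg, rho, u_inf, diameter_m):
    """Estimate the drag coefficient of a free-flying sphere in a shock tunnel.

    A parabola is fitted to the streamwise displacement history from the
    high-speed video; its curvature gives the (nearly constant) acceleration,
    and C_d = 2*m*a / (rho * u**2 * A) because the model barely accelerates
    relative to the freestream during the ~ms test time.
    """
    coeffs = np.polyfit(times_s, displacement_m, 2)
    accel = 2.0 * coeffs[0]
    area = np.pi * (diameter_m / 2.0) ** 2
    return 2.0 * m_kg * accel / (rho * u_inf**2 * area)

# Hypothetical hypersonic condition and model (not the paper's values):
rho, u_inf = 0.01, 3000.0          # freestream density (kg/m^3) and velocity (m/s)
m, d = 2.0e-3, 0.02                # 2 g sphere, 20 mm diameter
t = np.linspace(0, 1.5e-3, 30)     # frame times over a ~1.5 ms test window
x = 0.5 * 6400.0 * t**2            # synthetic displacement for ~6400 m/s^2 drag acceleration
print(drag_coefficient(t, x, m, rho, u_inf, d))   # ~0.9 for this synthetic case
```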

  2. High-Speed Video-Oculography for Measuring Three-Dimensional Rotation Vectors of Eye Movements in Mice

    PubMed Central

    Takeda, Noriaki; Uno, Atsuhiko; Inohara, Hidenori; Shimada, Shoichi

    2016-01-01

    Background The mouse is the most commonly used animal model in biomedical research because of recent advances in molecular genetic techniques. Studies related to eye movement in mice are common in fields such as ophthalmology relating to vision, neuro-otology relating to the vestibulo-ocular reflex (VOR), neurology relating to the cerebellum's role in movement, and psychology relating to attention. Recording eye movements in mice, however, is technically difficult. Methods We developed a new algorithm for analyzing the three-dimensional (3D) rotation vector of eye movement in mice using high-speed video-oculography (VOG). The algorithm made it possible to analyze the gain and phase of VOR using the eye's angular velocity around the axis of eye rotation. Results When mice were rotated at 0.5 Hz and 2.5 Hz around the earth's vertical axis with their heads in a 30° nose-down position, the vertical components of their left eye movements were in phase with the horizontal components. The VOR gain was 0.42 at 0.5 Hz and 0.74 at 2.5 Hz, and the phase lead of the eye movement against the turntable was 16.1° at 0.5 Hz and 4.88° at 2.5 Hz. Conclusions To the best of our knowledge, this is the first report of this algorithm being used to calculate a 3D rotation vector of eye movement in mice using high-speed VOG. We developed a technique for analyzing the 3D rotation vector of eye movements in mice with a high-speed infrared CCD camera. We concluded that the technique is suitable for analyzing eye movements in mice. With this article, we also include C++ source code that calculates the 3D rotation vectors of the eye position from the two-dimensional coordinates of the pupil and the iris freckle in the image. PMID:27023859

  3. High-speed video observations of the fine structure of a natural negative stepped leader at close distance

    NASA Astrophysics Data System (ADS)

    Qi, Qi; Lu, Weitao; Ma, Ying; Chen, Luwen; Zhang, Yijun; Rakov, Vladimir A.

    2016-09-01

    We present new high-speed video observations of a natural downward negative lightning flash that occurred at a close distance of 350 m. The stepped leader of this flash was imaged by three high-speed video cameras operating at framing rates of 1000, 10,000 and 50,000 frames per second, respectively. Synchronized electromagnetic field records were also obtained. Nine pronounced field pulses which we attributed to individual leader steps were recorded. The time intervals between the step pulses ranged from 13.9 to 23.9 μs, with a mean value of 17.4 μs. Further, for the first time, smaller pulses were observed between the pronounced step pulses in the magnetic field derivative records. Time intervals between the smaller pulses (indicative of intermittent processes between steps) ranged from 0.9 to 5.5 μs, with a mean of 2.2 μs and a standard deviation of 0.82 μs. A total of 23 luminous segments, commonly attributed to space stems/leaders, were captured. Their two-dimensional lengths varied from 1 to 13 m, with a mean of 5 m. The distances between the luminous segments and the existing leader channels ranged from 1 to 8 m, with a mean value of 4 m. Three possible scenarios of the evolution of space stems/leaders located beside the leader channel have been inferred: (A) the space stem/leader fails to make connection to the leader channel; (B) the space stem/leader connects to the existing leader channel, but may die off and be followed, tens of microseconds later, by a low luminosity streamer; (C) the space stem/leader connects to the existing channel and launches an upward-propagating luminosity wave. Weakly luminous filamentary structures, which we interpreted as corona streamers, were observed emanating from the leader tip. The stepped leader branches extended downward with speeds ranging from 4.1 × 10⁵ to 14.6 × 10⁵ m s⁻¹.

  4. High speed video shooting with continuous-wave laser illumination in laboratory modeling of wind - wave interaction

    NASA Astrophysics Data System (ADS)

    Kandaurov, Alexander; Troitskaya, Yuliya; Caulliez, Guillemette; Sergeev, Daniil; Vdovin, Maxim

    2014-05-01

    Three examples of the use of high-speed video filming in the investigation of wind-wave interaction under laboratory conditions are described. Experiments were carried out at the wind-wave stratified flume of IAP RAS (length 10 m, cross section of air channel 0.4 x 0.4 m, wind velocity up to 24 m/s) and at the Large Air-Sea Interaction Facility (LASIF) - MIO/Luminy (length 40 m, cross section of air channel 3.2 x 1.6 m, wind velocity up to 10 m/s). A combination of PIV measurements, optical measurements of the water surface form, and wave gages was used for detailed investigation of the characteristics of the wind flow over the water surface. The modified PIV method is based on continuous-wave (CW) laser illumination of the airflow seeded by particles and high-speed video. During the experiments on the wind-wave stratified flume of IAP RAS, a green (532 nm) CW laser with 1.5 W output power was used as the source for the light sheet. A high-speed digital camera (Videosprint VS-Fast) was used to record the seeded airflow at a frame rate of 2000 Hz. The airflow velocity field was retrieved by processing the PIV images with an adaptive cross-correlation method on a curvilinear grid following the surface wave profile. The mean wind velocity profiles were retrieved using conditional in-phase averaging as in [1]. In the experiments at the LASIF a more powerful argon laser (4 W, CW) was used, as well as a high-speed camera with higher sensitivity and resolution: Optronics Camrecord CR3000x2, frame rate 3571 Hz, frame size 259×1696 px. In both series of experiments, spherical 0.02 mm polyamide particles with an inertial time of 7 ms were used for seeding the airflow. A new particle seeding system based on compressed air is capable of injecting 2 g of particles per second for 1.3 - 2.4 s without disturbing the flow. Used in the LASIF, this system provided a high particle density in the PIV images. In combination with the high-resolution camera it allowed us to obtain momentum fluxes directly from
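
    The PIV step described above reduces to cross-correlating small interrogation windows from consecutive frames and converting the correlation-peak offset into a velocity. A bare-bones sketch (single window, no sub-pixel peak fit or curvilinear grid); the frame rate and pixel scale are hypothetical:

```python
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Pixel displacement of the pattern in window B relative to window A,
    found as the peak of the FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")      # cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak[0] - (win_a.shape[0] - 1)                    # rows (down positive)
    dx = peak[1] - (win_a.shape[1] - 1)                    # columns (right positive)
    return dx, dy

# Hypothetical conversion to velocity: 2000 Hz framing and 0.1 mm per pixel.
dt = 1.0 / 2000.0
m_per_px = 1.0e-4

# Synthetic check: the same random particle pattern displaced by dx=+3, dy=+1 pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
win_a = img[8:40, 8:40]
win_b = img[7:39, 5:37]
dx, dy = window_displacement(win_a, win_b)
u, v = dx * m_per_px / dt, dy * m_per_px / dt
print(dx, dy, u, v)    # -> 3 1 0.6 0.2  (pixels, pixels, m/s, m/s)
```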

  5. High-speed video cinematographic demonstration of stalk and zooid contraction of Vorticella convallaria.

    PubMed Central

    Moriyama, Y; Hiyama, S; Asai, H

    1998-01-01

    Stalk contraction and zooid contraction of living Vorticella convallaria were studied by high-speed video cinematography. Contraction was monitored at a speed of 9000 frames per second to study the contractile process in detail. Complete stalk contraction required approximately 9 ms. The maximal contraction velocity, 8.8 cm/s, was observed 2 ms after the start of contraction. We found that a twist appeared in the zooid during contraction. As this twist unwound, the zooid began to rotate like a right-handed screw. The subsequent stalk contraction steps, the behavior of which was similar to that of a damped harmonic oscillator, were analyzed by means of the equation of motion. From the beginning of stalk contraction, the Hookean force constant increased, and reached an upper limit of 2.23 × 10⁻⁴ N/m 2-3 ms after the start of contraction. Thus, within 2 ms, the contraction signal spread to the entire stalk, allowing the stalk to generate the full force of contraction. The tension of an extended stalk was estimated to be 5.58 × 10⁻⁸ N from the Hookean force constant of a stalk. This value coincides with that of the isometric tension of a glycerol-treated V. convallaria, confirming that the contractile system of V. convallaria is well preserved despite glycerol treatment. PMID:9449349
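
    The damped-harmonic-oscillator analysis mentioned above can be outlined by fitting x(t) = A·exp(-γt)·cos(ωt + φ) + c to the tracked displacement and converting the fit to a Hookean constant via k = m(ω² + γ²) for an equivalent mass m. The sketch below fits synthetic data only; the mass and all fitted values are hypothetical, not those reported for V. convallaria.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_osc(t, amp, gamma, omega, phi, offset):
    """Displacement of a damped harmonic oscillator."""
    return amp * np.exp(-gamma * t) * np.cos(omega * t + phi) + offset

# Synthetic "zooid position" sampled at 9000 frames per second (hypothetical values).
fs = 9000.0
t = np.arange(0, 0.009, 1.0 / fs)               # ~9 ms of motion
true = dict(amp=20e-6, gamma=600.0, omega=2 * np.pi * 500.0, phi=0.0, offset=0.0)
x = damped_osc(t, **true) + np.random.default_rng(1).normal(0, 0.3e-6, t.size)

# Initial guesses would normally come from inspecting the tracked trace.
p0 = (18e-6, 500.0, 2 * np.pi * 480.0, 0.0, 0.0)
popt, _ = curve_fit(damped_osc, t, x, p0=p0)
amp, gamma, omega, phi, offset = popt

m_equiv = 1.0e-10                               # hypothetical equivalent mass, kg
k = m_equiv * (omega**2 + gamma**2)             # Hookean force constant of the model
print(f"omega ~ {omega:.0f} rad/s, gamma ~ {gamma:.0f} 1/s, k ~ {k:.2e} N/m")
```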

  6. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy

    NASA Astrophysics Data System (ADS)

    Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.

    2011-02-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second.

  7. Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy.

    PubMed

    Akl, Tony J; Nepiyushchikh, Zhanna V; Gashev, Anatoliy A; Zawieja, David C; Cot, Gerard L

    2011-02-01

    Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second.

  8. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and colour filter array (CFA) demosaicing. The proposed hardware architecture includes the design of the charge-coupled device (CCD) drive circuits, image processing circuits, and power supply circuits. The CCD drive circuits convert the TTL (transistor-transistor logic) timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor can output analog image signals. The image processing circuits convert the analog signals into digital signals for subsequent processing; the TTL timing generation, auto-exposure control, CFA demosaicing, and gamma correction are all accomplished in this module. The power supply circuits provide power for the whole system, which is very important for image quality: power supply noise directly degrades image quality, and we reduce it effectively by hardware means. In this system the CCD is a KAI-0340, which can output 210 full-resolution frames per second, and our camera works well in this mode. Because traditional auto-exposure control algorithms are too slow to reach a proper exposure level, a fast auto-exposure control method is needed, and we present a new auto-exposure algorithm suited to a high-speed camera. Colour demosaicing is critical for digital cameras because it converts the Bayer sensor mosaic output into a full-colour image, which determines the output image quality of the camera. Complex algorithms achieve high quality but cannot be implemented in hardware, so a low-complexity demosaicing method that can be implemented in hardware and still satisfies the quality requirements is presented. Experimental results are given at the end of the paper.
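
    The abstract gives no details of the fast auto-exposure algorithm itself; the sketch below shows one common single-step approach, in which the integration time is rescaled by the ratio of the target to the measured mean grey level (all parameter values are assumptions, and the frame is taken to be a NumPy image array).

```python
def update_exposure(current_exposure_us, frame, target_mean=110.0,
                    min_exposure_us=3.3, max_exposure_us=1666.0):
    """One-step proportional auto-exposure update for a high-frame-rate camera.

    Scales the integration time by the ratio of the desired mean grey level to
    the measured one, so a badly exposed frame is corrected in roughly one step
    instead of through many small increments.
    """
    measured_mean = float(frame.mean()) + 1e-6          # avoid division by zero
    new_exposure = current_exposure_us * target_mean / measured_mean
    return max(min_exposure_us, min(max_exposure_us, new_exposure))
```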

  9. High-speed camera observation of multi-component droplet coagulation in an ultrasonic standing wave field

    NASA Astrophysics Data System (ADS)

    Reißenweber, Marina; Krempel, Sandro; Lindner, Gerhard

    2013-12-01

    With an acoustic levitator, small particles can be aggregated near the nodes of a standing pressure field. Furthermore, it is possible to atomize liquids on a vibrating surface. We used a combination of both mechanisms and atomized several liquids simultaneously, consecutively, and as emulsions into the ultrasonic field. Using a high-speed camera we observed the coagulation of the spray droplets into single large levitated droplets, resolved in space and time. In the case of consecutive atomization of two components, the spray droplets of the second component were deposited on the surface of the previously coagulated droplet of the first component without mixing.

  10. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    NASA Astrophysics Data System (ADS)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering a nearly 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D location of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
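
    In point-charge/multi-dipole leader models of this kind, each increment of charge deposited along the channel contributes a field change at a ground-based sensor through the standard image-charge expression. The sketch below illustrates that superposition; the function and its inputs are hypothetical, and sign conventions differ between studies.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def field_change_at_ground(charges_C, charge_xyz_m, sensor_xy_m):
    """Vertical electric-field change at a ground-based sensor produced by point
    charges deposited along a leader channel above a perfectly conducting ground.

    Each charge q at height h and horizontal distance d contributes
    dE = 2*q*h / (4*pi*eps0 * (h**2 + d**2)**1.5), the familiar image-charge
    expression used in multi-point-charge / multi-dipole leader models.
    """
    x0, y0 = sensor_xy_m
    dE = 0.0
    for q, (x, y, h) in zip(charges_C, charge_xyz_m):
        d2 = (x - x0) ** 2 + (y - y0) ** 2
        dE += 2.0 * q * h / (4.0 * np.pi * EPS0 * (h * h + d2) ** 1.5)
    return dE
```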

  11. Optimizing the input and output transmission lines that gate the microchannel plate in a high-speed framing camera

    NASA Astrophysics Data System (ADS)

    Lugten, John B.; Brown, Charles G.; Piston, Kenneth W.; Beeman, Bart V.; Allen, Fred V.; Boyle, Dustin T.; Brown, Christopher G.; Cruz, Jason G.; Kittle, Douglas R.; Lumbard, Alexander A.; Torres, Peter; Hargrove, Dana R.; Benedetti, Laura R.; Bell, Perry M.

    2015-08-01

    We present new designs for the launch and receiver boards used in a high speed x-ray framing camera at the National Ignition Facility. The new launch board uses a Klopfenstein taper to match the 50 ohm input impedance to the ~10 ohm microchannel plate. The new receiver board incorporates design changes resulting in an output monitor pulse shape that more accurately represents the pulse shape at the input and across the microchannel plate; this is valuable for assessing and monitoring the electrical performance of the assembled framing camera head. The launch and receiver boards maximize power coupling to the microchannel plate, minimize cross talk between channels, and minimize reflections. We discuss some of the design tradeoffs we explored, and present modeling results and measured performance. We also present our methods for dealing with the non-ideal behavior of coupling capacitors and terminating resistors. We compare the performance of these new designs to that of some earlier designs.
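
    For context, the Klopfenstein taper mentioned above has a closed-form impedance profile (see, e.g., Pozar's treatment). The sketch below evaluates that textbook profile for a 50 ohm to ~10 ohm match; the ripple level and everything else are assumptions, and this is not the board layout described in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1

def klopfenstein_impedance_profile(z_over_L, Z0=50.0, ZL=10.0, gamma_max=0.05):
    """Characteristic-impedance profile Z(z) of a Klopfenstein taper matching
    Z0 to ZL with an in-band reflection ripple no larger than gamma_max.

    Standard textbook form:
        ln Z(z) = 0.5*ln(Z0*ZL) + (G0/cosh A) * A^2 * phi(2z/L - 1, A)
        phi(x, A) = integral_0^x I1(A*sqrt(1-y^2)) / (A*sqrt(1-y^2)) dy
        G0 = 0.5*ln(ZL/Z0),  A = arccosh(|G0|/gamma_max)
    """
    G0 = 0.5 * np.log(ZL / Z0)
    A = np.arccosh(abs(G0) / gamma_max)

    def phi(x):
        def integrand(y):
            s = A * np.sqrt(max(1.0 - y * y, 0.0))
            return 0.5 if s < 1e-12 else i1(s) / s     # I1(s)/s -> 1/2 as s -> 0
        val, _ = quad(integrand, 0.0, x)
        return val

    x = 2.0 * z_over_L - 1.0                            # normalised position in [-1, 1]
    lnZ = 0.5 * np.log(Z0 * ZL) + (G0 / np.cosh(A)) * A * A * phi(x)
    return np.exp(lnZ)

# Impedance (ohms) at a few points along the taper, from the 50-ohm launch end
# to the low-impedance microchannel-plate end:
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(frac, round(klopfenstein_impedance_profile(frac), 2))
```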

  12. High-speed 1280x1024 camera with 12-Gbyte SDRAM memory

    NASA Astrophysics Data System (ADS)

    Postnikov, Konstantin O.; Yakovlev, Alexey V.

    2001-04-01

    A 600 frame/s camera based on a 1.3-megapixel CMOS sensor (PBMV13) with a wide digital data output bus (10 parallel outputs of 10-bit words) was developed using high-capacity SDRAM memory. This architecture allows 10 seconds of continuous recording of digital data from the sensor at 600 frames per second to a memory box with up to twelve 1-Gbyte SDRAM modules. Acquired data are transmitted through a fibre-optic channel, connected to the camera via an FPDP interface, to a PC-type computer at a speed of 100 Mbyte per second over fibre cable lengths of up to 10 km. All camera settings, such as shutter time, frame rate, image size, and presets for changing integration time and frame rate, can be controlled by software. Camera specifications: shutter time - from 3.3 us to full frame, in 1.6 us steps at 600 fps and then in 1-frame steps down to 16 ms; frame rate - from 60 fps to 600 fps; image size - 1280x1024, 1280x512, 1280x256, or 1280x128, changeable on the fly via a preset two-step table; memory capacity - depends on frame size (6000 frames at 1280x1024 or 48000 frames at 1280x128 resolution). The program can work with monochrome or color versions of the MV13 sensor.
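
    As a rough, illustrative consistency check (not a calculation from the paper), the quoted memory depth can be related to the frame size and bit depth as follows:

```python
# Rough consistency check of the recording capacity quoted above (an
# illustrative calculation, not taken from the paper): at full resolution the
# sensor delivers 1280 x 1024 pixels of 10-bit data per frame.

pixels_per_frame = 1280 * 1024
bits_per_pixel = 10
bytes_per_frame = pixels_per_frame * bits_per_pixel / 8        # ~1.64 MB if packed

memory_bytes = 12 * 2**30                                       # 12 GB of SDRAM
frames_stored = memory_bytes // bytes_per_frame
seconds_at_600fps = frames_stored / 600.0

print(f"{bytes_per_frame/1e6:.2f} MB/frame, "
      f"{int(frames_stored)} frames, {seconds_at_600fps:.1f} s at 600 fps")
# Packed 10-bit storage gives ~7800 frames (~13 s); storing each pixel in
# 16 bits gives ~4900 frames (~8 s), bracketing the ~6000 frames / 10 s quoted.
```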

  13. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination.

    PubMed

    Bodini, I; Sansoni, G; Lancini, M; Pasinetti, S; Docchio, F

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials for improved lifetime and performance, which are able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials, in a laboratory test bench. Measurements of the 3D macro-topography and of the angular position of the specimen are performed simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very small camera exposure time makes it possible to obtain blur-free images with excellent definition. The system will be described with the aid of end-cycle specimens, as well as of in-test specimens. PMID:27587125

  14. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination.

    PubMed

    Bodini, I; Sansoni, G; Lancini, M; Pasinetti, S; Docchio, F

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials for improved lifetime and performance, which are able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials, in a laboratory test bench. Measurements of the 3D macro-topography and of the angular position of the specimen are performed simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very small camera exposure time makes it possible to obtain blur-free images with excellent definition. The system will be described with the aid of end-cycle specimens, as well as of in-test specimens.

  15. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination

    NASA Astrophysics Data System (ADS)

    Bodini, I.; Sansoni, G.; Lancini, M.; Pasinetti, S.; Docchio, F.

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important to produce wheels and rails of new materials for improved lifetime and performance, which are able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials, in a laboratory test bench. Measurements of the 3D macro-topography and of the angular position of the specimen are performed simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very small camera exposure time makes it possible to obtain blur-free images with excellent definition. The system will be described with the aid of end-cycle specimens, as well as of in-test specimens.

  16. Two-dimensional thermal analysis for freezing of plant and animal cells by high-speed microscopic IR camera

    NASA Astrophysics Data System (ADS)

    Morikawa, Junko; Hashimoto, Toshimasa; Hayakawa, Eita; Uemura, Hideyuki

    2003-04-01

    Using a high-speed IR camera as a temperature sensor is a powerful new tool for thermal analysis of cell-scale biomaterials. In this study, we propose a new type of two-dimensional thermal analysis by means of a high-speed IR camera with a microscopic lens, and apply it to the analysis of freezing of plant and animal cells. The latent heat released on the freezing of supercooled onion epidermal cells was observed cell by cell, with individual cells freezing at random, under a cooling rate of 80 °C/min and with a spatial resolution of 7.5 μm. The freezing front of ice formation and the thermal diffusion of the generated latent heat were analyzed. As a result, it was possible to determine simultaneously the intercellular/intracellular temperature distribution, the growth speed of the freezing front in a single cell, and the thermal diffusion during the freezing of living tissue. The new measuring system presented here should prove valuable for studying transient processes in biomaterials. A newly developed temperature wave method for the measurement of in-plane thermal diffusivity was also applied to the cell systems.

  17. A study on ice crystal formation behavior at intracellular freezing of plant cells using a high-speed camera.

    PubMed

    Ninagawa, Takako; Eguchi, Akemi; Kawamura, Yukio; Konishi, Tadashi; Narumi, Akira

    2016-08-01

    Intracellular ice crystal formation (IIF) causes several problems to cryopreservation, and it is the key to developing improved cryopreservation techniques that can ensure the long-term preservation of living tissues. Therefore, the ability to capture clear intracellular freezing images is important for understanding both the occurrence and the IIF behavior. The authors developed a new cryomicroscopic system that was equipped with a high-speed camera for this study and successfully used this to capture clearer images of the IIF process in the epidermal tissues of strawberry geranium (Saxifraga stolonifera Curtis) leaves. This system was then used to examine patterns in the location and formation of intracellular ice crystals and to evaluate the degree of cell deformation because of ice crystals inside the cell and the growing rate and grain size of intracellular ice crystals at various cooling rates. The results showed that an increase in cooling rate influenced the formation pattern of intracellular ice crystals but had less of an effect on their location. Moreover, it reduced the degree of supercooling at the onset of intracellular freezing and the degree of cell deformation; the characteristic grain size of intracellular ice crystals was also reduced, but the growing rate of intracellular ice crystals was increased. Thus, the high-speed camera images could expose these changes in IIF behaviors with an increase in the cooling rate, and these are believed to have been caused by an increase in the degree of supercooling.

  18. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect and the lens distortion does influence the accuracy of the measurement result, which is often overlooked in the existing real-time 3-D shape measurement systems. To this end, here we present a novel high-speed real-time 3-D coordinates measuring technique based on fringe projection with the consideration of the camera lens distortion. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained firstly and the acquisition for the two corresponding in-plane coordinates follows on the basis of the solved height. Besides, a method of lookup table (LUT) is introduced as well for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates has been reduced by one order of magnitude and the accuracy of the out-of-plane coordinate has been tripled after the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
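
    The abstract does not give the distortion model or calibration details; the sketch below illustrates the general idea of precomputing a pixel-mapping look-up table once, here with the common two-term radial distortion model, so that each incoming fringe image can be corrected by a cheap remapping step (the function, coefficients, and the cv2.remap usage note are illustrative assumptions).

```python
import numpy as np

def build_undistortion_lut(width, height, fx, fy, cx, cy, k1, k2):
    """Precompute, once, the map from corrected (undistorted) pixel coordinates
    to the distorted source coordinates, so that each fringe image can be
    corrected at run time by a simple look-up instead of per-pixel arithmetic.

    Uses the common two-term radial distortion model:
        x_d = x * (1 + k1*r^2 + k2*r^4),   y_d likewise.
    """
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    x = (u - cx) / fx                     # normalised camera coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    map_x = (x * scale) * fx + cx         # distorted source column for each pixel
    map_y = (y * scale) * fy + cy         # distorted source row
    return map_x.astype(np.float32), map_y.astype(np.float32)

# At run time the stored maps are applied to every frame, e.g. with
# cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR), keeping the per-frame cost low.
```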

  19. A study on ice crystal formation behavior at intracellular freezing of plant cells using a high-speed camera.

    PubMed

    Ninagawa, Takako; Eguchi, Akemi; Kawamura, Yukio; Konishi, Tadashi; Narumi, Akira

    2016-08-01

    Intracellular ice crystal formation (IIF) causes several problems to cryopreservation, and it is the key to developing improved cryopreservation techniques that can ensure the long-term preservation of living tissues. Therefore, the ability to capture clear intracellular freezing images is important for understanding both the occurrence and the IIF behavior. The authors developed a new cryomicroscopic system that was equipped with a high-speed camera for this study and successfully used this to capture clearer images of the IIF process in the epidermal tissues of strawberry geranium (Saxifraga stolonifera Curtis) leaves. This system was then used to examine patterns in the location and formation of intracellular ice crystals and to evaluate the degree of cell deformation because of ice crystals inside the cell and the growing rate and grain size of intracellular ice crystals at various cooling rates. The results showed that an increase in cooling rate influenced the formation pattern of intracellular ice crystals but had less of an effect on their location. Moreover, it reduced the degree of supercooling at the onset of intracellular freezing and the degree of cell deformation; the characteristic grain size of intracellular ice crystals was also reduced, but the growing rate of intracellular ice crystals was increased. Thus, the high-speed camera images could expose these changes in IIF behaviors with an increase in the cooling rate, and these are believed to have been caused by an increase in the degree of supercooling. PMID:27343136

  20. Clinically evaluated procedure for the reconstruction of vocal fold vibrations from endoscopic digital high-speed videos.

    PubMed

    Lohscheller, Jörg; Toy, Hikmet; Rosanowski, Frank; Eysholdt, Ulrich; Döllinger, Michael

    2007-08-01

    Investigation of voice disorders requires the examination of vocal fold vibrations. State of the art is the recording of endoscopic high-speed movies which capture vocal fold vibrations in real-time. It enables investigating the interrelation between disturbances of vocal fold vibrations and voice disorders. However, the lack of clinical studies and of a standardized procedure to reconstruct vocal fold vibrations from high-speed videos constrains the clinical acceptance of the high-speed technique. An image processing approach is presented that extracts the vibrating vocal fold edges from digital high-speed movies. The initial segmentation is principally based on a seeded region-growing algorithm. Even in movies with low image quality, the algorithm successfully segments the glottal area by means of an introduced two-dimensional threshold matrix. Following segmentation, the vocal fold edges are reconstructed from the computed time-varying glottal area. The performance of the procedure was objectively evaluated within a study comprising 372 high-speed recordings. The accuracy of vocal fold reconstruction exceeds manual segmentation results obtained by clinical experts. The algorithm reaches an information flow-rate of up to 98 images per second. The robustness and high accuracy of the procedure make it suitable for application in clinical routine. It enables an objective and highly accurate description of vocal fold vibrations, which is essential for extensive clinical studies that focus on the classification of voice disorders.
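
    The published procedure adds a two-dimensional threshold matrix and further processing that the abstract does not detail; the sketch below shows only a plain seeded region-growing step of the kind the segmentation is based on (names, the 4-connectivity choice, and the threshold are assumptions).

```python
import numpy as np
from collections import deque

def seeded_region_growing(image, seed, threshold=15):
    """Segment the dark glottal area around a seed pixel by growing the region
    to 4-connected neighbours whose grey value stays close to the seed value.

    image     : 2D uint8/float array (one video frame).
    seed      : (row, col) inside the glottis.
    threshold : maximum allowed grey-level difference from the seed value.
    """
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - seed_val) <= threshold:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask   # boolean glottal mask; its per-frame sum gives the glottal area waveform
```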

  1. Investigations of some aspects of the spray process in a single wire arc plasma spray system using high speed camera.

    PubMed

    Tiwari, N; Sahasrabudhe, S N; Tak, A K; Barve, D N; Das, A K

    2012-02-01

    A high speed camera has been used to record and analyze the evolution of the spray process as well as particle behavior in a single wire arc plasma spray torch. Commercially available systems (SprayWatch, DPV 2000, etc.) focus onto a small area in the spray jet; they are not designed for tracking a single particle from the torch to the substrate. Using the high speed camera, individual particles were tracked and their velocities were measured at various distances from the spray torch. Particle velocity information at different distances from the nozzle of the torch is very important for deciding the correct substrate position to obtain a good quality coating. The analysis of the images has revealed the details of the process of arc attachment to the wire, melting of the wire, and detachment of the molten mass from the tip. Images of the wire and the arc have been recorded for different wire feed rates, gas flow rates, and torch powers, to determine compatible wire feed rates. High speed imaging of particle trajectories has been used for particle velocity determination by the time-of-flight method. It was observed that the ripple in the power supply of the torch leads to a large variation of the instantaneous power fed to the torch. This affects the velocity of the spray particles generated at different times within one cycle of the ripple. It is shown that the velocity of a spray particle depends on the instantaneous torch power at the time of its generation; this correlation is established by experimental evidence in this paper. Once the particles leave the plasma jet, their forward speeds were found to be more or less invariant beyond 40 mm and up to 500 mm from the nozzle exit.
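
    The time-of-flight velocity estimate reduces to simple arithmetic once the frame rate and spatial calibration are known; the numbers below are placeholders for illustration, not values from the paper.

```python
# Illustrative time-of-flight velocity estimate from a high-speed image pair
# (all numbers below are assumptions, not values from the paper): a particle
# travels a measured pixel displacement between two frames recorded dt apart.

frame_rate = 20000.0                # frames per second (assumed)
frames_apart = 3                    # frames between the two particle positions
pixel_displacement = 85.0           # measured displacement in pixels (assumed)
mm_per_pixel = 0.05                 # spatial calibration of the imaging system (assumed)

dt = frames_apart / frame_rate                              # time of flight, s
velocity = pixel_displacement * mm_per_pixel / 1000 / dt    # m/s
print(f"particle velocity ~ {velocity:.1f} m/s")            # ~28 m/s for these numbers
```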

  2. An unmanned watching system using video cameras

    SciTech Connect

    Kaneda, K.; Nakamae, E.; Takahashi, E.; Yazawa, K.

    1990-04-01

    Techniques for detecting intruders at a remote location, such as a power plant or substation, or in an unmanned building at night, are significant in the field of unmanned watching systems. This article describes an unmanned watching system to detect trespassers in real time, applicable both indoors and outdoors, based on image processing. The main part of the proposed system consists of a video camera, an image processor and a microprocessor. Images are input from the video camera to the image processor every 1/60 second, and objects which enter the image are detected by measuring changes of intensity level in selected sensor areas. This article discusses the system configuration and the detection method. Experimental results under a range of environmental conditions are given.
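
    A minimal sketch of the kind of sensor-area intensity-change test described above is given below; the rectangle format, thresholds, and function name are assumptions rather than the authors' implementation, and frames are assumed to be NumPy grey-level arrays.

```python
import numpy as np

def intrusion_detected(current_frame, reference_frame, sensor_areas,
                       level_change=25, pixel_fraction=0.2):
    """Flag an intruder when the intensity in any predefined sensor area changes
    markedly with respect to a reference frame.

    sensor_areas   : list of (row0, row1, col0, col1) rectangles in the image.
    level_change   : grey-level difference treated as a significant change.
    pixel_fraction : fraction of a sensor area that must change to raise an alarm.
    """
    for r0, r1, c0, c1 in sensor_areas:
        cur = current_frame[r0:r1, c0:c1].astype(np.int16)
        ref = reference_frame[r0:r1, c0:c1].astype(np.int16)
        changed = np.abs(cur - ref) > level_change
        if changed.mean() > pixel_fraction:
            return True
    return False
```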

  3. The High-Speed and Wide-Field TORTORA Camera: description & results.

    NASA Astrophysics Data System (ADS)

    Greco, G.; Beskin, G.; Karpov, S.; Guarnieri, A.; Bartolini, C.; Bondar, S.; Piccioni, A.; Molinari, E.

    We present the description and the most significant results of the wide-field and ultra-fast TORTORA camera, devoted to the investigation of rapid changes in light intensity in phenomena that occur within extremely short periods of time and are randomly distributed over the sky. In particular, the ground-based TORTORA observations synchronized with the gamma-ray BAT telescope on board the Swift satellite made it possible to trace the optical burst time-structure of the Naked-Eye GRB 080319B with an unprecedented level of accuracy.

  4. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity-data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm, and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
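
    For orientation, indirect time-of-flight ranging of the kind described recovers distance from the phase delay of the modulated illumination; the sketch below shows the usual four-phase estimate (the sample ordering and sign convention differ between sensors, so this is an illustrative form rather than the SPADAS pipeline).

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def itof_distance(a0, a1, a2, a3, f_mod=25e6):
    """Distance from four-phase indirect time-of-flight samples.

    a0..a3 are the pixel charges integrated at 0, 90, 180 and 270 degrees of
    the illumination modulation; the phase delay of the returned light gives
        d = c * phase / (4 * pi * f_mod),
    with an unambiguous range of c / (2 * f_mod) (6 m at 25 MHz).
    """
    phase = np.arctan2(a3 - a1, a0 - a2)        # phase delay, radians
    phase = np.mod(phase, 2.0 * np.pi)          # fold into [0, 2*pi)
    return C * phase / (4.0 * np.pi * f_mod)
```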

  5. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  6. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the most dominant modes of a vibration signal, but may also pick up non-physical modes induced by an insufficient frame rate. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
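
    The non-physical modes referred to above are aliases of the true vibration frequency caused by the finite frame rate; the sketch below evaluates the standard sampling relation that predicts where such a false mode appears (a general illustration, not the paper's specific model).

```python
def apparent_frequency(true_freq_hz, frame_rate_hz):
    """Apparent (possibly aliased) frequency seen by a camera sampling at
    frame_rate_hz: the observed frequency is the distance of the true frequency
    to the nearest integer multiple of the frame rate.
    """
    n = round(true_freq_hz / frame_rate_hz)
    return abs(true_freq_hz - n * frame_rate_hz)

# Example: a 110 Hz vibration filmed at 120 frames per second shows up at 10 Hz.
print(apparent_frequency(110.0, 120.0))   # -> 10.0
```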

  7. An Investigation On The Problems Of The Intermittent High-Speed Camera Of 360 Frames/S

    NASA Astrophysics Data System (ADS)

    Zhihong, Rong

    1989-06-01

    This paper discusses several problems with the JX-300 intermittent synchronous high-speed camera developed by the Institute of Optics and Electronics (IOE), Academia Sinica, in 1985. It is shown that when the framing rate is no more than 120 frames/s, a relatively high reliability is obtained, owing to the low acceleration of the moving elements, the weak intermittent pulldown forces, the low-frequency vibration, etc. When the framing rate increases to over 200 frames/s, the photographic resolving power as well as the film-running reliability are reduced by the dramatic increase in vibration and pulldown force, in a manner similar to that in stationary photography, and this becomes worse when the framing rate reaches 300 frames/s. Therefore, careful choice of a claw mechanism capable of framing rates above 300 frames/s, together with a series of technical measures, is particularly important if the camera is to record a sharp image reliably; otherwise an intermittent camera can hardly reach a framing rate of 300 frames/s, and even if that rate is attained, the image quality is degraded and the mechanism wears rapidly because of the high vibration.

  8. Observation of crack tip vicinity by the high-speed camera and research on dynamic fracture toughness

    NASA Astrophysics Data System (ADS)

    Ichinose, Kensuke; Moriwaki, Fumitaka; Gomi, Kenji

    2002-11-01

    In this paper, the growth of the plastic region at the crack tip was visualized by stretcher strain and observed using a high-speed camera. The fracture toughness value was then calculated from the largest plastic region size. It was assumed that the static relation also holds under dynamic load, and a dynamic experiment was carried out on that basis. The dynamic load was measured using a piezoelectric load cell, which is largely insensitive to stress-wave effects, and the fracture toughness value was determined by the strain gauge method. For comparison, a static experiment complying with ASTM E399-90 was also carried out. Finally, the relationship between the fracture toughness calculated from the maximum load and that calculated from the largest plastic region size was investigated.

  9. Determining the TNT equivalence of gram-sized explosive charges using shock-wave shadowgraphy and high-speed video recording

    NASA Astrophysics Data System (ADS)

    Hargather, Michael

    2005-11-01

    Explosive materials are routinely characterized by their TNT equivalence. This can be determined by chemical composition calculations, measurements of shock wave overpressure, or measurements of the shock wave position vs. time. However, TNT equivalence is an imperfect criterion because it is only valid at a given radius from the explosion center (H. Kleine et al., Shock Waves 13(2):123-138, 2003). Here we use a large retroreflective shadowgraph system and a high-speed digital video camera to image the shock wave and record its location vs. time. Optical data obtained from different explosions can be combined to determine a characteristic shock wave x-t diagram, from which the overpressure and the TNT equivalent are determined at any radius. This method is applied to gram-sized triacetone triperoxide (TATP) charges. Such small charges can be used inexpensively and safely for explosives research.
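
    Once the shock radius-time history is extracted from the video, the overpressure at each radius follows from the shock speed through the ideal-air normal-shock relation; the sketch below illustrates that step, with ambient conditions assumed rather than taken from the experiment.

```python
import numpy as np

GAMMA = 1.4          # ratio of specific heats for air
A0 = 340.0           # ambient sound speed, m/s (assumed)
P0 = 101_325.0       # ambient pressure, Pa (assumed)

def overpressure_from_xt(radii_m, times_s):
    """Shock overpressure along an x-t trajectory measured from shadowgraph video.

    The shock speed is taken from the numerical derivative of radius vs. time,
    converted to a Mach number, and the ideal-air normal-shock relation
        dP/P0 = 2*gamma*(M^2 - 1) / (gamma + 1)
    gives the overpressure at each radius.
    """
    speed = np.gradient(np.asarray(radii_m), np.asarray(times_s))
    mach = np.clip(speed / A0, 1.0, None)            # a shock cannot be subsonic
    return P0 * 2.0 * GAMMA * (mach**2 - 1.0) / (GAMMA + 1.0)
```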

  10. Measurement of liquid film flow on nuclear rod bundle in micro-scale by using very high speed camera system

    NASA Astrophysics Data System (ADS)

    Pham, Son; Kawara, Zensaku; Yokomine, Takehiko; Kunugi, Tomoaki

    2012-11-01

    Playing important roles in mass and heat transfer as well as in the safety of boiling water reactors, the liquid film flow on nuclear fuel rods has been studied by different measurement techniques such as ultrasonic transmission, conductivity probes, etc. The experimental data obtained for this annular two-phase flow, however, are still not sufficient to construct a physical model for critical heat flux analysis, especially at the micro-scale. The remaining problems are mainly caused by the complicated geometry of fuel rod bundles and by the high velocity and very unstable interface behavior of the liquid and gas flows. To overcome these difficulties, a new approach using a very high speed digital camera system has been introduced in this work. The test section, simulating a 3×3 rectangular rod bundle, was made of acrylic to allow full optical observation by the camera. Image data were taken through a Cassegrain optical system to maintain a spatiotemporal resolution of up to 7 μm and 20 μs. The results included not only real-time visual information on the flow patterns, but also quantitative data such as the liquid film thickness, the droplets' size and speed distributions, and the tilt angle of wavy surfaces. These databases could contribute to the development of a new model for the annular two-phase flow. Partly supported by the Global Center of Excellence (G-COE) program (J-051) of MEXT, Japan.

  11. Thermal/structural/optical integrated design for optical window of a high-speed aerial optical camera

    NASA Astrophysics Data System (ADS)

    Zhang, Gaopeng; Yang, Hongtao; Mei, Chao; Shi, Kui; Wu, Dengshan; Qiao, Mingrui

    2015-10-01

    In order to obtain high-quality images from an aerial optical remote sensor, it is important to analyze its thermo-optical performance under high-speed, high-altitude conditions. Especially for a key imaging assembly such as the optical window, temperature variations and temperature gradients can result in defocus and aberrations in the optical system, leading to poor image quality. In order to improve the optical performance of the optical window of a high-speed aerial camera, a thermal/structural/optical integrated design method is developed. First, the flight environment of the optical window is analyzed. Based on the theory of aerodynamics and heat transfer, the convection heat transfer coefficient is calculated. The temperature distribution of the optical window is simulated with finite element analysis software, and the maximum temperature difference between the inside and the outside of the optical window is obtained. The deformation of the optical window under the boundary condition of this maximum temperature difference is then calculated. The optical window surface deformation is fitted with Zernike polynomials as the interface, and the calculated Zernike coefficients are imported into and analyzed with the CODE V optical software. Finally, the transfer function of the optical system under the temperature field is analyzed comparatively. From these results, the optical path difference caused by thermal deformation of the optical window is 149.6 nm, which is within the PV ≤ λ/4 requirement. The simulation result meets the requirements of the optical design very well. The above study can be used as an important reference for other optical window designs.

  12. Raster linearity of video cameras calibrated with precision tester

    NASA Technical Reports Server (NTRS)

    1964-01-01

    The time between transitions in the video output of a camera is measured when registered at reticle marks on the vidicon faceplate. This device permits precision calibration of raster linearity of television camera tubes.

  13. Large Area Divertor Temperature Measurements Using A High-speed Camera With Near-infrared Filters in NSTX

    SciTech Connect

    Lyons, B C; Zweben, S J; Gray, T K; Hosea, J; Kaita, R; Kugel, H W; Maqueda, R J; McLean, A G; Roquemore, A L; Soukhanovskii, V A

    2011-04-05

    Fast cameras already installed on the National Spherical Torus Experiment (NSTX) have been equipped with near-infrared (NIR) filters in order to measure the surface temperature in the lower divertor region. Such a system provides a unique combination of high speed (> 50 kHz) and wide field-of-view (> 50% of the divertor). Benchtop calibrations demonstrated the system's ability to measure thermal emission down to 330 °C. There is also, however, significant plasma light background in NSTX. Without improvements in background reduction, the current system is incapable of measuring signals below the background-equivalent temperature (600-700 °C). Thermal signatures have been detected in cases of extreme divertor heating. It is observed that the divertor can reach temperatures around 800 °C when high harmonic fast wave (HHFW) heating is used. These temperature profiles were fit using a simple heat diffusion code, providing a measurement of the heat flux to the divertor. Comparisons to other infrared thermography systems on NSTX are made.

  14. A compact single-camera system for high-speed, simultaneous 3-D velocity and temperature measurements.

    SciTech Connect

    Lu, Louise; Sick, Volker; Frank, Jonathan H.

    2013-09-01

    The University of Michigan and Sandia National Laboratories collaborated on the initial development of a compact single-camera approach for simultaneously measuring 3-D gas-phase velocity and temperature fields at high frame rates. A compact diagnostic tool is desired to enable investigations of flows with limited optical access, such as near-wall flows in an internal combustion engine. These in-cylinder flows play a crucial role in improving engine performance. Thermographic phosphors were proposed as flow and temperature tracers to extend the capabilities of a novel, compact 3D velocimetry diagnostic to include high-speed thermometry. Ratiometric measurements were performed using two spectral bands of laser-induced phosphorescence emission from BaMg2Al10O17:Eu (BAM) phosphors in a heated air flow to determine the optimal optical configuration for accurate temperature measurements. The originally planned multi-year research project ended prematurely after the first year due to the Sandia-sponsored student leaving the research group at the University of Michigan.

  15. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumping using a low-cost high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with a HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed a perfect correlation agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the differences (shared variance) obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide similarly valid and reliable measurements of flight time and vertical jump height as more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athlete's vertical jumps.
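
    The conversion from flight time to jump height used by methods of this kind is the standard ballistic relation h = g·t²/8 (assuming equal take-off and landing heights); a minimal worked example:

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(flight_time_s):
    """Vertical jump height implied by the flight time, assuming take-off and
    landing occur at the same height: h = g * t^2 / 8.
    """
    return G * flight_time_s**2 / 8.0

# A flight time of 0.50 s (e.g. measured frame-by-frame at 240 fps) corresponds
# to a jump of about 0.31 m.
print(round(jump_height_from_flight_time(0.50), 3))
```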

  16. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumping using a low-cost high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with a HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed a perfect correlation agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the differences (shared variance) obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide similarly valid and reliable measurements of flight time and vertical jump height as more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athlete's vertical jumps. PMID:23689339

  17. Not So Fast: Swimming Behavior of Sailfish during Predator-Prey Interactions using High-Speed Video and Accelerometry.

    PubMed

    Marras, Stefano; Noda, Takuji; Steffensen, John F; Svendsen, Morten B S; Krause, Jens; Wilson, Alexander D M; Kurvers, Ralf H J M; Herbert-Read, James; Boswell, Kevin M; Domenici, Paolo

    2015-10-01

    Billfishes are considered among the fastest swimmers in the oceans. Despite early estimates of extremely high speeds, more recent work showed that these predators (e.g., blue marlin) spend most of their time swimming slowly, rarely exceeding 2 m s⁻¹. Predator-prey interactions provide a context within which one may expect maximal speeds both by predators and prey. Beyond speed, however, an important component determining the outcome of predator-prey encounters is unsteady swimming (i.e., turning and accelerating). Although large predators are faster than their small prey, the latter show higher performance in unsteady swimming. To contrast the evading behaviors of their highly maneuverable prey, sailfish and other large aquatic predators possess morphological adaptations, such as elongated bills, which can be moved more rapidly than the whole body itself, facilitating capture of the prey. Therefore, it is an open question whether such supposedly very fast swimmers do use high-speed bursts when feeding on evasive prey, in addition to using their bill for slashing prey. Here, we measured the swimming behavior of sailfish by using high-frequency accelerometry and high-speed video observations during predator-prey interactions. These measurements allowed analyses of tail beat frequencies to estimate swimming speeds. Our results suggest that sailfish burst at speeds of about 7 m s⁻¹ and do not exceed swimming speeds of 10 m s⁻¹ during predator-prey interactions. These speeds are much lower than previous estimates. In addition, the oscillations of the bill during swimming with, and without, extension of the dorsal fin (i.e., the sail) were measured. We suggest that extension of the dorsal fin may allow sailfish to improve the control of the bill and minimize its yaw, hence preventing disturbance of the prey. Therefore, sailfish, like other large predators, may rely mainly on accuracy of movement and the use of the extensions of their bodies, rather than resorting

  18. Initial laboratory evaluation of color video cameras: Phase 2

    SciTech Connect

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  19. High speed imaging technology: yesterday, today, and tomorrow

    NASA Astrophysics Data System (ADS)

    Pendley, Gil J.

    2003-07-01

    The purpose of this discussion is to familiarize readers with an overview of high-speed imaging technology as a means of analyzing objects whose motion is too fast for the eye to see or for conventional photography or video to capture. This information is intended to provide a brief historical narrative covering the inception of high-speed imaging in the USA and the acceptance of digital video technology to augment or replace high-speed motion picture cameras. It is not intended as a definitive work on the subject. For those interested in greater detail, such as application techniques, formulae, very high-speed and ultra-high-speed technology, etc., I recommend the latest text on the subject: High Speed Photography and Photonics, first published in 1997 by Focal Press in the UK and copyrighted by the Association for High Speed Photography in the United Kingdom.

  20. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    PubMed Central

    Leitritz, Martin Alexander; Ziemssen, Focke; Bartz-Schmidt, Karl Ulrich; Voykov, Bogomil

    2014-01-01

    Purpose To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods A high-speed camera taking 300 frames per second observed movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were −0.32 mm (range −0.69 to 0.024) and 0.175 mm (range −0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurements of iris-claw intraocular lenses and angle-supported intraocular lenses movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility during kinetic stress during eye movements. Long-term assessment by high-speed analysis with higher case numbers has to clarify the relationship between progressing motility and endothelial cell damage. PMID:25071365

  1. Are Video Cameras the Key to School Safety?

    ERIC Educational Resources Information Center

    Maranzano, Chuck

    1998-01-01

    Describes one high school's use of video cameras as a preventive tool in stemming theft and violent episodes within schools. The top 10 design tips for preventing crime on campus are highlighted. (GR)

  2. Fused Six-Camera Video of STS-134 Launch

    NASA Video Gallery

    Imaging experts funded by the Space Shuttle Program and located at NASA's Ames Research Center prepared this video by merging nearly 20,000 photographs taken by a set of six cameras capturing 250 i...

  3. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  4. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  5. Camcorder 101: Buying and Using Video Cameras.

    ERIC Educational Resources Information Center

    Catron, Louis E.

    1991-01-01

    Lists nine practical applications of camcorders to theater companies and programs. Discusses the purchase of video gear, camcorder features, accessories, the use of the camcorder in the classroom, theater management, student uses, and video production. (PRA)

  6. Analysis of unstructured video based on camera motion

    NASA Astrophysics Data System (ADS)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done in management of "structured" video such as movies, sports, and television programs that have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video and in particular video shot by a typical unprofessional user (i.e., home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between the camera behavior and the subjective importance of the information in each segment and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames, keyframes, for the scenes are determined and aggregated to summarize the video sequence.

  7. Implicit Memory in Monkeys: Development of a Delay Eyeblink Conditioning System with Parallel Electromyographic and High-Speed Video Measurements

    PubMed Central

    Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka

    2015-01-01

    Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) incidence of more than 60% of trials during the 5-day acquisition phase and an extinguished CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system incorporating two different measuring principles enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates. PMID:26068663

  8. Implicit Memory in Monkeys: Development of a Delay Eyeblink Conditioning System with Parallel Electromyographic and High-Speed Video Measurements.

    PubMed

    Kishimoto, Yasushi; Yamamoto, Shigeyuki; Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka

    2015-01-01

    Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) incidence of more than 60% of trials during the 5-day acquisition phase and an extinguished CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system incorporating two different measuring principles enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates.

  10. Synchronised electrical monitoring and high speed video of bubble growth associated with individual discharges during plasma electrolytic oxidation

    NASA Astrophysics Data System (ADS)

    Troughton, S. C.; Nominé, A.; Nominé, A. V.; Henrion, G.; Clyne, T. W.

    2015-12-01

    Synchronised electrical current and high speed video information are presented from individual discharges on Al substrates during PEO processing. Exposure time was 8 μs and linear spatial resolution 9 μm. Image sequences were captured for periods of 2 s, during which the sample surface was illuminated with short duration flashes (revealing bubbles formed where the discharge reached the surface of the coating). Correlations were thus established between discharge current, light emission from the discharge channel and (externally-illuminated) dimensions of the bubble as it expanded and contracted. Bubbles reached radii of 500 μm, within periods of 100 μs, with peak growth velocity about 10 m/s. It is deduced that bubble growth occurs as a consequence of the progressive volatilisation of water (electrolyte), without substantial increases in either pressure or temperature within the bubble. Current continues to flow through the discharge as the bubble expands, and this growth (and the related increase in electrical resistance) is thought to be responsible for the current being cut off (soon after the point of maximum radius). A semi-quantitative audit is presented of the transformations between different forms of energy that take place during the lifetime of a discharge.

  11. Instant Video Revisiting: The Video Camera as a "Tool of the Mind" for Young Children.

    ERIC Educational Resources Information Center

    Forman, George

    1999-01-01

    Once used only to record special events in the classroom, video cameras are now small enough and affordable enough to be used to document everyday events. Video cameras, with foldout screens, allow children to watch their activities immediately after they happen and to discuss them with a teacher. This article coins the term instant video…

  12. BEHAVIORAL INTERACTIONS OF THE BLACK IMPORTED FIRE ANT (SOLENOPSIS RICHTERI FOREL) AND ITS PARASITOID FLY (PSEUDACTEON CURVATUS BORGMEIER) AS REVEALED BY HIGH-SPEED VIDEO.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    High-speed video recordings were used to study the interactions between the phorid fly (Pseudacteon curvatus), and the black imported fire ant (Solenopsis richteri) in the field. Phorid flies are extremely fast agile fliers that can hover and fly in all directions. Wingbeat frequency recorded with...

  13. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  14. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  15. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
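
    As a rough illustration of the zero-point step implied above, the sketch below (Python, with invented reference-star values) shows how a photometric zero point and its scatter might be derived once linearity-corrected fluxes and synthetic catalog magnitudes are in hand; it is not the MEO's actual pipeline.

```python
import numpy as np

# Hypothetical reference-star measurements from one calibrated video frame.
instrumental_flux = np.array([15000.0, 8200.0, 3900.0, 1800.0, 950.0])  # linearity-corrected counts
catalog_mag = np.array([6.10, 6.74, 7.55, 8.38, 9.09])                  # synthetic magnitudes in the camera bandpass

# For each star, zero point = catalog magnitude + 2.5 log10(instrumental flux).
zp_per_star = catalog_mag + 2.5 * np.log10(instrumental_flux)
zero_point = zp_per_star.mean()
zp_scatter = zp_per_star.std(ddof=1)
print(f"zero point = {zero_point:.2f} +/- {zp_scatter:.2f} mag")

# A meteor measured in the same frame with the same exposure then gets:
meteor_flux = 4200.0
meteor_mag = zero_point - 2.5 * np.log10(meteor_flux)
print(f"meteor magnitude ~ {meteor_mag:.2f}")
```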

  16. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically up to about 1 million frames/s at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve these performances; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain moving parts such as those found, for instance, in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such kinds of cameras in full-field deformation measurement and identify the best operative conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique here. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.

  17. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    NASA Astrophysics Data System (ADS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data are received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to a personal computer.

  18. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    PubMed

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique of calibrating camera motions in basketball videos. Our method particularly transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.
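
    For readers unfamiliar with the calibration step, the following minimal sketch (Python/OpenCV, with made-up point correspondences) illustrates how image positions can be mapped to standard-court coordinates through a homography; the panorama-based reconstruction described in the paper is not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical pixel positions of four known court landmarks in one frame.
image_pts = np.array([[120, 400], [850, 390], [900, 680], [80, 700]], dtype=np.float32)

# Corresponding positions on a standard court, in metres (a FIBA court is 28 x 15 m).
court_pts = np.array([[0, 0], [28, 0], [28, 15], [0, 15]], dtype=np.float32)

# Homography from image plane to court plane.
H, _ = cv2.findHomography(image_pts, court_pts, method=0)

# Map a tracked player position (pixel coordinates) to court coordinates.
player_px = np.array([[[500.0, 550.0]]], dtype=np.float32)
player_court = cv2.perspectiveTransform(player_px, H)
print(player_court)  # approximate (x, y) position on the court, in metres
```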

  20. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.

  1. Synchronizing Light Pulses With Video Camera

    NASA Technical Reports Server (NTRS)

    Kalshoven, James E., Jr.; Tierney, Michael; Dabney, Philip

    1993-01-01

    Interface circuit triggers laser or other external source of light to flash in proper frame and field (at proper time) for video recording and playback in "pause" mode. Also increases speed of electronic shutter (if any) during affected frame to reduce visibility of background illumination relative to that of laser illumination.

  2. Optical System For An Electronic Still Video Camera

    NASA Astrophysics Data System (ADS)

    Yokota, H.; Kato, M.; Shiraishi, A.

    1987-06-01

    As a core of the Canon still video system, we have developed an electronic still video (SV) camera, the RC-701, which records image information on a standard 2 inch video floppy disk. The RC-701 is composed of a 2/3 inch CCD, a viewfinder, signal processing circuitry, a disk drive unit and system control circuitry which controls the functions of all units. This report presents the analyses for designing the optimum viewfinder optical system of the RC-701, by solving various restricting factors arising from the structure of the SV camera, and introduces the optical components used in the RC-701.

  3. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) and a growing number of technical problems and solutions are discussed. The solutions range from space shuttle docking, planetary entry, decent and landing, surveillance, autonomous and manned ground vehicle navigation and 3D imaging through particle obscurants.

  4. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  5. Lights, Camera, Action! Using Video Recordings to Evaluate Teachers

    ERIC Educational Resources Information Center

    Petrilli, Michael J.

    2011-01-01

    Teachers and their unions do not want test scores to count for everything; classroom observations are key, too. But planning a couple of visits from the principal is hardly sufficient. These visits may "change the teacher's behavior"; furthermore, principals may not be the best judges of effective teaching. So why not put video cameras in…

  6. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  7. Relationship between structures of sprite streamers and inhomogeneity of preceding halos captured by high-speed camera during a combined aircraft and ground-based campaign

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Sato, M.; Kudo, T.; Shima, Y.; Kobayashi, N.; Inoue, T.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Yair, Y.; Lyons, W. A.; Cummer, S. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The relationship between diffuse glows such as elves and sprite halos and the subsequent discrete structure of sprite streamers is considered to be one of the keys to solving the mechanism causing the large variation of sprite structures. However, it is not easy to image both the diffuse and discrete structures simultaneously at high frame rate, since this requires high sensitivity, high spatial resolution and high signal-to-noise ratio. To capture the real spatial structure of TLEs without the influence of atmospheric absorption, spacecraft would be the best solution. However, since imaging observation from space is mostly made for TLEs that appear near the horizon, the range from spacecraft to TLEs becomes large, such as a few thousand km, resulting in low spatial resolution. Aircraft can approach a thunderstorm to within a few hundred km or less and can carry heavy high-speed cameras with large data memories. In the period of June 27 - July 10, 2011, a combined aircraft and ground-based campaign, in support of the NHK Cosmic Shore project, was carried out with two jet airplanes under collaboration between NHK (Japan Broadcasting Corporation) and universities. On 8 of 16 standby nights, the jets took off from the airport near Denver, Colorado, and an airborne high-speed camera captured over 40 TLE events at a frame rate of 8300 frames/sec. Here we introduce the time development of sprite streamers and both the large-scale and fine structures of preceding halos showing inhomogeneity, suggesting a mechanism that causes the large variation of sprite types, such as crown-like sprites.

  8. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An External Tank ET Camera System featuring a Sony XC-999 model camera provided never before seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40% field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite orbiter side, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  9. Fast roadway detection using car cabin video camera

    NASA Astrophysics Data System (ADS)

    Krokhina, Daria; Blinov, Veniamin; Gladilin, Sergey; Tarhanov, Ivan; Postnikov, Vassili

    2015-12-01

    We describe a fast method for road detection in images from a vehicle cabin camera. A straight section of roadway is detected using the Fast Hough Transform and dynamic programming. We assume that the location of the horizon line in the image and the road pattern are known. The developed method is fast enough to detect the roadway on each frame of the video stream in real time and may be further accelerated by the use of tracking.

  10. Using a CCD video camera for Galilean satellite eclipse timings

    NASA Astrophysics Data System (ADS)

    Bulder, Henk J. J.

    1993-02-01

    Looking for methods more objective than visual timings of the eclipses of Jupiter's Galilean satellites, the MXII CCD video camera was tested for this purpose. Special computer software was developed to track the objects of interest automatically. The first results show that this method is very promising; and, due to its simplicity, is well within the capabilities of many amateur astronomers. Comparison with other objective methods, such as photoelectric photometry, will be necessary to prove the value of this approach.

  11. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
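
    A minimal sketch of the line-extraction stage only is given below (Python/OpenCV); the brightness threshold and Hough parameters are illustrative, and the colour/texture tests and combinatorial model fitting described in the abstract are omitted.

```python
import cv2
import numpy as np

def detect_court_line_segments(frame_bgr):
    """Return candidate court-line segments from a single video frame.

    Rough sketch: court lines are assumed to be bright (white) pixels;
    real systems also apply colour and local-texture tests as in the paper.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Keep only bright pixels as court-line candidates.
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    # A probabilistic Hough transform extracts straight line segments.
    segments = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=60, maxLineGap=10)
    return [] if segments is None else segments.reshape(-1, 4)

frame = cv2.imread("frame.png")  # any court-sports frame (hypothetical file name)
if frame is not None:
    for x1, y1, x2, y2 in detect_court_line_segments(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("lines.png", frame)
```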

  12. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.

  13. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

    Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BV RI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.

  14. A multiscale product approach for an automatic classification of voice disorders from endoscopic high-speed videos.

    PubMed

    Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Joerg

    2013-01-01

    Direct observation of vocal fold vibration is indispensable for a clinical diagnosis of voice disorders. Among current imaging techniques, high-speed videoendoscopy constitutes a state-of-the-art method capturing several thousand frames per second of the vocal folds during phonation. Recently, a method for extracting descriptive features from phonovibrograms, a two-dimensional image containing the spatio-temporal pattern of vocal fold dynamics, was presented. The derived features are closely related to a clinically established protocol for functional assessment of pathologic voices. The discriminative power of these features for different pathologic findings and configurations has not been assessed yet. In the current study, a collective of 220 subjects is considered for two-class and multi-class problems of healthy and pathologic findings. The performance of the proposed feature set is compared to conventional feature reduction routines and is found to clearly outperform them. As such, the proposed procedure shows great potential for the diagnosis of vocal fold disorders.

  15. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    PubMed

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.

  16. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    ERIC Educational Resources Information Center

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  17. Wireless capsule endoscopy video reduction based on camera motion estimation.

    PubMed

    Liu, Hong; Pan, Ning; Lu, Heng; Song, Enmin; Wang, Qian; Hung, Chih-Cheng

    2013-04-01

    Wireless capsule endoscopy (WCE) is a novel technology aimed at investigating diseases and abnormalities in the small intestine. The major drawback of WCE examination is that it takes a long time to examine the whole WCE video. In this paper, we present a new reduction scheme for WCE video to reduce the examination time. To achieve this task, a WCE video motion model is proposed. Under this motion model, the WCE imaging motion is estimated in two stages (the coarse level and the fine level). At the coarse level, the WCE camera motion is estimated with a combination of the Bee Algorithm and Mutual Information. At the fine level, the local gastrointestinal tract motion is estimated with SIFT flow. Based on the result of WCE imaging motion estimation, the reduction scheme preserves key images in the WCE video at scene changes. From the experimental results, we notice that the proposed motion model is suitable for motion estimation in successive WCE images. Through comparison with the APRS and FCM-NMF schemes, our scheme can produce an acceptable reduction sequence for browsing and examination. PMID:22868484
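
    The coarse-level search mentioned above scores candidate camera motions by the mutual information between frames. The sketch below (Python/NumPy) shows one simple histogram-based estimate of that score; it is an illustration of the measure only, not the paper's Bee Algorithm search.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Example: MI of an image with itself exceeds MI with unrelated noise.
rng = np.random.default_rng(1)
a = rng.integers(0, 256, (128, 128))
b = rng.integers(0, 256, (128, 128))
print(mutual_information(a, a) > mutual_information(a, b))   # True
```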

  19. Non-mydriatic, wide field, fundus video camera

    NASA Astrophysics Data System (ADS)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide-field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even in bright ambient light. We realized a mobile demonstrator to prove the method and successfully acquired color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass-market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is found in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field 20° in diameter to a rectangular field of 68° by 18°. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  20. Scientists Behind the Camera - Increasing Video Documentation in the Field

    NASA Astrophysics Data System (ADS)

    Thomson, S.; Wolfe, J.

    2013-12-01

    Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflicting demands of the time, space, storage needs of scientists in the field and the demands of shooting high quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, to continue the project, we have produced a 14 page guide for scientists shooting in the field based on lessons learned - it contains key tips and best practice techniques for shooting high quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  1. High Speed Video Data Acquisition System (VDAS) for H. E. P. , including Reference Frame Subtractor, Data Compactor and 16 megabyte FIFO

    SciTech Connect

    Knickerbocker, K.L.; Baumbaugh, A.E.; Ruchti, R.; Baumbaugh, B.W.

    1987-02-01

    A Video-Data-Acquisition-System (VDAS) has been developed to record image data from a scintillating glass fiber-optic target developed for High Energy Physics. VDAS consists of a combination flash ADC, reference frame subtractor, high speed data compactor, an N megabyte First-In-First-Out (FIFO) memory (where N is a multiple of 4), and a single board computer as a control processor. System data rates are in excess of 30 megabytes/second. The reference frame subtractor, in conjunction with the data compactor, records only the differences from a standard frame. This greatly reduces the amount of data needed to record an image. Typical image sizes are reduced by as much as a factor of 20. With the exception of the ECL ADC board, the system uses standard TTL components to minimize power consumption and cost. VDAS operation as well as enhancements to the original system are discussed.

  2. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
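
    As a hedged sketch of the geometry involved, the Python snippet below back-projects a pixel through a calibrated pinhole camera and intersects the ray with a known working plane; the intrinsics, pose and plane are invented for the example, and this is not the system's published algorithm.

```python
import numpy as np

def pixel_to_plane_point(u, v, K, R, t, plane_z=0.0):
    """Intersect the camera ray through pixel (u, v) with the plane z = plane_z.

    K    : 3x3 intrinsic matrix
    R, t : rotation (3x3) and translation (3,) mapping world -> camera
    Returns the 3-D world coordinates of the target on that plane.
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera frame
    d_world = R.T @ d_cam                              # ray direction, world frame
    origin = -R.T @ t                                  # camera centre in world frame
    s = (plane_z - origin[2]) / d_world[2]
    return origin + s * d_world

# Toy example: camera 2 m above the z = 0 plane, looking straight down.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])   # camera z-axis points down
t = -R @ np.array([0.0, 0.0, 2.0])
print(pixel_to_plane_point(400, 300, K, R, t))             # world (x, y, 0) of the target
```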

  3. Deep-Sea Video Cameras Without Pressure Housings

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2004-01-01

    Underwater video cameras of a proposed type (and, optionally, their light sources) would not be housed in pressure vessels. Conventional underwater cameras and their light sources are housed in pods that keep the contents dry and maintain interior pressures of about 1 atmosphere (0.1 MPa). Pods strong enough to withstand the pressures at great ocean depths are bulky, heavy, and expensive. Elimination of the pods would make it possible to build camera/light-source units that would be significantly smaller, lighter, and less expensive. The depth ratings of the proposed camera/light-source units would be essentially unlimited because the strengths of their housings would no longer be an issue. A camera according to the proposal would contain an active-pixel image sensor and readout circuits, all in the form of a single silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. As long as none of the circuitry and none of the electrical leads were exposed to seawater, which is electrically conductive, silicon integrated-circuit chips could withstand the hydrostatic pressure of even the deepest ocean. The pressure would change the semiconductor band gap by only a slight amount, not enough to degrade imaging performance significantly. Electrical contact with seawater would be prevented by potting the integrated-circuit chip in a transparent plastic case. The electrical leads for supplying power to the chip and extracting the video signal would also be potted, though not necessarily in the same transparent plastic. The hydrostatic pressure would tend to compress the plastic case and the chip equally on all sides; there would be no need for great strength because there would be no need to hold back high pressure on one side against low pressure on the other side. A light source suitable for use with the camera could consist of light-emitting diodes (LEDs). Like integrated-circuit chips, LEDs can withstand very large hydrostatic pressures. If

  4. High speed photography, videography, and photonics IV; Proceedings of the Meeting, San Diego, CA, Aug. 19, 20, 1986

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor)

    1986-01-01

    Various papers on high-speed photography, videography, and photonics are presented. The general topics addressed include: photooptical and video instrumentation, streak camera data acquisition systems, photooptical instrumentation in wind tunnels, applications of holography and interferometry in wind tunnel research programs, and data analysis for photooptical and video instrumentation.

  5. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  6. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  7. Effects of Phosphor Persistence on High-Speed Imaging of Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Qin, J.; Pasko, V. P.; Celestin, S. J.; Cummer, S. A.; McHarg, M. G.; Stenbaek-Nielsen, H. C.

    2014-12-01

    High-speed intensified cameras are commonly used to observe and study the transient luminous events known as sprite halos and sprite streamers occurring in the Earth's upper atmosphere in association with thunderstorm activity. In such observations the phosphor persistence in the image intensifier, depending on its characteristic decay time, might lead to a significant distortion of the optical signals recorded by those cameras. In the present work, we analyze observational data obtained using different camera systems to discuss the effects of phosphor persistence on high-speed video observations of sprites, and introduce a deconvolution technique to effectively reduce such effects. The discussed technique could also be used to enhance the high-speed images of other transient optical phenomena in the case when the phosphor persistence has a characteristic decay time that is comparable to the temporal resolution of the cameras required to resolve the phenomena.
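
    A minimal sketch of one way such a correction could look is given below, assuming the persistence is well modelled by a single exponential decay with a known time constant; the model and function are illustrative, not the authors' published deconvolution.

```python
import numpy as np

def remove_phosphor_persistence(frames, frame_dt, tau):
    """Remove single-exponential phosphor persistence from a frame stack.

    Assumes the recorded intensity obeys y[t] = x[t] + a * y[t-1], where
    a = exp(-frame_dt / tau) is the phosphor decay between consecutive
    frames (an illustrative model, not the authors' algorithm).

    frames   : ndarray of shape (n_frames, height, width)
    frame_dt : time between frames (same units as tau)
    tau      : phosphor decay time constant
    """
    a = np.exp(-frame_dt / tau)
    corrected = np.empty_like(frames, dtype=float)
    corrected[0] = frames[0]
    # Each frame is the true signal plus the decayed previous frame,
    # so subtracting that decayed tail recovers the true signal.
    corrected[1:] = frames[1:] - a * frames[:-1]
    return np.clip(corrected, 0, None)

# Example: 10 kfps recording (0.1 ms per frame) with a 1 ms phosphor decay.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((100, 64, 64))
    a = np.exp(-0.1 / 1.0)
    observed = truth.copy()
    for t in range(1, 100):
        observed[t] += a * observed[t - 1]
    recovered = remove_phosphor_persistence(observed, 0.1, 1.0)
    print(np.allclose(recovered, truth))   # True
```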

  8. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    NASA Technical Reports Server (NTRS)

    Gradl, Paul

    2016-01-01

    Paired images were collected using a projected pattern instead of the standard painted speckle pattern on her abdomen. High-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh frequency of the projector. To ensure that the kick and movement data were real, a background test was conducted with no baby movement (to correct for breathing and body motion).

  9. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  10. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  11. [Research Award providing funds for a tracking video camera

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3m square arena as they navigate singly in-doors guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is further than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. It raises many questions for future research that we are now pursuing.

  12. Eruptions on the fast track: application of Particle Tracking Velocimetry algorithms to visual and thermal high-speed videos of Strombolian explosions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Monica, Moroni; Jacopo, Taddeucci; Luca, Shindler; Piergiorgio, Scarlato

    2013-04-01

    Strombolian eruptions are characterized by mild, frequent explosions that eject gas and ash- to bomb-sized pyroclasts into the atmosphere. Studying these explosions is crucial, both for direct hazard assessment and for understanding eruption dynamics. Conventional thermal and optical imaging already allows several eruptive processes to be characterized, but the quantification of key parameters linked to magma properties and conduit processes requires acquiring images at higher frequency. For example, high-speed imaging has already demonstrated how the size and the pressure of the gas bubble are linked to the decay of the ejection velocity of the particles, and how the origin of the bombs, either fresh or recycled material, can be linked to their thermal evolution. However, manual processing of the images is time consuming. Consequently, it allows neither routine monitoring nor averaged statistics, since only a few relevant particles - usually the fastest - of a few explosions can be taken into account. In order to understand the dynamics of Strombolian eruptions, and particularly their cyclic behavior, the quantification of the total mass, heat and energy discharge is a crucial point. In this study, we use a Particle Tracking Velocimetry (PTV) algorithm jointly with traditional image processing to automatically extract the above parameters from visible and thermal high-speed videos of individual Strombolian explosions. PTV is an analysis technique where each single particle is detected and tracked during a series of images. Velocity, acceleration, and temperature can then be deduced and time averaged to get an extensive overview of each explosion. The suitability of PTV and its potential limitations in terms of detection and representativeness are investigated in various explosions of Stromboli (Italy), Yasur (Vanuatu) and Fuego (Guatemala) volcanoes. In most events, multiple sub-explosions are visible. In each sub-explosion, trends are noticeable: (1) the ejection
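
    As a rough sketch of the two basic PTV steps (particle detection and frame-to-frame linking), the Python/SciPy snippet below detects bright blobs and links them by nearest neighbour; the threshold and the greedy matching are simplifications, not the algorithm used in the study.

```python
import numpy as np
from scipy import ndimage

def detect_particles(frame, threshold):
    """Return centroids (row, col) of connected bright regions above threshold."""
    labels, n = ndimage.label(frame > threshold)
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def link_nearest(prev_pts, next_pts, max_disp):
    """Greedy nearest-neighbour linking of particle positions between two frames."""
    links = []
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            links.append((i, j))
    return links

# Tiny synthetic example: two bright particles moving a few pixels between frames.
frame_a = np.zeros((50, 50)); frame_a[10, 10] = frame_a[30, 40] = 255.0
frame_b = np.zeros((50, 50)); frame_b[12, 11] = frame_b[31, 42] = 255.0
pa, pb = detect_particles(frame_a, 100), detect_particles(frame_b, 100)
print(link_nearest(pa, pb, max_disp=5))   # [(0, 0), (1, 1)]
# Per-particle displacement (pixels/frame) follows as pb[j] - pa[i] for each link.
```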

  13. High speed imaging television system

    DOEpatents

    Wilkinson, William O.; Rabenhorst, David W.

    1984-01-01

    A television system for observing an event which provides a composite video output comprising the serially interlaced images from a plurality of individual cameras, such that the time resolution of the composite output of the system is greater than the time resolution of any of the individual cameras.
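
    A toy model of the interleaving idea, under the assumption of N identical cameras triggered with staggered phases, might look like the following; it illustrates why the composite frame rate scales with the number of cameras, not the patented circuitry.

```python
import numpy as np

def interleave_cameras(camera_streams):
    """Serially interleave frames from N phase-staggered cameras.

    camera_streams: list of N arrays, each of shape (n_frames, h, w),
    where camera k is assumed to be triggered 1/N of a frame period
    after camera k-1. The composite stream then has N times the frame
    rate of any single camera (an illustrative model of the idea).
    """
    n_frames = camera_streams[0].shape[0]
    composite = []
    for t in range(n_frames):
        for stream in camera_streams:      # round-robin: camera 0, 1, ..., N-1
            composite.append(stream[t])
    return np.stack(composite)

# Three 30 fps cameras staggered by 1/90 s yield a 90 fps composite stream.
streams = [np.zeros((4, 8, 8)) + k for k in range(3)]
print(interleave_cameras(streams).shape)   # (12, 8, 8)
```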

  14. High-speed camera imaging for laser ablation process: for further reliable elemental analysis using inductively coupled plasma-mass spectrometry.

    PubMed

    Hirata, Takafumi; Miyazaki, Zen

    2007-01-01

    Production of laser ablation-induced sample aerosols has been visualized using a high-speed camera device coupled with a shadowgraphy technique. The time resolution of the method is 1 microsecond, and production of the sample grains was successfully defined by the imaging system. An argon-fluoride excimer laser operated at 193-nm wavelength was used to ablate the solid samples. When the laser was shot onto the sample (Si wafer), a dome-shaped dark area appeared at the ablation pit. This dark area reflects changes in refractive index of ambient He probably due to emission of electrons or ions from the ablation pit. The dark area expanded hemispherically from the ablation pit with a velocity close to the speed of sound (approximately 1000 m/s for He at 300 K). This was followed by the excitation or ionization of the vaporized sample, known as the plasma plume. Immediately after the formation of the plasma plume, sample aerosols were produced and released from the ablation pit along the propagation of the laser-induced shockwave. Production of the sample aerosols was significantly delayed (approximately 4 microseconds) from the onset of the laser shot. The typical speed of particles released from the ablation pit was 100-200 m/s, which was significantly slower than the reported velocity of the plasma plume expansion (10^4 m/s). Since the initial measured speed of the sample particles was rather close to the speed of sound, the sample aerosols could be rapidly decelerated to the terminal velocity by a gas drag force with ambient He. The release angle of the sample aerosols from the ablation pit was very shallow (<10 degrees), which may be due to the downforce produced by the thermal expansion of the ambient gas above the ablation pit. The shallower release angle and the contribution of the downforce probably result in the redeposition of sample aerosols or vapor around the ablation pit. In fact, the degree of sample redeposition around the ablation pit can be effectively minimized

  15. Acceptance/operational test procedure 241-AN-107 Video Camera System

    SciTech Connect

    Pedersen, L.T.

    1994-11-18

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

  16. High speed data compactor

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1988-06-04

    A method and apparatus for suppressing, from transmission, non-informational data words from a source of data words such as a video camera. Data words having values greater than a predetermined threshold are transmitted, whereas data words having values less than the threshold are not transmitted; instead, their occurrences are counted. Before being transmitted, the valid data words and the counts of suppressed data words are appended with flag digits which a receiving system decodes. The original data stream is fully reconstructable from the stream of valid data words and counts of invalid data words.
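
    The threshold-plus-count scheme reads as a run-length suppression of below-threshold words. A software analogue is sketched below, with the flag convention invented for the example; it is not the patented hardware logic.

```python
from typing import List, Tuple

VALID = "V"   # flag for a transmitted (above-threshold) data word
SKIP = "S"    # flag for a count of suppressed (below-threshold) words

def compact(words: List[int], threshold: int) -> List[Tuple[str, int]]:
    """Suppress below-threshold words, replacing each run of them with a count."""
    out, run = [], 0
    for w in words:
        if w >= threshold:
            if run:                       # flush any pending run of suppressed words
                out.append((SKIP, run))
                run = 0
            out.append((VALID, w))
        else:
            run += 1
    if run:
        out.append((SKIP, run))
    return out

def expand(stream: List[Tuple[str, int]], fill: int = 0) -> List[int]:
    """Reconstruct the original word positions (suppressed words become `fill`)."""
    out: List[int] = []
    for flag, value in stream:
        out.extend([fill] * value if flag == SKIP else [value])
    return out

print(compact([0, 0, 9, 0, 7, 0, 0, 0], threshold=5))
# [('S', 2), ('V', 9), ('S', 1), ('V', 7), ('S', 3)]
```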

  17. High speed handpieces.

    PubMed

    Bhandary, Nayan; Desai, Asavari; Shetty, Y Bharath

    2014-02-01

    High-speed instruments are versatile instruments used by clinicians of all specialties of dentistry. It is important for clinicians to understand the types of high-speed handpieces available and their mechanisms of working. The Centers for Disease Control and Prevention have issued guidelines time and again for the disinfection and sterilization of high-speed handpieces. This article presents the recent developments in the design of high-speed handpieces. With a view to preventing hospital-associated infections, significant importance has been given to the disinfection, sterilization and maintenance of high-speed handpieces. How to cite the article: Bhandary N, Desai A, Shetty YB. High speed handpieces. J Int Oral Health 2014;6(1):130-2.

  18. A Raman Spectroscopy and High-Speed Video Experimental Study: The Effect of Pressure on the Solid-Liquid Transformation Kinetics of N-octane

    NASA Astrophysics Data System (ADS)

    Liu, C.; Wang, D.

    2015-12-01

    Phase transitions of minerals and rocks in the Earth's interior, especially at elevated pressures and temperatures, can markedly change crystal structures and state parameters, and are therefore very important for the physical and chemical properties of these materials. Transformations between solid and liquid are relatively common in nature, such as the melting of ice and the crystallization of minerals or water. The kinetics of these transformations can provide valuable information on the reaction rate and on the reaction mechanism involving nucleation and growth. An in-situ transformation kinetic study of n-octane, which served as an example of this type of phase transition, has been carried out using a hydrothermal diamond anvil cell (HDAC) and a high-speed video technique; the overall purpose of this study is to develop a comprehensive understanding of the reaction mechanism and of the influence of pressure on the different transformation rates. At ambient temperature, the liquid-solid transformation of n-octane first took place with increasing pressure, and the solid phase then gradually transformed back into the liquid phase when the sample was heated to a certain temperature. Upon cooling of the system, the liquid-solid transformation occurred again. Quantitative assessments of the transformation rates as a function of pressure and temperature showed a negative pressure dependence of the solid-liquid transformation rate, whereas elevated pressure accelerated the liquid-solid transformation rate. Based on the calculated activation energy values, an interfacial reaction and diffusion dominated the solid-liquid transformation, whereas the liquid-solid transformation was mainly controlled by diffusion. This experimental technique is a powerful and effective tool for the transformation kinetics study of n-octane, and the obtained results are of great significance to the kinetics study
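
    The abstract does not reproduce the rate equations used; for orientation, a commonly used description of isothermal nucleation-and-growth kinetics, together with the Arrhenius form implied by the activation-energy analysis, is sketched below (a generic textbook form, not necessarily the authors' model).

    ```latex
    % Avrami (JMAK) transformed fraction and an Arrhenius rate constant (generic forms)
    x(t) = 1 - \exp\!\left[-(k\,t)^{n}\right], \qquad
    k(T) = A \exp\!\left(-\frac{E_a}{R\,T}\right)
    ```

    Here x(t) is the transformed fraction, n is the Avrami exponent (sensitive to whether growth is interface- or diffusion-controlled), and E_a is the activation energy.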

  19. Operational test procedure 241-AZ-101 waste tank color video camera system

    SciTech Connect

    Robinson, R.S.

    1996-10-30

    The purpose of this procedure is to provide a documented means of verifying that all of the functional components of the 241-AZ- 101 Waste Tank Video Camera System operate properly before and after installation.

  20. Lori Losey - The Woman Behind the Video Camera

    NASA Video Gallery

    The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

  1. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    SciTech Connect

    Kohlman, E.H.

    1995-01-25

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  2. High Speed Photometry for BUSCA

    NASA Astrophysics Data System (ADS)

    Cordes, O.; Reif, K.

    The camera BUSCA (Bonn University Simultaneous CAmera) has been a standard instrument at the 2.2 m telescope of the Calar Alto Observatory (Spain) since 2001. At the moment some modifications of BUSCA are planned and partially realised. One major goal is the replacement of the old thick CCDs in the blue, yellow-green, and near-infrared channels; the newer CCDs have better cosmetics and better sensitivity. The other goal is to replace the old "Heidelberg"-style controller with a newly designed controller focused on high-speed readout and on an advanced windowing mechanism. We present a theoretical analysis of the new controller design and its advantages for high-speed photometry of rapidly pulsating stars. As an example, PG1605+072 was chosen, which had previously been observed with BUSCA in 2001 and 2002.

  3. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
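
    A back-of-the-envelope version of the geometry (assumed here for illustration; the article's full treatment, including latitude and solar-declination corrections, is not reproduced) is that during the time t the shadow takes to climb a height h, the Earth rotates through an angle theta = 2*pi*t/T, and cos(theta) = R/(R + h), so R = h*cos(theta)/(1 - cos(theta)):

    ```python
    # Illustrative sunset-shadow estimate of the Earth's radius (simplified geometry;
    # equatorial observer at equinox assumed; the numbers below are made up).
    import math

    def earth_radius(h_m, t_s, day_s=86164.0):      # 86164 s = sidereal day
        theta = 2.0 * math.pi * t_s / day_s          # rotation angle during the shadow rise
        return h_m * math.cos(theta) / (1.0 - math.cos(theta))

    # A shadow climbing 30 m of wall in about 42 s gives roughly the right answer:
    print(f"R ~ {earth_radius(30.0, 42.0) / 1000.0:.0f} km")   # ~6400 km
    ```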

  4. Correction of spatially varying image and video motion blur using a hybrid camera.

    PubMed

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.
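
    The following is a much-simplified sketch, not the authors' algorithm: it assumes a single spatially invariant blur kernel built from the motion trajectory tracked in the fast, low-resolution stream, and removes it from the high-resolution frame with a Wiener filter (the paper instead refines spatially varying kernels and combines deconvolution with super-resolution).

    ```python
    # Simplified hybrid-camera deblurring sketch: build a PSF from a tracked
    # trajectory, then Wiener-deconvolve the blurred high-resolution frame.
    import numpy as np

    def psf_from_trajectory(points, size=31):
        """Accumulate tracked (dx, dy) offsets from the fast camera into a PSF."""
        psf = np.zeros((size, size))
        c = size // 2
        for dx, dy in points:
            psf[c + int(round(dy)), c + int(round(dx))] += 1.0
        return psf / psf.sum()

    def wiener_deconvolve(blurred, psf, k=1e-2):
        """Frequency-domain Wiener filter with a constant noise-to-signal ratio k."""
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        F = np.conj(H) / (np.abs(H) ** 2 + k) * G
        return np.real(np.fft.ifft2(F))

    # Synthetic usage: a 5-pixel horizontal motion blur applied and then removed
    traj = [(dx, 0) for dx in range(-2, 3)]
    sharp = np.random.rand(128, 128)
    psf = psf_from_trajectory(traj)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
    restored = wiener_deconvolve(blurred, psf)
    ```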

  5. High-speed imaging of explosive eruptions: applications and perspectives

    NASA Astrophysics Data System (ADS)

    Taddeucci, Jacopo; Scarlato, Piergiorgio; Gaudin, Damien; Capponi, Antonio; Alatorre-Ibarguengoitia, Miguel-Angel; Moroni, Monica

    2013-04-01

    Explosive eruptions, being by definition highly dynamic over short time scales, necessarily call for observational systems capable of relatively high sampling rates. "Traditional" tools, such as seismic and acoustic networks, have recently been joined by Doppler radar and electric sensors. Recent developments in high-speed camera systems now allow direct visual information on eruptions to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Here we summarize the methods employed to gather and process high-speed videos of explosive eruptions, and provide an overview of the several applications of this new type of data in understanding different aspects of explosive volcanism. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS: FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480-120x640 pixel resolution; and 3) two acoustic to infrasonic sensors. All instruments are time-synchronized via a data-logging system, a hand- or software-operated trigger, and GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or to be readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded in high-speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafiallajokull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclasts ejection. High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian

  6. Digital video technology and production 101: lights, camera, action.

    PubMed

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion.

  7. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    SciTech Connect

    Kohlman, E.H.

    1994-11-17

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The one camera remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include but are not limited to core sampling, auger activities, crust layer examination, monitoring of equipment installation/removal, and any other activities. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program.

  8. Super deep 3D images from a 3D omnifocus video camera.

    PubMed

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  9. High speed handpieces

    PubMed Central

    Bhandary, Nayan; Desai, Asavari; Shetty, Y Bharath

    2014-01-01

    High speed instruments are versatile instruments used by clinicians of all specialties of dentistry. It is important for clinicians to understand the types of high speed handpieces available and their working mechanisms. The Centers for Disease Control and Prevention have issued guidelines time and again for the disinfection and sterilization of high speed handpieces. This article presents recent developments in the design of high speed handpieces. With a view to preventing hospital-associated infections, significant importance has been given to the disinfection, sterilization & maintenance of high speed handpieces. How to cite the article: Bhandary N, Desai A, Shetty YB. High speed handpieces. J Int Oral Health 2014;6(1):130-2. PMID:24653618

  10. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  11. Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing

    ERIC Educational Resources Information Center

    Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson

    2008-01-01

    Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…

  12. International Congress on High-Speed Photography and Photonics, 20th, Victoria, Canada, Sept. 21-25, 1992, Proceedings

    NASA Astrophysics Data System (ADS)

    Dewey, John M.; Racca, Roberto G.

    1993-03-01

    Papers included in this volume are on the topics of image converter and intensifier cameras; optomechanical high-speed cameras; applications; lasers and light sources for optomechanical high-speed cameras; holography, schlieren, interferometry, and spectroscopy; high-speed videography, CCDs, and sensors; picosecond and femtosecond techniques/image converter tubes; flash radiography; and image and data processing. Papers are presented on subnanosecond time-resolved imaging using an RF phase-sensitive image converter camera, an intelligent electronic control system for high-speed film cameras, microphotography of shocks in crystals, and light-beats generation using phase-modulated laser radiation. Also discussed is shock-wave detection by light deflection techniques, a new tubeless nanosecond streak camera based on optical deflection and direct CCD imaging, soft-X-ray time-resolved spectroscopy, repetitive compact flash X-ray generators for soft radiography, use of snapshot video and real-time image analysis for defect detection, direct observation of shaped-charge jets, and new high-speed cameras from Russia.

  13. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages.

    PubMed

    Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan

    2015-04-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.

  14. Kinematic Measurements of the Vocal-Fold Displacement Waveform in Typical Children and Adult Populations: Quantification of High-Speed Endoscopic Videos

    ERIC Educational Resources Information Center

    Patel, Rita; Donohue, Kevin D.; Unnikrishnan, Harikrishnan; Kryscio, Richard J.

    2015-01-01

    Purpose: This article presents a quantitative method for assessing instantaneous and average lateral vocal-fold motion from high-speed digital imaging, with a focus on developmental changes in vocal-fold kinematics during childhood. Method: Vocal-fold vibrations were analyzed for 28 children (aged 5-11 years) and 28 adults (aged 21-45 years)…

  15. Application of high-speed videography in sports analysis

    NASA Astrophysics Data System (ADS)

    Smith, Sarah L.

    1993-01-01

    The goal of sport biomechanists is to provide information to coaches and athletes about sport skill technique that will assist them in obtaining the highest levels of athletic performance. Within this technique evaluation process, two methodological approaches can be taken to study human movement. One method describes the motion being performed; the second approach focuses on understanding the forces causing the motion. It is with the movement description method that video image recordings offer a means for athletes, coaches, and sport biomechanists to analyze sport performance. Staff members of the Technique Evaluation Program provide video recordings of sport performance to athletes and coaches during training sessions held at the Olympic Training Center in Colorado Springs, Colorado. These video records are taken to provide a means for the qualitative evaluation or the quantitative analysis of sport skills as performed by elite athletes. High-speed video equipment (NAC HVRB-200 and NAC HSV-400 Video Systems) is used to capture various sport movement sequences that will permit coaches, athletes, and sport biomechanists to evaluate and/or analyze sport performance. The PEAK Performance Motion Measurement System allows sport biomechanists to measure selected mechanical variables appropriate to the sport being analyzed. Use of two high-speed cameras allows for three-dimensional analysis of the sport skill or the ability to capture images of an athlete's motion from two different perspectives. The simultaneous collection and synchronization of force data provides for a more comprehensive analysis and understanding of a particular sport skill. This process of combining force data with motion sequences has been done extensively with cycling. The decision to use high-speed videography rather than normal speed video is based upon the same criteria that are used in other settings. The rapidness of the sport movement sequence and the need to see the location of body parts

  16. The Importance of Camera Calibration and Distortion Correction to Obtain Measurements with Video Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Cattaneo, C.; Mainetti, G.; Sala, R.

    2015-11-01

    Video surveillance systems are commonly used as important sources of quantitative information, and from the acquired images it is possible to obtain a large amount of metric information. Yet, several methodological issues must be considered in order to perform accurate measurements using images. The most important one is camera calibration, that is, the estimation of the parameters defining the camera model. One of the most widely used camera calibration methods is Zhang's method, which allows the estimation of the linear parameters of the camera model. This method is widespread because it requires a simple setup and allows cameras to be calibrated with a simple and fast procedure, but it does not consider lens distortions, which must be taken into account with the short-focal-length lenses commonly used in video surveillance systems. In order to perform accurate measurements, the linear camera model and Zhang's method are extended to take nonlinear parameters into account and compensate for the distortion contribution. In this paper we first describe the pinhole camera model, which treats cameras as central projection systems. After a brief introduction to the camera calibration process, and in particular Zhang's method, we describe the different types of lens distortion and the techniques used for distortion compensation. Finally, some numerical examples are shown to demonstrate the importance of distortion compensation for obtaining accurate measurements.
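
    As an illustration of the calibration-plus-undistortion workflow discussed above, the sketch below uses OpenCV's implementation of Zhang's method with the usual radial/tangential distortion model; the chessboard size and file names are placeholders, not taken from the paper.

    ```python
    # Illustrative camera calibration (Zhang's method) and undistortion with OpenCV.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                              # inner chessboard corners (assumed target)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, size = [], [], None
    for fname in glob.glob("calib_*.png"):        # placeholder calibration image set
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            size = gray.shape[::-1]

    # Intrinsic matrix K and distortion coefficients (k1, k2, p1, p2, k3)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)

    # Undistort a surveillance frame before taking metric measurements
    frame = cv2.imread("frame.png")               # placeholder frame
    undistorted = cv2.undistort(frame, K, dist)
    ```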

  17. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells

    NASA Astrophysics Data System (ADS)

    English, Brian P.; Singer, Robert H.

    2015-08-01

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics.

  18. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells

    PubMed Central

    English, Brian P.; Singer, Robert H.

    2016-01-01

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics. PMID:26819489

  19. Observations of in situ deep-sea marine bioluminescence with a high-speed, high-resolution sCMOS camera

    NASA Astrophysics Data System (ADS)

    Phillips, Brennan T.; Gruber, David F.; Vasan, Ganesh; Roman, Christopher N.; Pieribone, Vincent A.; Sparks, John S.

    2016-05-01

    Observing and measuring marine bioluminescence in situ presents unique challenges, characterized by the difficult task of approaching and imaging weakly illuminated bodies in a three-dimensional environment. To address this problem, a scientific complementary-metal-oxide-semiconductor (sCMOS) microscopy camera was outfitted for deep-sea imaging of marine bioluminescence. This system was deployed on multiple platforms (manned submersible, remotely operated vehicle, and towed body) in three oceanic regions (Western Tropical Pacific, Eastern Equatorial Pacific, and Northwestern Atlantic) to depths up to 2500 m. Using light stimulation, bioluminescent responses were recorded at high frame rates and in high resolution, offering unprecedented low-light imagery of deep-sea bioluminescence in situ. The kinematics of light production in several zooplankton groups was observed, and luminescent responses at different depths were quantified as intensity vs. time. These initial results signify a clear advancement in the bioluminescent imaging methods available for observation and experimentation in the deep-sea.

  20. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  1. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  2. Passive millimeter-wave video camera for aviation applications

    NASA Astrophysics Data System (ADS)

    Fornaca, Steven W.; Shoucri, Merit; Yujiri, Larry

    1998-07-01

    Passive Millimeter Wave (PMMW) imaging technology offers significant safety benefits to world aviation. Made possible by recent technological breakthroughs, PMMW imaging sensors provide visual-like images of objects under low visibility conditions (e.g., fog, clouds, snow, sandstorms, and smoke) which blind visual and infrared sensors. TRW has developed an advanced, demonstrator version of a PMMW imaging camera that, when front-mounted on an aircraft, gives images of the forward scene at a rate and quality sufficient to enhance aircrew vision and situational awareness under low visibility conditions. Potential aviation uses for a PMMW camera are numerous and include: (1) Enhanced vision for autonomous take- off, landing, and surface operations in Category III weather on Category I and non-precision runways; (2) Enhanced situational awareness during initial and final approach, including Controlled Flight Into Terrain (CFIT) mitigation; (3) Ground traffic control in low visibility; (4) Enhanced airport security. TRW leads a consortium which began flight tests with the demonstration PMMW camera in September 1997. Flight testing will continue in 1998. We discuss the characteristics of PMMW images, the current state of the technology, the integration of the camera with other flight avionics to form an enhanced vision system, and other aviation applications.

  3. Camera/Video Phones in Schools: Law and Practice

    ERIC Educational Resources Information Center

    Parry, Gareth

    2005-01-01

    The emergence of mobile phones with built-in digital cameras is creating legal and ethical concerns for school systems throughout the world. Users of such phones can instantly email, print or post pictures to other MMS1 phones or websites. Local authorities and schools in Britain, Europe, USA, Canada, Australia and elsewhere have introduced…

  4. BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    NASA Technical Reports Server (NTRS)

    Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

  5. High-Speed Photography

    SciTech Connect

    Paisley, D.L.; Schelev, M.Y.

    1998-08-01

    The applications of high-speed photography to a diverse set of subjects including inertial confinement fusion, laser surgical procedures, communications, automotive airbags, lightning, etc. are briefly discussed. (AIP) © 1998 Society of Photo-Optical Instrumentation Engineers.

  6. High Speed Research Program

    NASA Technical Reports Server (NTRS)

    Anderson, Robert E.; Corsiglia, Victor R.; Schmitz, Frederic H. (Technical Monitor)

    1994-01-01

    An overview of the NASA High Speed Research Program will be presented from a NASA Headquarters perspective. The presentation will include the objectives of the program and an outline of major programmatic issues.

  7. Remote sensing applications with NH hyperspectral portable video camera

    NASA Astrophysics Data System (ADS)

    Takara, Yohei; Manago, Naohiro; Saito, Hayato; Mabuchi, Yusaku; Kondoh, Akihiko; Fujimori, Takahiro; Ando, Fuminori; Suzuki, Makoto; Kuze, Hiroaki

    2012-11-01

    Recent advances in image sensor and information technologies have enabled the development of small hyperspectral imaging systems. EBA JAPAN (Tokyo, Japan) has developed novel grating-based, portable hyperspectral imaging cameras, the NH-1 and NH-7, which can acquire a 2D spatial image (640 x 480 and 1280 x 1024 pixels, respectively) with a single exposure using an internal self-scanning system. The imagers cover a wavelength range of 350 - 1100 nm, with a spectral resolution of 5 nm. Because of their low weight of 750 g, the NH camera systems can easily be installed on a small UAV platform. We show results from the analysis of data obtained in remote sensing applications, including land vegetation and atmospheric monitoring from both ground- and airborne/UAV-based observations.

  8. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  9. High-Speed Observer: Automated Streak Detection for the Aerospike Engine

    NASA Technical Reports Server (NTRS)

    Rieckhoff, T. J.; Covan, M. A.; OFarrell, J. M.

    2001-01-01

    A high-frame-rate digital video camera, installed on test stands at Stennis Space Center (SSC), has been used to capture images of the aerospike engine plume during test. These plume images are processed in real time to detect and differentiate anomalous plume events. Results indicate that the High-Speed Observer (HSO) system can detect anomalous plume streaking events that are indicative of aerospike engine malfunction.
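
    The report does not spell out the detection algorithm; the fragment below is a hedged sketch of one generic possibility, flagging frames whose plume region deviates strongly from a running-average reference image.

    ```python
    # Generic frame-differencing anomaly detector (illustrative only, not the HSO algorithm).
    import numpy as np

    def detect_anomalies(frames, roi, z_thresh=5.0, alpha=0.05):
        """frames: iterable of 2-D grayscale arrays; roi: (rows, cols) slices of the plume."""
        ref, var = None, None
        flagged = []
        for i, frame in enumerate(frames):
            region = frame[roi].astype(float)
            if ref is None:                       # initialize the reference from the first frame
                ref, var = region.copy(), np.ones_like(region)
                continue
            z = np.abs(region - ref) / np.sqrt(var + 1e-6)
            if (z > z_thresh).mean() > 0.01:      # flag if >1 % of plume pixels are anomalous
                flagged.append(i)
            ref = (1 - alpha) * ref + alpha * region               # running mean
            var = (1 - alpha) * var + alpha * (region - ref) ** 2  # running variance
        return flagged
    ```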

  10. Development of low-noise high-speed analog ASIC for X-ray CCD cameras and wide-band X-ray imaging sensors

    NASA Astrophysics Data System (ADS)

    Nakajima, Hiroshi; Hirose, Shin-nosuke; Imatani, Ritsuko; Nagino, Ryo; Anabuki, Naohisa; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Kitamura, Hisashi; Uchihori, Yukio

    2016-09-01

    We report on the development and performance evaluation of the mixed-signal Application Specific Integrated Circuit (ASIC) developed for the signal processing of onboard X-ray CCD cameras and various types of X-ray imaging sensors in astrophysics. Quick, low-noise readout is essential for pile-up-free imaging spectroscopy with a future X-ray telescope. Our goal is a readout noise of 5 e- (rms) at a pixel rate of 1 Mpix/s, about 10 times faster than those of currently operating detectors. We successfully developed a low-noise ASIC as the front-end electronics of the Soft X-ray Imager onboard Hitomi, launched on February 17, 2016. However, it has two analog-to-digital converters per chain because of its limited processing speed, so the gain difference must be corrected to obtain X-ray spectra. Furthermore, its input-equivalent noise performance is not satisfactory (> 100 μV) at pixel rates higher than 500 kpix/s. We therefore upgraded the ASIC design with fourth-order ΔΣ modulators to enhance its inherent noise-shaping performance. Its performance was measured using pseudo-CCD signals at variable processing speeds. Although its input-equivalent noise is comparable with that of the conventional design, the integrated non-linearity (0.1%) improves to about half that of the conventional design. The radiation tolerance was also measured with regard to the total ionizing dose effect and single-event latch-up, using protons and xenon ions, respectively. The former experiment shows that none of the performance parameters change after a dose corresponding to 590 years in a low Earth orbit. We also place an upper limit on the latch-up frequency of once per 48 years.
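
    For reference, the noise shaping that motivates a fourth-order ΔΣ modulator can be summarized by the textbook noise transfer function of an Nth-order modulator (a generic form, not the specific ASIC design):

    ```latex
    % Generic N-th order Delta-Sigma noise transfer function; quantization noise is
    % pushed toward high frequencies and removed by the downstream decimation filter.
    \mathrm{NTF}(z) = \left(1 - z^{-1}\right)^{N}, \qquad
    \bigl|\mathrm{NTF}(e^{j 2\pi f/f_s})\bigr| = \left(2\sin\frac{\pi f}{f_s}\right)^{N}, \quad N = 4
    ```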

  11. Cost-effective multi-camera array for high quality video with very high dynamic range

    NASA Astrophysics Data System (ADS)

    Keinert, Joachim; Wetzel, Marcus; Schöberl, Michael; Schäfer, Peter; Zilly, Frederik; Bätz, Michel; Fößel, Siegfried; Kaup, André

    2014-03-01

    Temporal bracketing can create images with a higher dynamic range than the underlying sensor. Unfortunately, moving objects cause disturbing artifacts. Moreover, the combination with high frame rates is almost unachievable, since a single video frame requires multiple sensor readouts. The combination of multiple synchronized side-by-side cameras equipped with different attenuation filters promises a remedy, since all exposures can be performed at the same time with the same duration using the playout video frame rate. However, a disparity correction is needed to compensate for the spatial displacement of the cameras. Unfortunately, the requirements for a high-quality disparity correction contradict the goal of increasing dynamic range. When using two cameras, disparity correction needs objects to be properly exposed in both cameras. In contrast, a dynamic range increase needs the cameras to capture different luminance ranges. As this contradiction has not been addressed in the literature so far, this paper proposes a novel solution based on a three-camera setup. It enables accurate determination of the disparities and an increase of the dynamic range by nearly a factor of two while still limiting costs. Compared to a two-camera solution, the mean opinion score (MOS) is improved by 13.47 units on average for the Middlebury images.

  12. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Ridge 18-20°S, where hydrothermal plume signatures were previously perceived. DIDSON was mounted on top of Shinkai6500 in order to obtain acoustic video images of hydrothermal plumes. In this cruise, seven dives of Shinkai6500 were conducted, and acoustic video images of the hydrothermal plumes were captured in three of the seven dives. These are among the very few acoustic video images of hydrothermal plumes. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

  13. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    ERIC Educational Resources Information Center

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
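
    The wagon-wheel illusion referred to in the title follows directly from the sampling theorem: a marker rotating at f_rot turns per second, filmed at f_s frames per second, appears to rotate at the aliased rate. A small worked example (a standard result, not specific to the article):

    ```python
    # Apparent rotation rate of a marked wheel sampled at a finite frame rate.
    def apparent_rotation(f_rot, f_s):
        """Return the apparent rotation rate in turns/s; negative means backwards."""
        aliased = f_rot % f_s
        return aliased - f_s if aliased > f_s / 2 else aliased

    # At 30 fps: 25 turns/s looks like -5 (backwards), 30 turns/s looks stationary.
    for f_rot in (5, 25, 29, 30, 31):
        print(f"true {f_rot:2d} turns/s -> apparent {apparent_rotation(f_rot, 30):+.0f} turns/s at 30 fps")
    ```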

  14. Digital Video Cameras for Brainstorming and Outlining: The Process and Potential

    ERIC Educational Resources Information Center

    Unger, John A.; Scullion, Vicki A.

    2013-01-01

    This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…

  15. Field-based high-speed imaging of explosive eruptions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Scarlato, P.; Freda, C.; Moroni, M.

    2012-12-01

    Explosive eruptions involve, by definition, physical processes that are highly dynamic over short time scales. Capturing and parameterizing such processes is a major task in eruption understanding and forecasting, and a task that necessarily requires observational systems capable of high sampling rates. Seismic and acoustic networks are a prime tool for high-frequency observation of eruptions, recently joined by Doppler radar and electric sensors. In comparison with the above monitoring systems, imaging techniques provide more complete and direct information on surface processes, but usually at a lower sampling rate. However, recent developments in high-speed imaging systems now allow such information to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS: FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480-120x640 pixel resolution; and 3) two acoustic to infrasonic sensors. All instruments are time-synchronized via a data-logging system, a hand- or software-operated trigger, and GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or to be readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded in high-speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafiallajokull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclasts ejection. High-speed

  16. Video content analysis on body-worn cameras for retrospective investigation

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  17. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    SciTech Connect

    Castleberry, J.L.

    1994-09-19

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock, which shuts down all the electronics inside the 101-AW vapor space during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak-checking all fittings on the 101-AW Purge Panel using a Snoop solution and resolving any leakage. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition, the green light (PRESSURIZED), located on the Purge Control Panel, is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, that the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, and associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss-of-purge operation, the illumination of the amber light (PURGE FAILED) will be verified.

  18. Introducing a New High-Speed Imaging System for Measuring Raindrop Characteristics

    NASA Astrophysics Data System (ADS)

    Testik, F. Y.; Rahman, K.

    2013-12-01

    Here we present a new high-speed imaging system that we have developed for measuring rainfall microphysical quantities. This optical disdrometer system is capable of providing raindrop characteristics including drop diameter, fall velocity and acceleration, shape, and axis ratio. The main components of the system are a high-speed video camera capable of capturing 1000 frames per second, an LED light, a sensor unit to detect raindrops passing through the camera view frame, and a three-dimensional ultrasonic anemometer to measure the wind velocity. The entire imaging system is operated and synchronized using a LabView code developed in-house. In this system, the camera points at the LED light and records the silhouettes of the backlit drops. Because digital storage limitations do not allow high-speed camera systems to record continuously for more than several seconds, we utilized a sensor system that triggers the camera when a raindrop is detected within the camera view frame at the focal plane. With the trigger signal, the camera records a predefined number of frames to the built-in storage space of the camera head. The images are downloaded to a computer for processing and storage once the rain event is over or the built-in storage space is full. The anemometer data are recorded continuously to the computer. The downloaded sharp, sequential raindrop images are digitally processed using a computer code developed in-house, which outputs accurate information on various raindrop characteristics (e.g., drop diameter, shape, axis ratio, fall velocity, and drop size distribution). The new high-speed imaging system has been laboratory tested using high-precision spherical lenses with known diameters and also field tested under real rain events. The results of these tests will also be presented. This new imaging system was developed as part of a National Science Foundation grant (NSF Award # 1144846) to study raindrop characteristics and is expected to be an
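
    The in-house processing code is not described in detail; as a hypothetical illustration, a drop's fall velocity can be recovered from the silhouette centroids in consecutive frames given the frame rate and the pixel scale (all parameter values below are made up).

    ```python
    # Hypothetical fall-velocity estimate from consecutive silhouette centroids.
    import numpy as np

    def fall_velocity(centroids_px, fps=1000.0, mm_per_px=0.1):
        """centroids_px: (N, 2) array of (row, col) drop centroids in successive frames."""
        c = np.asarray(centroids_px, dtype=float)
        dy_px = np.diff(c[:, 0])                 # vertical displacement per frame, pixels
        v_mm_per_s = dy_px * mm_per_px * fps     # convert to mm/s
        return v_mm_per_s.mean() / 1000.0        # average velocity in m/s

    # Example: a drop falling 60 px per frame at 1000 fps and 0.1 mm/px -> 6.0 m/s
    centroids = [(100 + 60 * i, 240) for i in range(5)]
    print(f"{fall_velocity(centroids):.1f} m/s")
    ```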

  19. Potential of a newly developed high-speed near-infrared (NIR) camera (Compovision) in polymer industrial analyses: monitoring crystallinity and crystal evolution of polylactic acid (PLA) and concentration of PLA in PLA/Poly-(R)-3-hydroxybutyrate (PHB) blends.

    PubMed

    Ishikawa, Daitaro; Nishii, Takashi; Mizuno, Fumiaki; Sato, Harumi; Kazarian, Sergei G; Ozaki, Yukihiro

    2013-12-01

    This study was carried out to evaluate a new high-speed hyperspectral near-infrared (NIR) camera named Compovision. Quantitative analyses of the crystallinity and crystal evolution of a biodegradable polymer, polylactic acid (PLA), and of its concentration in PLA/poly-(R)-3-hydroxybutyrate (PHB) blends were carried out using NIR imaging. This NIR camera can measure two-dimensional NIR spectral data in the 1000-2350 nm region, obtaining images with a wide field of view of 150 × 250 mm² (approximately 100,000 pixels) at high speed (in less than 5 s). PLA samples with crystallinities between 0 and 50%, PLA/PHB blends in ratios of 80/20, 60/40, 40/60, and 20/80, and pure films of 100% PLA and PHB were prepared. Compovision was used to collect the respective NIR spectra in the 1000-2350 nm region and to investigate the crystallinity of PLA and its concentration in the blends. Partial least squares (PLS) regression models for the crystallinity of PLA were developed using absorbance, second-derivative, and standard normal variate (SNV) spectra from the most informative region of the spectra, between 1600 and 2000 nm. The predictions of the PLS models built on the absorbance and second-derivative spectra were fairly good, with a root mean square error (RMSE) of less than 6.1% and a coefficient of determination (R²) of more than 0.88 for PLS factor 1. The models built on the SNV spectra yielded the best predictions, with the smallest RMSE of 2.93% and the highest R² of 0.976. Moreover, PLS models developed for estimating the concentration of PLA in the blend polymers using SNV spectra gave good predictions, with an RMSE of 4.94% and an R² of 0.98. The SNV-based models provided the best predictions because SNV can reduce the effects of spectral changes induced by the inhomogeneity and thickness of the samples. Wide-area crystal evolution of PLA on a plate where a temperature gradient of 70-105 °C had been applied was also
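
    The preprocessing-plus-regression chain described above (SNV followed by a one-factor PLS model) can be sketched with scikit-learn as follows; the data and hyperparameters are placeholders, not the study's spectra.

    ```python
    # Sketch of SNV preprocessing followed by PLS regression (illustrative data only).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, r2_score

    def snv(spectra):
        """Standard normal variate: centre and scale each spectrum individually."""
        X = np.asarray(spectra, dtype=float)
        return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

    # X: NIR spectra restricted to ~1600-2000 nm; y: crystallinity (%) per sample
    X = np.random.rand(200, 400)          # placeholder data, 200 samples x 400 wavelengths
    y = np.random.rand(200) * 50.0        # placeholder crystallinity values, 0-50 %

    X_train, X_test, y_train, y_test = train_test_split(snv(X), y, random_state=0)
    pls = PLSRegression(n_components=1).fit(X_train, y_train)   # one PLS factor
    y_pred = pls.predict(X_test).ravel()
    print(f"RMSE = {mean_squared_error(y_test, y_pred) ** 0.5:.2f} %, "
          f"R2 = {r2_score(y_test, y_pred):.3f}")
    ```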

  20. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  1. Use of video laryngoscopy and camera phones to communicate progression of laryngeal edema in assessing for extubation: a case series.

    PubMed

    Newmark, Jordan L; Ahn, Young K; Adams, Mark C; Bittner, Edward A; Wilcox, Susan R

    2013-01-01

    Video laryngoscopy has demonstrated utility in airway management. For the present case series, we report the use of video laryngoscopy to evaluate the airway of critically ill, mechanically ventilated patients, as a means to reduce the risk of immediate postextubation stridor by assessing the degree of laryngeal edema. We also describe the use of cellular phone cameras to document and communicate airway edema in using video laryngoscopy for the patients' medical records. We found video laryngoscopy to be an effective method of assessing airway edema, and cellular phone cameras were useful for recording and documenting video laryngoscopy images for patients' medical records.

  2. High speed door assembly

    DOEpatents

    Shapiro, Carolyn

    1993-01-01

    A high speed door assembly, comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly to close over a 3 foot diameter opening within 50 milliseconds of actuation, to contain hazardous materials and vapors within a test fixture.

  3. High speed door assembly

    DOEpatents

    Shapiro, C.

    1993-04-27

    A high speed door assembly is described, comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly to close over a 3 foot diameter opening within 50 milliseconds of actuation, to contain hazardous materials and vapors within a test fixture.

  4. High speed civil transport

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1992-01-01

    The design requirements of the High Speed Civil Transport (HSCT) are discussed. The following design concerns are presented: (1) environmental impact (emissions and noise); (2) critical components (the high temperature combustor and the lightweight exhaust nozzle); and (3) advanced materials (high temperature ceramic matrix composites (CMC's)/intermetallic matrix composites (IMC's)/metal matrix composites (MMC's)).

  5. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127

  6. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras.

    PubMed

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
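
    A minimal full-search block-matching routine of the kind referred to above looks roughly as follows (illustrative only; the block and search-window sizes are arbitrary and the paper's compressive-sensing recovery step is omitted).

    ```python
    # Illustrative full-search block matching between two grayscale frames.
    import numpy as np

    def block_match(cur, ref, block=8, search=4):
        """Return per-block (dy, dx) motion vectors from ref to cur."""
        h, w = cur.shape
        vectors = np.zeros((h // block, w // block, 2), dtype=int)
        for bi, y in enumerate(range(0, h - block + 1, block)):
            for bj, x in enumerate(range(0, w - block + 1, block)):
                target = cur[y:y + block, x:x + block]
                best, best_cost = (0, 0), np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - block and 0 <= xx <= w - block:
                            cand = ref[yy:yy + block, xx:xx + block]
                            cost = np.abs(target - cand).sum()   # sum of absolute differences
                            if cost < best_cost:
                                best, best_cost = (dy, dx), cost
                vectors[bi, bj] = best
        return vectors
    ```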

  7. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
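
    The abstract describes a hardware pipeline; as a rough software analogue only, the sketch below merges three differently exposed frames into a radiance map with a simple triangular weighting. A linear sensor response and known exposure times are assumptions made here for illustration, not details taken from the paper.

        import numpy as np

        def merge_hdr(frames, exposure_times):
            """Merge differently exposed 8-bit frames into one radiance map using a
            triangular weight that favours well-exposed (mid-range) pixels."""
            acc = np.zeros(frames[0].shape, dtype=np.float64)
            weights = np.zeros_like(acc)
            for img, t in zip(frames, exposure_times):
                z = img.astype(np.float64)
                w = 1.0 - np.abs(z - 127.5) / 127.5          # 0 at the extremes, 1 mid-range
                acc += w * (np.log1p(z) - np.log(t))         # log-radiance, linear response assumed
                weights += w
            return np.exp(acc / np.maximum(weights, 1e-6))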

  8. Comparison of Kodak Professional Digital Camera System images to conventional film, still video, and freeze-frame images

    NASA Astrophysics Data System (ADS)

    Kent, Richard A.; McGlone, John T.; Zoltowski, Norbert W.

    1991-06-01

    Electronic cameras provide near real time image evaluation with the benefits of digital storage methods for rapid transmission or computer processing and enhancement of images. But how does the image quality of their images compare to that of conventional film? A standard Nikon F-3TM 35 mm SLR camera was transformed into an electro-optical camera by replacing the film back with Kodak's KAF-1400V (or KAF-1300L) megapixel CCD array detector back and a processing accessory. Images taken with these Kodak electronic cameras were compared to those using conventional films and to several still video cameras. Quantitative and qualitative methods were used to compare images from these camera systems. Images captured on conventional video analog systems provide a maximum of 450 - 500 TV lines of resolution depending upon the camera resolution, storage method, and viewing system resolution. The Kodak Professional Digital Camera SystemTM exceeded this resolution and more closely approached that of film.

  9. A digital TV system for the detection of high speed human motion

    NASA Astrophysics Data System (ADS)

    Fang, R. C.

    1981-08-01

Two array cameras and a force plate were linked to a PDP-11/34 minicomputer for on-line recording of high speed human motion. A microprocessor-based interface system was constructed to allow preprocessing and coordination of the video data before they are transferred to the minicomputer. Control programs of the interface system are stored on disk and loaded into the program storage areas of the microprocessor before the interface system starts its operation. Software programs for collecting and processing video and force data have been written. Experiments on the detection of human jumping have been carried out. Normal gait and amputee gait have also been recorded and analyzed.

  10. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    SciTech Connect

    Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

    2006-09-05

Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed, and successful applications from previous surveys of fish under the ice are described. These include drilling ice holes, selecting batteries and generators, deploying pan and tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.
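
    One of the techniques listed, paired lasers of known separation used to size fish, reduces to a simple scale calculation. The sketch below is a hypothetical illustration that assumes the fish lies in the plane of the two laser dots; the 0.10 m spacing is an assumed value, not one taken from the paper.

        def fish_length_from_lasers(fish_px, laser_px, laser_spacing_m=0.10):
            """Estimate fish length from an image in which two parallel laser dots of
            known real-world spacing appear at the fish's range.
            fish_px: apparent fish length in pixels; laser_px: dot separation in pixels."""
            metres_per_pixel = laser_spacing_m / laser_px
            return fish_px * metres_per_pixel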

  11. High speed imager test station

    DOEpatents

    Yates, George J.; Albright, Kevin L.; Turko, Bojan T.

    1995-01-01

A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.

  12. High speed imager test station

    DOEpatents

    Yates, G.J.; Albright, K.L.; Turko, B.T.

    1995-11-14

A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.

  13. High speed door assembly

    SciTech Connect

    Shapiro, C.

    1991-12-31

This invention is a high speed door assembly comprising an actuator cylinder and piston rods, a pressure supply cylinder and fittings, an electrically detonated explosive bolt, a honeycomb structured door, a honeycomb structured decelerator, and a structural steel frame encasing the assembly. The assembly closes over a 3-foot-diameter opening within 50 milliseconds of actuation to contain hazardous materials and vapors within a test fixture.

  14. A Novel Method to Reduce Time Investment When Processing Videos from Camera Trap Studies

    PubMed Central

    Swinnen, Kristijn R. R.; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead empty recordings or other species (together non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movements than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step in the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs. PMID:24918777
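
    As an illustration of the variation-based filter described above (not the authors' implementation), the sketch below scores a clip by its mean inter-frame pixel change and flags low-variation clips as likely non-target recordings; the threshold is an arbitrary placeholder that would need tuning to the site and camera.

        import numpy as np

        def mean_frame_variation(frames, threshold=30.0):
            """Return a clip's mean per-pixel inter-frame change and a keep flag.
            frames: sequence of grayscale frames; clips below `threshold` are
            candidates for discarding as empty or small-animal recordings."""
            diffs = [np.abs(b.astype(int) - a.astype(int)).mean()
                     for a, b in zip(frames[:-1], frames[1:])]
            variation = float(np.mean(diffs)) if diffs else 0.0
            return variation, variation >= threshold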

  15. ARINC 818 adds capabilities for high-speed sensors and systems

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Grunwald, Paul

    2014-06-01

ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A, and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, due to its high bandwidth and high reliability. The ARINC 818 specification, initially released in 2006, has recently undergone a major update that enhances its applicability as a high-speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions to the specification include: video switching, stereo and 3-D provisions, color sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates to 32 Gbps, synchronization signals, options for high-speed coax interfaces, and optical interface details. The additions to the specification are especially appealing for high-bandwidth, multi-sensor systems that have issues with throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber-optic high-speed physical layers and allows for time multiplexing multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.

  16. A passive terahertz video camera based on lumped element kinetic inductance detectors.

    PubMed

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)--designed originally for far-infrared astronomy--as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  17. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m³/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  18. Using a video camera to measure the radius of the Earth

    NASA Astrophysics Data System (ADS)

    Carroll, Joshua; Hughes, Stephen

    2013-11-01

    A simple but accurate method for measuring the Earth’s radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of the sidereal day were used to calculate the radius of the Earth. The radius was measured as 6394.3 ± 118 km, which is within 1.8% of the accepted average value of 6371 km and well within the experimental error. The experiment is suitable as a high school or university project and should produce a value for Earth’s radius within a few per cent at latitudes towards the equator, where at some times of the year the ecliptic is approximately normal to the horizon.
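
    As a compact statement of the geometry behind the measurement, and under the simplifying assumptions that the shadow is timed from street level to a height h and that the terminator rises vertically (ecliptic normal to the horizon), the relation below links the building height, the rise time t and the sidereal day T; the symbols and the numeric check are introduced here only for illustration.

        \cos\theta = \frac{R}{R+h} \;\Rightarrow\; \theta \approx \sqrt{\frac{2h}{R}},
        \qquad \theta = \frac{2\pi t}{T}
        \;\Rightarrow\; R \approx \frac{h\,T^{2}}{2\pi^{2} t^{2}}

    For example, with an assumed h = 50 m, T = 86164 s and a rise time of roughly 54 s, this gives R on the order of 6.4 x 10^3 km, consistent with the value reported above.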

  19. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
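
    A minimal sketch of how a hole might be located in a digitized target frame is given below; the intensity-weighted centroid of dark pixels is an assumption about the approach, not the system's documented algorithm, and the pixel-space result would still need the calibration described above to be converted to microns at the target.

        import numpy as np

        def hole_centroid(image, dark_threshold=60):
            """Locate a bullet hole as the intensity-weighted centroid of pixels darker
            than a threshold; returns (x, y) in pixel coordinates, or None if no hole."""
            mask = image < dark_threshold
            if not mask.any():
                return None
            ys, xs = np.nonzero(mask)
            w = float(dark_threshold) - image[mask].astype(float)  # darker pixels weigh more
            return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())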

  20. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230

    SciTech Connect

    Pardini, A.F., Westinghouse Hanford

    1996-07-16

The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate.

  1. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-06-25

This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.
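
    Step (iii) can be pictured with a toy example: once the calibration-based depth estimate orders two tracked objects in depth, the part of the farther object's bounding box covered by the nearer one can be flagged as occluded. The sketch below is only that simplified illustration, not the paper's algorithm.

        def occluded_region(box_near, box_far):
            """Given axis-aligned boxes (x1, y1, x2, y2) for two tracked objects, with
            box_near estimated closer to the camera, return the overlap rectangle
            (the occluded part of the farther object), or None if they do not overlap."""
            x1, y1 = max(box_near[0], box_far[0]), max(box_near[1], box_far[1])
            x2, y2 = min(box_near[2], box_far[2]), min(box_near[3], box_far[3])
            if x1 >= x2 or y1 >= y2:
                return None
            return (x1, y1, x2, y2)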

  2. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  3. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  4. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  5. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    SciTech Connect

    Strehlow, J.P.

    1994-08-24

A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  6. High Speed Vortex Flows

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Wilcox, Floyd J., Jr.; Bauer, Steven X. S.; Allen, Jerry M.

    2000-01-01

A review of the research conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) into high-speed vortex flows during the 1970s, 1980s, and 1990s is presented. The data reviewed are for flat plates, cavities, bodies, missiles, wings, and aircraft, and are presented and discussed relative to the design of future vehicles. Also presented is a brief historical review of the extensive body of high-speed vortex flow research from the 1940s to the present, to provide perspective on NASA LaRC's high-speed research results. Data are presented which show the types of vortex structures that occur at supersonic speeds, and the impact of these flow structures on vehicle performance and control is discussed. The data presented show the presence of both small- and large-scale vortex structures for a variety of vehicles, from missiles to transports. For cavities, the data show that very complex multiple vortex structures exist at all combinations of cavity depth-to-length ratio and Mach number. The data for missiles show the existence of very strong interference effects between body and/or fin vortices and the downstream fins; these vortex flow interference effects can be both positive and negative. Data are shown which highlight the effects of leading-edge sweep, leading-edge bluntness, wing thickness, location of maximum thickness, and camber on the aerodynamics of and flow over delta wings. The observed flow fields for delta wings (i.e., separation bubble, classical vortex, vortex with shock, etc.) are discussed in the context of aircraft design, and data are shown indicating that aerodynamic performance improvements are available by treating vortex flows as a primary design feature. Finally, a design approach for wings that utilize vortex flows for improved aerodynamic performance at supersonic speeds is presented.

  7. High speed flywheel

    DOEpatents

    McGrath, Stephen V.

    1991-01-01

A flywheel for operation at high speeds utilizes two or more ringlike components arranged in a spaced concentric relationship for rotation about an axis and an expansion device interposed between the components for accommodating radial growth of the components resulting from flywheel operation. The expansion device engages both of the ringlike components, and the structure of the expansion device ensures that it maintains its engagement with the components. In addition to its expansion-accommodating capacity, the expansion device also maintains flywheel stiffness during flywheel operation.

  8. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
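
    A sketch of the error computation described above (known grid coordinates versus digitized coordinates) might look like the following; expressing the worst case as a percentage of the grid's diagonal extent is an assumption about how a figure such as the 8 percent above could be normalized.

        import numpy as np

        def distortion_errors(known_xy, measured_xy):
            """Per-point distance between known grid coordinates and digitized
            coordinates, plus the worst case as a percentage of the grid diagonal."""
            known = np.asarray(known_xy, dtype=float)
            meas = np.asarray(measured_xy, dtype=float)
            err = np.linalg.norm(meas - known, axis=1)
            diagonal = np.linalg.norm(known.max(axis=0) - known.min(axis=0))
            return err, 100.0 * err.max() / diagonal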

  9. Video camera observation for assessing overland flow patterns during rainfall events

    NASA Astrophysics Data System (ADS)

    Silasari, Rasmiaditya; Oismüller, Markus; Blöschl, Günter

    2015-04-01

Physically based hydrological models have been widely used in various studies to model overland flow propagation in cases such as flood inundation and dam break flow. The capability of such models to simulate the formation of overland flow by spatial and temporal discretization of the empirical equations makes it possible for hydrologists to trace overland flow generation both spatially and temporally across surface and subsurface domains. As the upscaling methods that transform hydrological process spatial patterns from the small observed scale to the larger catchment scale are still being progressively developed, physically based hydrological models have become a convenient tool to assess the patterns and the behaviors crucial in determining the upscaling process. Related studies in the past have successfully used these models, as well as field observation data, for model verification. The common observation data used for this verification are overland flow discharge during natural rainfall events and camera observations during synthetic events (staged field experiments), while the use of camera observations during natural events is hardly discussed in publications. This study explores the potential of video camera observations of overland flow generation during natural rainfall events to support physically based hydrological model verification and the assessment of overland flow spatial patterns. The study is conducted within a 64 ha catchment located at Petzenkirchen, Lower Austria, known as HOAL (Hydrological Open Air Laboratory). The catchment land cover is dominated by arable land (87%), with small portions (13%) of forest, pasture and paved surfaces. A 600 m stream runs in the southeast of the catchment, flowing southward, and is equipped with flumes and pressure transducers measuring water level at one-minute intervals from various inlets along the stream (i.e. drainages, surface runoffs, springs), from which flow discharge is calculated. A

  10. High speed transient sampler

    DOEpatents

    McEwan, T.E.

    1995-11-28

    A high speed sampler comprises a meandered sample transmission line for transmitting an input signal, a straight strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates along the transmission lines. The sampling gates comprise a four terminal diode bridge having a first strobe resistor connected from a first terminal of the bridge to the positive strobe line, a second strobe resistor coupled from the third terminal of the bridge to the negative strobe line, a tap connected to the second terminal of the bridge and to the sample transmission line, and a sample holding capacitor connected to the fourth terminal of the bridge. The resistance of the first and second strobe resistors is much higher than the signal transmission line impedance in the preferred system. This results in a sampling gate which applies a very small load on the sample transmission line and on the strobe generator. The sample holding capacitor is implemented using a smaller capacitor and a larger capacitor isolated from the smaller capacitor by resistance. The high speed sampler of the present invention is also characterized by other optimizations, including transmission line tap compensation, stepped impedance strobe line, a multi-layer physical layout, and unique strobe generator design. A plurality of banks of such samplers are controlled for concatenated or interleaved sample intervals to achieve long sample lengths or short sample spacing. 17 figs.

  11. High speed transient sampler

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A high speed sampler comprises a meandered sample transmission line for transmitting an input signal, a straight strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates along the transmission lines. The sampling gates comprise a four terminal diode bridge having a first strobe resistor connected from a first terminal of the bridge to the positive strobe line, a second strobe resistor coupled from the third terminal of the bridge to the negative strobe line, a tap connected to the second terminal of the bridge and to the sample transmission line, and a sample holding capacitor connected to the fourth terminal of the bridge. The resistance of the first and second strobe resistors is much higher than the signal transmission line impedance in the preferred system. This results in a sampling gate which applies a very small load on the sample transmission line and on the strobe generator. The sample holding capacitor is implemented using a smaller capacitor and a larger capacitor isolated from the smaller capacitor by resistance. The high speed sampler of the present invention is also characterized by other optimizations, including transmission line tap compensation, stepped impedance strobe line, a multi-layer physical layout, and unique strobe generator design. A plurality of banks of such samplers are controlled for concatenated or interleaved sample intervals to achieve long sample lengths or short sample spacing.

  12. High speed civil transport

    NASA Technical Reports Server (NTRS)

    Bogardus, Scott; Loper, Brent; Nauman, Chris; Page, Jeff; Parris, Rusty; Steinbach, Greg

    1990-01-01

    The design process of the High Speed Civil Transport (HSCT) combines existing technology with the expectation of future technology to create a Mach 3.0 transport. The HSCT was designed to have a range in excess of 6000 nautical miles and carry up to 300 passengers. This range will allow the HSCT to service the economically expanding Pacific Basin region. Effort was made in the design to enable the aircraft to use conventional airports with standard 12,000 foot runways. With a takeoff thrust of 250,000 pounds, the four supersonic through-flow engines will accelerate the HSCT to a cruise speed of Mach 3.0. The 679,000 pound (at takeoff) HSCT is designed to cruise at an altitude of 70,000 feet, flying above most atmospheric disturbances.

  13. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera.

    PubMed

    Miyamoto, Shimpei

    2016-06-01

    Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon's perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon's perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504

  14. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera

    PubMed Central

    2016-01-01

    Summary: Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon’s perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon’s perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504

  15. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera.

    PubMed

    Miyamoto, Shimpei

    2016-06-01

    Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon's perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon's perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection.

  16. High-sensitive thermal video camera with self-scanned 128 InSb linear array

    NASA Astrophysics Data System (ADS)

    Fujisada, Hiroyuki

    1991-12-01

A compact thermal video camera with very high sensitivity has been developed using a self-scanned 128-element InSb linear photodiode array. Two-dimensional images are formed by the self-scanning function of the linear array focal plane assembly in the horizontal direction and by a vibration mirror in the vertical direction. Images of 128 X 128 pixels are obtained every 1/30 second. A small InSb detector array with a total length of 7.68 mm is utilized in order to build the compact system. In addition, special consideration is given to the configuration of the optics, vibration mirror, and focal plane assembly. Real-time signal processing by a microprocessor is carried out to compensate for the inhomogeneous sensitivities and irradiances of each detector. The standard NTSC TV format is employed for output video signals. The thermal video camera developed had a very high radiometric sensitivity. Minimum resolvable temperature difference (MRTD) is estimated at about 0.02 K for a 300 K target. Stable operation is possible without a blackbody reference, because of very small stray radiation.

  17. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

In order to estimate the speed of a moving vehicle from side view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked across at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to obtain their absolute values. This transformation requires image-to-object space information in a mathematical sense, which is obtained from the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems of using side view camera images mentioned here. PMID:22399909
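
    Collapsing the two steps into code, a minimal sketch might look like the one below; the single metres-per-pixel scale factor is a deliberate simplification standing in for the full calibration and orientation transform the paper relies on, and all names are illustrative.

        import numpy as np

        def estimate_speed(points_frame1, points_frame2, dt, metres_per_pixel):
            """Mean speed of tracked points between two frames.
            points_*: (N, 2) pixel coordinates of the same points in successive frames;
            dt: time between frames in seconds; metres_per_pixel: image-to-object scale."""
            p1 = np.asarray(points_frame1, dtype=float)
            p2 = np.asarray(points_frame2, dtype=float)
            pixel_disp = np.linalg.norm(p2 - p1, axis=1)      # displacement in pixels
            speeds = pixel_disp * metres_per_pixel / dt       # metres per second
            return float(speeds.mean())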

  18. High-speed semiconductor devices

    NASA Astrophysics Data System (ADS)

    Sze, S. M.

    An introduction to the physical principles and operational characteristics of high-speed semiconductor devices is presented. Consideration is given to materials and technologies for high-speed devices, device building blocks, the submicron MOSFET, homogeneous field-effect transistors, and heterostructure field-effect transistors. Also considered are quantum-effect devices, microwave diodes, and high-speed photonic devices.

  19. High Speed Photographic Studies Of Rocket Engine Combustion

    NASA Astrophysics Data System (ADS)

    Uyemura, Tsuneyoshi; Ozono, Shigeo; Mizunuma, Toshio; Yamamoto, Yoshitaka; Kikusato, Yutaka; Eiraku, Masamitsu; Uchida, Yubu

    1983-03-01

High speed cameras were used to develop a new sounding rocket motor and to check the safety operation system. The new rocket motor was designed as a single-stage rocket with greater power than the multi-stage K-9M rocket motor. The test combustion of this new type of rocket engine was photographed by the high speed cameras to analyze the burning process. On the outside of the rocket chamber, the cable connecting the detector at the engine nozzle with the telemetry system was fixed in place. To check the thermal influence of the combustion flame on the cable, thermo-tapes and high speed cameras were used. The safety operation system was tested and photographed with high speed cameras using a S0-1510 model rocket.

  20. High speed packet switching

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This document constitutes the final report prepared by Proteon, Inc. of Westborough, Massachusetts under contract NAS 5-30629 entitled High-Speed Packet Switching (SBIR 87-1, Phase 2) prepared for NASA-Greenbelt, Maryland. The primary goal of this research project is to use the results of the SBIR Phase 1 effort to develop a sound, expandable hardware and software router architecture capable of forwarding 25,000 packets per second through the router and passing 300 megabits per second on the router's internal busses. The work being delivered under this contract received its funding from three different sources: the SNIPE/RIG contract (Contract Number F30602-89-C-0014, CDRL Sequence Number A002), the SBIR contract, and Proteon. The SNIPE/RIG and SBIR contracts had many overlapping requirements, which allowed the research done under SNIPE/RIG to be applied to SBIR. Proteon funded all of the work to develop new router interfaces other than FDDI, in addition to funding the productization of the router itself. The router being delivered under SBIR will be a fully product-quality machine. The work done during this contract produced many significant findings and results, summarized here and explained in detail in later sections of this report. The SNIPE/RIG contract was completed. That contract had many overlapping requirements with the SBIR contract, and resulted in the successful demonstration and delivery of a high speed router. The development that took place during the SNIPE/RIG contract produced findings that included the choice of processor and an understanding of the issues surrounding inter processor communications in a multiprocessor environment. Many significant speed enhancements to the router software were made during that time. Under the SBIR contract (and with help from Proteon-funded work), it was found that a single processor router achieved a throughput significantly higher than originally anticipated. For this reason, a single processor router was

  1. Compact high-performance MWIR camera with exposure control and 12-bit video processor

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Loesser, Kenneth A.; Perna, Steve N.; McCarthy, D. R.; Pantuso, Francis P.

    1998-07-01

    The design and performance of a compact infrared camera system is presented. The 3 - 5 micron MWIR imaging system consists of a Stirling-cooled 640 X 480 staring PtSi infrared focal plane array (IRFPA) with a compact, high-performance 12-bit digital image processor. The low-noise CMOS IRFPA is X-Y addressable, utilizes on-chip-scanning registers and has electronic exposure control. The digital image processor uses 16-frame averaged, 2-point non-uniformity compensation and defective pixel substitution circuitry. There are separate 12- bit digital and analog I/O ports for display control and video output. The versatile camera system can be configured in NTSC, CCIR, and progressive scan readout formats and the exposure control settings are digitally programmable.
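
    The 2-point non-uniformity compensation mentioned above has a standard per-pixel gain/offset form; the sketch below is a generic software illustration (not the camera's 12-bit processor logic), with frame-averaged references at two uniform scene temperatures as assumed inputs. Defective-pixel substitution, also mentioned in the abstract, would then replace pixels whose gain falls far outside the typical range with a neighbour's value.

        import numpy as np

        def two_point_nuc(raw, cold_frames, hot_frames, target_low=0.0, target_high=1.0):
            """Two-point non-uniformity correction: per-pixel gain and offset derived
            from averaged reference frames at a cold and a hot uniform source."""
            cold = cold_frames.mean(axis=0)                   # e.g. 16-frame average, cold source
            hot = hot_frames.mean(axis=0)                     # e.g. 16-frame average, hot source
            gain = (target_high - target_low) / np.maximum(hot - cold, 1e-6)
            offset = target_low - gain * cold
            return gain * raw + offset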

  2. Acute gastroenteritis and video camera surveillance: a cruise ship case report.

    PubMed

    Diskin, Arthur L; Caro, Gina M; Dahl, Eilif

    2014-01-01

    A 'faecal accident' was discovered in front of a passenger cabin of a cruise ship. After proper cleaning of the area the passenger was approached, but denied having any gastrointestinal symptoms. However, when confronted with surveillance camera evidence, she admitted having the accident and even bringing the towel stained with diarrhoea back to the pool towels bin. She was isolated until the next port where she was disembarked. Acute gastroenteritis (AGE) caused by Norovirus is very contagious and easily transmitted from person to person on cruise ships. The main purpose of isolation is to avoid public vomiting and faecal accidents. To quickly identify and isolate contagious passengers and crew and ensure their compliance are key elements in outbreak prevention and control, but this is difficult if ill persons deny symptoms. All passenger ships visiting US ports now have surveillance video cameras, which under certain circumstances can assist in finding potential index cases for AGE outbreaks.

  3. A video camera is mounted on the second stage of a Delta II rocket

    NASA Technical Reports Server (NTRS)

    1999-01-01

    At Launch Pad 17-A, Cape Canaveral Air Station, workers finish mounting a video camera on the second stage of a Boeing Delta II rocket that will launch the Stardust spacecraft on Feb. 6. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon- based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

  4. High-speed and ultrahigh-speed cinematographic recording techniques

    NASA Astrophysics Data System (ADS)

    Miquel, J. C.

    1980-12-01

A survey is presented of various high-speed and ultrahigh-speed cinematographic recording systems (covering a range of speeds from 100 to 14 million pps). Attention is given to the functional and operational characteristics of cameras and to details of high-speed cinematography techniques (including image processing and illumination). A list of cameras (many of them French) available in 1980 is presented.

  5. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    PubMed Central

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network. PMID:23112607

  6. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    PubMed

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  7. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    PubMed

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network. PMID:23112607

  8. High speed civil transport

    NASA Technical Reports Server (NTRS)

    1991-01-01

This report discusses the design and marketability of a next generation supersonic transport. Apogee Aeronautics Corporation has designated its High Speed Civil Transport (HSCT): Supercruiser HS-8. Since the beginning of the Concorde era, the general consensus has been that the proper time for the introduction of a next generation Supersonic Transport (SST) would depend upon the technical advances made in the areas of propulsion (reduction in emissions) and material composites (stronger, lighter materials). It is believed by many in the aerospace industry that these aforementioned technical advances lie on the horizon. With this being the case, this is the proper time to begin the design phase for the next generation HSCT. The design objective for an HSCT was to develop an aircraft that would be capable of transporting at least 250 passengers with baggage at a distance of 5500 nmi. The supersonic Mach number is currently unspecified. In addition, the design had to be marketable, cost effective, and certifiable. To achieve this goal, technical advances in the current SST's must be made, especially in the areas of aerodynamics and propulsion. As a result of these required aerodynamic advances, several different supersonic design concepts were reviewed.

  9. High Speed Ice Friction

    NASA Astrophysics Data System (ADS)

    Seymour-Pierce, Alexandra; Sammonds, Peter; Lishman, Ben

    2014-05-01

    Many different tribological experiments have been run to determine the frictional behaviour of ice at high speeds, ostensibly with the intention of applying results to everyday fields such as winter tyres and sports. However, experiments have only been conducted up to linear speeds of several metres a second, with few additional subject specific studies reaching speeds comparable to these applications. Experiments were conducted in the cold rooms of the Rock and Ice Physics Laboratory, UCL, on a custom built rotational tribometer based on previous literature designs. Preliminary results from experiments run at 2m/s for ice temperatures of 271 and 263K indicate that colder ice has a higher coefficient of friction, in accordance with the literature. These results will be presented, along with data from further experiments conducted at temperatures between 259-273K (in order to cover a wide range of the temperature dependent behaviour of ice) and speeds of 2-15m/s to produce a temperature-velocity-friction map for ice. The effect of temperature, speed and slider geometry on the deformation of ice will also be investigated. These speeds are approaching those exhibited by sports such as the luge (where athletes slide downhill on an icy track), placing the tribological work in context.

  10. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    SciTech Connect

    Pardini, A.F.

    1998-01-27

The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP) which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  11. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images using a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture that generates a single depth image out of synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from those four simultaneously captured independent sub-IR images, hence the motion blur problem is canceled. The resulting performance is very useful in applications of the 3D camera to human-machine interaction devices, such as the user interface of a TV, monitor, or handheld device, and to motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
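
    Once the four phase-shifted sub-images are available, depth can be recovered with the standard continuous-wave ToF relation; the sketch below uses one common sign convention and treats the 20 MHz shutter rate as the modulation frequency, both assumptions made for illustration.

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def tof_depth(a0, a90, a180, a270, mod_freq_hz=20e6):
            """Depth map from four phase-shifted IR intensity images (here, the 2x2
            sub-images captured in a single shot), valid within the ambiguity range."""
            phase = np.arctan2(a270.astype(float) - a90.astype(float),
                               a0.astype(float) - a180.astype(float))
            phase = np.mod(phase, 2 * np.pi)                  # wrap to [0, 2*pi)
            return C * phase / (4 * np.pi * mod_freq_hz)      # metres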

  12. High-speed laser photography

    NASA Astrophysics Data System (ADS)

    Becker, Roger J.

    1988-08-01

    High-speed movies of solid propellant deflagration have long provided useful qualitative information on propellant behavior. Consequently, an extension of performance to include quantitative behavior of the surface, particularly the spatial relationship of particles across the surface, the temporal behavior of particles through extended periods of time, and accurate measurements of particle sizes, is highly desirable. Such measurements require the ability to take detailed movies across an extensive surface through the propellant flame for longer periods than the residence time of a given particle. The modulation transfer function (MTF) of the camera optics and film will greatly affect performance. The MTF of the optics can be improved by a factor of two or more at practical spatial frequencies by the use of monochromatic light, such as the reflected light from a laser. The use of an intense, short-pulsed laser has the additional advantage of suppressing flame brightness and motion blur. High resolution at unity magnification is achieved by the use of 2 mJ of illumination energy per pulse in conjunction with a fine-grain film. The surfaces of the wide-distribution propellants were found to be molten.

  13. High speed transition prediction

    NASA Technical Reports Server (NTRS)

    Gasperas, Gediminis

    1992-01-01

    The main objective of this work period was to develop, acquire and apply state-of-the-art tools for the prediction of transition at high speeds at NASA Ames. Although various stability codes as well as basic state codes were acquired, the development of a new Parabolized Stability Equation (PSE) code was minimal. The time that was initially allocated for development was used on other tasks, in particular for the Leading Edge Suction problem, in acquiring proficiency in various graphics tools, and in applying these tools to evaluate various Navier-Stokes and Euler solutions. The second objective of this work period was to attend the Transition and Turbulence Workshop at NASA Langley in July and August, 1991. A report on the Workshop follows. From July 8, 1991 to August 2, 1991, the author participated in the Transition and Turbulence Workshop at NASA Langley. For purposes of interest here, analysis can be said to consist of solving simplified governing equations by various analytical methods, such as asymptotic methods, or by use of very meager computer resources. From the composition of the various groups at the Workshop, it can be seen that analytical methods are generally more popular in Great Britain than they are in the U.S., possibly due to historical factors and the lack of computer resources. Experimenters at the Workshop were mostly concerned with subsonic flows, and a number of demonstrations were provided, among which were a hot-wire experiment to probe the boundary layer on a rotating disc, a hot-wire rake to map a free shear layer behind a cylinder, and the use of heating strips on a flat plate to control instability waves and consequent transition. A highpoint of the demonstrations was the opportunity to observe the rather noisy 'quiet' supersonic pilot tunnel in operation.

  14. 11. INTERIOR VIEW OF 8-FOOT HIGH SPEED WIND TUNNEL. SAME ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. INTERIOR VIEW OF 8-FOOT HIGH SPEED WIND TUNNEL. SAME CAMERA POSITION AS VA-118-B-10 LOOKING IN THE OPPOSITE DIRECTION. - NASA Langley Research Center, 8-Foot High Speed Wind Tunnel, 641 Thornell Avenue, Hampton, Hampton, VA

  15. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collisions, plasma, spark discharge, an air bag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after the image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps [1]. However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency [2]. The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom in wiring on the front side [3]. The BSI structure has the further advantage that attaching an additional layer, such as a scintillator, to the backside presents fewer difficulties. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses the issues involved in this integration.
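
    The in-situ storage principle described above (each pixel keeps a short burst of samples on-chip and readout happens only after capture) can be pictured with the toy model below. It is purely conceptual: real ISIS devices implement the per-pixel storage in CCD or CMOS hardware, and the 144-frame depth is borrowed from the earlier sensor purely as an example.

```python
import numpy as np
from collections import deque

class InSituStoragePixelArray:
    """Toy model of an in-situ storage image sensor: every pixel owns
    n_frames memory elements that are overwritten cyclically while the
    sensor free-runs; the newest n_frames images are read out afterwards."""

    def __init__(self, height, width, n_frames=144):
        self.shape = (height, width)
        self.buffer = deque(maxlen=n_frames)   # oldest entries drop out

    def expose(self, frame):
        assert frame.shape == self.shape
        self.buffer.append(frame.copy())       # overwrite oldest storage element

    def read_out(self):
        # Readout only happens after capturing stops, as in the abstract.
        return np.stack(self.buffer)

# Record 1000 synthetic frames; only the last 144 remain on-chip.
sensor = InSituStoragePixelArray(32, 32, n_frames=144)
rng = np.random.default_rng(1)
for _ in range(1000):
    sensor.expose(rng.integers(0, 4096, size=(32, 32)))
print(sensor.read_out().shape)   # (144, 32, 32)
```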

  16. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate dense materials, such as leather, wool, wood and gyprock, and to transmit over long distances due to low atmospheric absorption, makes them ideal for numerous applications, such as body scanning, building inspection and seeing in degraded visual environments. Current millimeter-wave imaging systems have drawbacks: they either use single-detector or linear arrays that require scanning, or their two-dimensional arrays are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact, lightweight camera, based on a 384 x 288 microbolometer pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of that work focused on transmission imaging as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera to perform reflection imaging at millimeter wavelengths. Snapshots of objects taken at video rates show the excellent quality of the images. In addition, a description of the imaging system, which includes the terahertz camera and different millimeter-wave sources, is provided.

  17. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    SciTech Connect

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  18. Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras

    USGS Publications Warehouse

    Harris, A.J.L.; Thornber, C.R.

    1999-01-01

    GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatically alerting major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
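
    The spike criterion used above (radiance samples more than two standard deviations above the series mean) is straightforward to reproduce; the sketch below applies it to a synthetic time series standing in for the GOES radiances and is not the authors' processing chain.

```python
import numpy as np

def radiance_spikes(radiance, n_sigma=2.0):
    """Indices of samples more than n_sigma standard deviations above the mean."""
    radiance = np.asarray(radiance, dtype=float)
    threshold = radiance.mean() + n_sigma * radiance.std()
    return np.flatnonzero(radiance > threshold)

# Synthetic 15-minute-interval time series with two injected bursts
rng = np.random.default_rng(2)
series = rng.normal(100.0, 5.0, size=3987)   # background radiance
series[[500, 2200]] += 60.0                  # short effusive events
print(radiance_spikes(series))               # -> [ 500 2200]
```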

  19. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. Causes of visual discomfort from stereoscopic video include the conflict between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be obtained according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
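
    A minimal sketch of the regression step follows: per-shot factor scores are combined by ordinary least squares against subjective fatigue ratings. The factor names, the synthetic training data and the weights are placeholders; the paper's actual factor definitions and coefficients are not reproduced here.

```python
import numpy as np

def fit_fatigue_model(factor_scores, subjective_scores):
    """Least-squares fit of fatigue = w0 + w1*f1 + ... + wk*fk."""
    X = np.column_stack([np.ones(len(factor_scores)), factor_scores])
    coef, *_ = np.linalg.lstsq(X, subjective_scores, rcond=None)
    return coef

def predict_fatigue(coef, factor_scores):
    X = np.column_stack([np.ones(len(factor_scores)), factor_scores])
    return X @ coef

# Placeholder data: one row per shot; columns might be spatial structure,
# motion scale and comfort-zone violation scores.
rng = np.random.default_rng(3)
factors = rng.uniform(0.0, 1.0, size=(40, 3))
ratings = 1.0 + factors @ np.array([2.0, 3.0, 1.5]) + rng.normal(0, 0.05, 40)
coef = fit_fatigue_model(factors, ratings)
print(np.round(coef, 2))                         # ~[1.0, 2.0, 3.0, 1.5]
print(np.round(predict_fatigue(coef, factors[:2]), 2))
```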

  20. Flow visualization by mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smartphones have completely changed people's communication within the last ten years. However, these devices offer not only communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sports events or other fast processes. The article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and identify bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
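
    At the core of any simplistic PIV system is the cross-correlation of interrogation windows between two consecutive frames; the peak of the correlation map gives the mean particle displacement in pixels. The sketch below shows that single step on synthetic data; it is a generic illustration, not the processing used in the article.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Mean displacement (dx, dy) of window b relative to window a,
    from the peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.array(peak) - np.array(corr.shape) // 2
    return int(dx), int(dy)

# Synthetic "particle images": the second frame is the first shifted
# by 5 px to the right and 3 px down.
rng = np.random.default_rng(4)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))
print(window_displacement(frame_a, frame_b))   # (5, 3)
```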

  1. High-speed 3D imaging using two-wavelength parallel-phase-shift interferometry.

    PubMed

    Safrani, Avner; Abdulhalim, Ibrahim

    2015-10-15

    High-speed three-dimensional imaging based on two-wavelength parallel-phase-shift interferometry is presented. The technique is demonstrated using a high-resolution polarization-based Linnik interferometer operating with three high-speed phase-masked CCD cameras and two quasi-monochromatic modulated light sources. The two light sources allow the single-source wrapped phase to be unwrapped, so that relatively high step profiles, with heights as large as 3.7 μm, can be imaged at video rate with ±2 nm accuracy and repeatability. The technique is validated using a certified very large scale integration (VLSI) step standard, followed by a demonstration from the semiconductor industry showing an integrated chip with 2.75 μm high copper micropillars at different packing densities. PMID:26469586
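
    The extended unambiguous range comes from the beat (synthetic) wavelength of the two sources. A minimal sketch of that idea is given below; the two wavelengths are placeholders chosen so that half the synthetic wavelength (about 4.3 μm) covers the 3.7 μm steps mentioned, and the fine single-wavelength refinement step is omitted.

```python
import numpy as np

def synthetic_wavelength(lambda1, lambda2):
    """Beat wavelength of two quasi-monochromatic sources."""
    return lambda1 * lambda2 / abs(lambda1 - lambda2)

def coarse_height(phi1, phi2, lambda1, lambda2):
    """Coarse height from the wrapped phases measured at the two wavelengths.

    In reflection, each 2*pi of beat phase corresponds to half a synthetic
    wavelength, so heights below that limit are unambiguous.
    """
    lam_s = synthetic_wavelength(lambda1, lambda2)
    beat_phase = np.mod(phi1 - phi2, 2 * np.pi)
    return lam_s * beat_phase / (4 * np.pi)

lam1, lam2 = 570e-9, 610e-9                     # placeholder wavelengths
h_true = 3.7e-6                                 # a 3.7 um step
phi1 = np.mod(4 * np.pi * h_true / lam1, 2 * np.pi)
phi2 = np.mod(4 * np.pi * h_true / lam2, 2 * np.pi)
print(f"synthetic wavelength: {synthetic_wavelength(lam1, lam2) * 1e6:.2f} um")
print(f"recovered height:     {coarse_height(phi1, phi2, lam1, lam2) * 1e6:.2f} um")
```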

  2. Identifying predators and fates of grassland passerine nests using miniature video cameras

    USGS Publications Warehouse

    Pietz, Pamela J.; Granfors, Diane A.

    2000-01-01

    Nest fates, causes of nest failure, and identities of nest predators are difficult to determine for grassland passerines. We developed a miniature video-camera system for use in grasslands and deployed it at 69 nests of 10 passerine species in North Dakota during 1996-97. Abandonment rates were higher at nests 1 day or night (22-116 hr) at 6 nests, 5 of which were depredated by ground squirrels or mice. For nests without cameras, estimated predation rates were lower for ground nests than aboveground nests (P = 0.055), but did not differ between open and covered nests (P = 0.74). Open and covered nests differed, however, when predation risk (estimated by initial-predation rate) was examined separately for day and night using camera-monitored nests; the frequency of initial predations that occurred during the day was higher for open nests than covered nests (P = 0.015). Thus, vulnerability of some nest types may depend on the relative importance of nocturnal and diurnal predators. Predation risk increased with nestling age from 0 to 8 days (P = 0.07). Up to 15% of fates assigned to camera-monitored nests were wrong when based solely on evidence that would have been available from periodic nest visits. There was no evidence of disturbance at nearly half the depredated nests, including all 5 depredated by large mammals. Overlap in types of sign left by different predator species, and variability of sign within species, suggests that evidence at nests is unreliable for identifying predators of grassland passerines.

  3. Optimizing Detection Rate and Characterization of Subtle Paroxysmal Neonatal Abnormal Facial Movements with Multi-Camera Video-Electroencephalogram Recordings.

    PubMed

    Pisani, Francesco; Pavlidis, Elena; Cattani, Luca; Ferrari, Gianluigi; Raheli, Riccardo; Spagnoli, Carlotta

    2016-06-01

    Objectives: We retrospectively analyze the diagnostic accuracy for paroxysmal abnormal facial movements, comparing a one-camera versus a multi-camera approach. Background: Polygraphic video-electroencephalogram (vEEG) recording is the current gold standard for brain monitoring in high-risk newborns, especially when neonatal seizures are suspected. One camera synchronized with the EEG is commonly used. Methods: Since mid-June 2012, we have been using multiple cameras, one of which points toward the newborn's face. We evaluated vEEGs recorded in newborns in the study period between mid-June 2012 and the end of September 2014 and compared, for each recording, the diagnostic accuracy obtained with the one-camera and multi-camera approaches. Results: We recorded 147 vEEGs from 87 newborns and found 73 episodes of paroxysmal abnormal facial movements in 18 vEEGs of 11 newborns with the multi-camera approach. With the single-camera approach, only 28.8% of these events were identified (21/73). Ten vEEGs that were positive with the multi-camera approach, containing 52 paroxysmal abnormal facial movements (52/73, 71.2%), would have been considered negative with the single-camera approach. Conclusions: The use of one additional facial camera can significantly increase the diagnostic accuracy of vEEGs in the detection of paroxysmal abnormal facial movements in newborns.

  4. Microscopic feature extraction from optical sections of contracting cardiac muscle cells recorded at high speed

    NASA Astrophysics Data System (ADS)

    Roos, Kenneth P.; Lake, David S.; Lubell, Bradford A.

    1991-05-01

    The rapid motion of microscopic features such as the cross-striations of contracting cardiac muscle cells are difficult to capture with conventional RS-170 video systems and image processing approaches. In this report, efforts to extract, enhance and analyze striation data from widefield optical sections of single contracting cells recorded with a charge-coupled device (CCD) video camera modified for high-speed RS-170 compatible operation are described. Each video field from the camera provides four 1/4 height images separated by 4 ms in time for a 240 Hz image acquisition rate. Data are continuously recorded on S-VHS video tape during each experiment. Selected image sequences are digitized field by field and stored in a computer system under automated software control. The four individual images in each video field are separated, geometrically corrected for time base error, and reassembled as a single sequence of images for interpretable visualization. The images are then processed with digital filters and gray scale expansion to preferentially enhance the cross-striations and minimize out of focus features. Regions within each image containing striations are identified and their positions determined and followed during the contraction cycle to obtain individual, regional and cellular sarcomere dynamics. This approach permits the critical evaluation of the magnitude, time course and uniformity of contractile function throughout the volume of a single cell with higher temporal and spatial resolutions than previously possible.
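
    The unpacking step described above (each RS-170 field carries four quarter-height exposures 4 ms apart that must be separated and re-ordered into a 240 Hz sequence) can be sketched as below. The geometric time-base correction is omitted, and the assumption that the sub-images are stacked top to bottom with the oldest first is mine, not the paper's.

```python
import numpy as np

def unpack_field(field, n_sub=4):
    """Split one video field holding n_sub quarter-height exposures
    (assumed stacked top to bottom, oldest first) into separate frames."""
    rows = field.shape[0]
    assert rows % n_sub == 0, "field height must divide evenly"
    step = rows // n_sub
    return [field[i * step:(i + 1) * step] for i in range(n_sub)]

def fields_to_sequence(fields):
    """Reassemble a list of fields into a single high-rate image sequence."""
    frames = []
    for field in fields:
        frames.extend(unpack_field(field))
    return np.stack(frames)

# Two synthetic 480x640 fields -> eight quarter-height frames, 4 ms apart
fields = [np.zeros((480, 640), dtype=np.uint8) for _ in range(2)]
print(fields_to_sequence(fields).shape)   # (8, 120, 640)
```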

  5. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera:

    SciTech Connect

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations. When I-131 was applied at soil depths with lower root concentrations, plant uptake was less. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs.

  6. Autonomous video camera system for monitoring impacts to benthic habitats from demersal fishing gear, including longlines

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Robert; Ewing, Graeme; Lamb, Tim; Welsford, Dirk; Constable, Andrew

    2011-04-01

    Studies of the interactions of demersal fishing gear with the benthic environment are needed in order to manage conservation of benthic habitats. There has been limited direct assessment of these interactions through deployment of cameras on commercial fishing gear especially on demersal longlines. A compact, autonomous deep-sea video system was designed and constructed by the Australian Antarctic Division (AAD) for deployment on commercial fishing gear to observe interactions with benthos in the Southern Ocean finfish fisheries (targeting toothfish, Dissostichus spp). The Benthic Impacts Camera System (BICS) is capable of withstanding depths to 2500 m, has been successfully fitted to both longline and demersal trawl fishing gear, and is suitable for routine deployment by non-experts such as fisheries observers or crew. The system is entirely autonomous, robust, compact, easy to operate, and has minimal effect on the performance of the fishing gear it is attached to. To date, the system has successfully captured footage that demonstrates the interactions between demersal fishing gear and the benthos during routine commercial operations. It provides the first footage demonstrating the nature of the interaction between demersal longlines and benthic habitats in the Southern Ocean, as well as showing potential as a tool for rapidly assessing habitat types and presence of mobile biota such as krill ( Euphausia superba).

  7. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    PubMed

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we aim at investigating the applicability of underwater 3D motion capture based on submerged video cameras, in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment, and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and far superior to the classical DLT results (9.74 mm). Among all the swimmers, the hands' trajectories of the expert swimmer in the style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
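
    The classical DLT baseline mentioned above reconstructs a 3D point from two or more calibrated views by solving a small linear system built from the 11 DLT coefficients of each camera. The sketch below shows that reconstruction step with synthetic coefficients; the wand/bundle-adjustment and Zhang calibrations compared in the study are not reproduced.

```python
import numpy as np

def dlt_reconstruct(dlt_coeffs, image_points):
    """Recover a 3D point from >= 2 calibrated views.

    dlt_coeffs: list of 11-element DLT parameter vectors, one per camera.
    image_points: list of (u, v) observations of the same point.
    """
    rows, rhs = [], []
    for L, (u, v) in zip(dlt_coeffs, image_points):
        rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        rhs.append(u - L[3])
        rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        rhs.append(v - L[7])
    xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return xyz

def dlt_project(L, xyz):
    """Forward DLT projection, used here only to build a synthetic test case."""
    x, y, z = xyz
    den = L[8] * x + L[9] * y + L[10] * z + 1.0
    u = (L[0] * x + L[1] * y + L[2] * z + L[3]) / den
    v = (L[4] * x + L[5] * y + L[6] * z + L[7]) / den
    return u, v

# Two synthetic cameras and a known point: reconstruction should return it
rng = np.random.default_rng(5)
cams = [rng.normal(0, 1, 11), rng.normal(0, 1, 11)]
point = np.array([0.3, -0.2, 1.5])
obs = [dlt_project(L, point) for L in cams]
print(np.round(dlt_reconstruct(cams, obs), 3))   # -> [ 0.3 -0.2  1.5]
```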

  8. A video camera is mounted on the second stage of a Delta II rocket

    NASA Technical Reports Server (NTRS)

    1999-01-01

    At Launch Pad 17-A, Cape Canaveral Air Station, workers check the mounting on a video camera on the second stage of a Boeing Delta II rocket that will launch the Stardust spacecraft on Feb. 6. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon- based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

  9. A video camera is mounted on the second stage of a Delta II rocket

    NASA Technical Reports Server (NTRS)

    1999-01-01

    At Launch Pad 17-A, Cape Canaveral Air Station, a worker (left) runs a wire through a mounting hole on the second stage of a Boeing Delta II rocket in order to affix an external video camera held by the worker at right. The Delta II will launch the Stardust spacecraft on Feb. 6. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon-based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

  10. A video camera is mounted on the second stage of a Delta II rocket

    NASA Technical Reports Server (NTRS)

    1999-01-01

    At Launch Pad 17-A, Cape Canaveral Air Station, a worker holds the video camera to be mounted on the second stage of a Boeing Delta II rocket that will launch the Stardust spacecraft on Feb. 6. His co-worker (right) makes equipment adjustments. Looking toward Earth, the camera will record the liftoff and separation of the first stage. Stardust is destined for a close encounter with the comet Wild 2 in January 2004. Using a silicon-based substance called aerogel, Stardust will capture comet particles flying off the nucleus of the comet. The spacecraft also will bring back samples of interstellar dust. These materials consist of ancient pre-solar interstellar grains and other remnants left over from the formation of the solar system. Scientists expect their analysis to provide important insights into the evolution of the sun and planets and possibly into the origin of life itself. The collected samples will return to Earth in a sample return capsule to be jettisoned as Stardust swings by Earth in January 2006.

  11. Hand contour detection in wearable camera video using an adaptive histogram region of interest

    PubMed Central

    2013-01-01

    Background Monitoring hand function at home is needed to better evaluate the effectiveness of rehabilitation interventions. Our objective is to develop wearable computer vision systems for hand function monitoring. The specific aim of this study is to develop an algorithm that can identify hand contours in video from a wearable camera that records the user’s point of view, without the need for markers. Methods The two-step image processing approach for each frame consists of: (1) Detecting a hand in the image, and choosing one seed point that lies within the hand. This step is based on a priori models of skin colour. (2) Identifying the contour of the region containing the seed point. This is accomplished by adaptively determining, for each frame, the region within a colour histogram that corresponds to hand colours, and backprojecting the image using the reduced histogram. Results In four test videos relevant to activities of daily living, the hand detector classification accuracy was 88.3%. The contour detection results were compared to manually traced contours in 97 test frames, and the median F-score was 0.86. Conclusion This algorithm will form the basis for a wearable computer-vision system that can monitor and log the interactions of the hand with its environment. PMID:24354542
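
    Step (2) of the approach, histogram backprojection around a seed region, can be sketched with standard OpenCV calls as below. The fixed seed window, bin counts and threshold are placeholder choices; the paper's adaptive, per-frame reduction of the histogram range is simplified away here.

```python
import cv2
import numpy as np

def hand_backprojection(frame_bgr, seed_xy, win=15):
    """Backproject a hue/saturation histogram built from a window around the
    seed point; bright pixels in the result share that region's colours."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    x, y = seed_xy
    patch = np.ascontiguousarray(hsv[max(y - win, 0):y + win,
                                     max(x - win, 0):x + win])
    # Hue/saturation histogram of the (assumed) hand-coloured patch
    hist = cv2.calcHist([patch], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # Likelihood map of "hand-like" colour over the full frame
    back = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, mask = cv2.threshold(back, 50, 255, cv2.THRESH_BINARY)
    return mask

# Synthetic frame with a skin-toned square; the seed sits inside it
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 140:180] = (90, 120, 200)              # BGR, roughly skin-like
mask = hand_backprojection(frame, seed_xy=(160, 120))
print(int(mask[120, 160]), int(mask[10, 10]))         # 255 0
```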

  12. Monitoring the body temperature of cows and calves using video recordings from an infrared thermography camera.

    PubMed

    Hoffmann, Gundula; Schmidt, Mariana; Ammon, Christian; Rose-Meierhöfer, Sandra; Burfeind, Onno; Heuwieser, Wolfgang; Berg, Werner

    2013-06-01

    The aim of this study was to assess the variability of temperatures measured by a video-based infrared camera (IRC) in comparison to rectal and vaginal temperatures. The body surface temperatures of cows and calves were measured without contact at different body regions using videos from the IRC. Altogether, 22 cows and 9 calves were examined. The differences in the measured IRC temperatures among the body regions, i.e. eye (mean: 37.0 °C), back of the ear (35.6 °C), shoulder (34.9 °C) and vulva (37.2 °C), were significant (P < 0.01), except between eye and vulva (P = 0.99). The quartile ranges of the measured IRC temperatures at the 4 above-mentioned regions were between 1.2 and 1.8 K. Of the investigated body regions, the eye and the back of the ear proved to be suitable, practical regions for temperature monitoring. The temperatures of these 2 regions could be obtained by using the maximum temperatures of the head and body areas; therefore, only the maximum temperatures of both areas were used for further analysis. The data analysis showed an increase in the maximum temperature measured by IRC at the head and body areas with an increase in rectal temperature in cows and calves. The use of infrared thermography videos has the advantage of allowing more than 1 picture per animal to be analyzed in a short period of time, and shows potential as a monitoring system for body temperatures in cattle.

  13. Characterization and Compensation of High Speed Digitizers

    SciTech Connect

    Fong, P; Teruya, A; Lowry, M

    2005-04-04

    Increasingly, ADC technology is being pressed into service for single-shot instrumentation applications that were formerly served by vacuum-tube-based oscilloscopes and streak cameras. ADC technology, while convenient, suffers significant performance impairments. Thus, in these demanding applications, a quantitative and accurate representation of these impairments is critical to an understanding of measurement accuracy. We have developed a phase-plane behavioral model, implemented it in SIMULINK and applied it to interleaved, high-speed ADCs (up to 4 gigasamples/sec). We have also developed and demonstrated techniques to effectively compensate for these impairments based upon the model.

  14. Gated high speed optical detector

    NASA Technical Reports Server (NTRS)

    Green, S. I.; Carson, L. M.; Neal, G. W.

    1973-01-01

    The design, fabrication, and test of two gated, high-speed optical detectors for use in high-speed digital laser communication links are discussed. The optical detectors used a dynamic crossed-field photomultiplier and electronics including dc bias and RF drive circuits, automatic remote synchronization circuits, automatic gain control circuits, and threshold detection circuits. The equipment is used to detect binary-encoded signals from a mode-locked neodymium laser.

  15. 3-D high-speed imaging of volcanic bomb trajectory in basaltic explosive eruptions

    USGS Publications Warehouse

    Gaudin, D.; Taddeucci, J.; Houghton, B. F.; Orr, Tim R.; Andronico, D.; Del Bello, E.; Kueppers, U.; Ricci, T.; Scarlato, P.

    2016-01-01

    Imaging, in general, and high speed imaging in particular are important emerging tools for the study of explosive volcanic eruptions. However, traditional 2-D video observations cannot measure volcanic ejecta motion toward and away from the camera, strongly hindering our capability to fully determine crucial hazard-related parameters such as explosion directionality and pyroclasts' absolute velocity. In this paper, we use up to three synchronized high-speed cameras to reconstruct pyroclasts trajectories in three dimensions. Classical stereographic techniques are adapted to overcome the difficult observation conditions of active volcanic vents, including the large number of overlapping pyroclasts which may change shape in flight, variable lighting and clouding conditions, and lack of direct access to the target. In particular, we use a laser rangefinder to measure the geometry of the filming setup and manually track pyroclasts on the videos. This method reduces uncertainties to 10° in azimuth and dip angle of the pyroclasts, and down to 20% in the absolute velocity estimation. We demonstrate the potential of this approach by three examples: the development of an explosion at Stromboli, a bubble burst at Halema'uma'u lava lake, and an in-flight collision between two bombs at Stromboli.
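
    Once a pyroclast trajectory has been reconstructed in 3D, the hazard-relevant quantities cited above (absolute velocity, azimuth and dip) follow from simple finite differences, as in the sketch below. The coordinate convention (x east, y north, z up) and the sample numbers are assumptions for illustration only.

```python
import numpy as np

def trajectory_kinematics(points_xyz, frame_rate):
    """Per-step speed (m/s), azimuth (deg from north, clockwise) and dip
    (deg above horizontal) from successive 3D positions in metres.

    Assumes x = east, y = north, z = up, which the paper does not specify.
    """
    p = np.asarray(points_xyz, dtype=float)
    d = np.diff(p, axis=0) * frame_rate                 # velocity vectors
    speed = np.linalg.norm(d, axis=1)
    azimuth = np.degrees(np.arctan2(d[:, 0], d[:, 1])) % 360.0
    dip = np.degrees(np.arctan2(d[:, 2], np.hypot(d[:, 0], d[:, 1])))
    return speed, azimuth, dip

# A bomb moving north-east and upward, sampled at 500 fps (illustrative)
track = [[0.00, 0.00, 0.00],
         [0.02, 0.02, 0.03],
         [0.04, 0.04, 0.05]]
speed, azimuth, dip = trajectory_kinematics(track, frame_rate=500)
print(np.round(speed, 1), np.round(azimuth, 1), np.round(dip, 1))
```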

  16. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  17. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used in imaging experiments and for some television programs. To increase the ultrahigh-speed capture time, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over that of the conventional ultrahigh-speed camera. A problem with this arrangement was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by a factor of approximately two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  18. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to start or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see, and better quantify, the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the

  19. High-speed imaging on static tensile test for unidirectional CFRP

    NASA Astrophysics Data System (ADS)

    Kusano, Hideaki; Aoki, Yuichiro; Hirano, Yoshiyasu; Kondo, Yasushi; Nagao, Yosuke

    2008-11-01

    The objective of this study is to clarify the fracture mechanism of unidirectional CFRP (Carbon Fiber Reinforced Plastics) under static tensile loading. The advantages of CFRP are higher specific stiffness and strength than metallic materials. The use of CFRP is increasing not only in the aerospace and rapid-transit railway industries but also in the sports, leisure and automotive industries. The tensile fracture mechanism of unidirectional CFRP has not been clarified experimentally because the fracture speed of unidirectional CFRP is quite high. We selected an intermediate-modulus, high-strength unidirectional CFRP laminate, a typical material used in the aerospace field. The fracture process under static tensile loading was captured by a conventional high-speed camera and a new type of high-speed video camera, the HPV-1. It was found that the duration of fracture is 200 microseconds or less, so images taken by a conventional camera do not have enough temporal resolution. On the other hand, results obtained with the HPV-1 have higher quality, and the fracture process can be clearly observed.

  20. Color video camera capable of 1,000,000 fps with triple ultrahigh-speed image sensors

    NASA Astrophysics Data System (ADS)

    Maruyama, Hirotaka; Ohtake, Hiroshi; Hayashida, Tetsuya; Yamada, Masato; Kitamura, Kazuya; Arai, Toshiki; Tanioka, Kenkichi; Etoh, Takeharu G.; Namiki, Jun; Yoshida, Tetsuo; Maruno, Hiromasa; Kondo, Yasushi; Ozaki, Takao; Kanayama, Shigehiro

    2005-03-01

    We developed an ultrahigh-speed, high-sensitivity, color camera that captures moving images of phenomena too fast to be perceived by the human eye. The camera operates well even under restricted lighting conditions. It incorporates a special CCD device that is capable of ultrahigh-speed shots while retaining its high sensitivity. Its ultrahigh-speed shooting capability is made possible by directly connecting CCD storages, which record video images, to photodiodes of individual pixels. Its large photodiode area together with the low-noise characteristic of the CCD contributes to its high sensitivity. The camera can clearly capture events even under poor light conditions, such as during a baseball game at night. Our camera can record the very moment the bat hits the ball.

  1. Optimal camera exposure for video surveillance systems by predictive control of shutter speed, aperture, and gain

    NASA Astrophysics Data System (ADS)

    Torres, Juan; Menéndez, José Manuel

    2015-02-01

    This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing neither under- nor overexposed images. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators based on the image histogram that define its shape and position. Furthermore, the location of the objects to be inspected is usually unknown in surveillance applications; thus, the whole image is monitored in this approach. To control the camera settings, we defined a parameter function (Ef) that depends linearly on the shutter speed and the electronic gain and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested on a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground. During the daytime of seven days, the algorithm was running alternatively together
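
    One plausible reading of the parameter function and the histogram test is sketched below. The product form of Ef, the bin thresholds and the classification rules are assumptions made for illustration; the paper defines its own indicators and control law.

```python
import numpy as np

def exposure_value(shutter_s, gain, aperture_diam_mm):
    """Ef: linear in shutter time and gain, inverse in the squared aperture
    diameter (one plausible reading of the abstract, not its exact form)."""
    return shutter_s * gain / aperture_diam_mm ** 2

def exposure_state(gray_frame, low=5, high=250):
    """Classify a frame from the extremes of its grey-level histogram."""
    hist, _ = np.histogram(gray_frame, bins=256, range=(0, 256))
    over = hist[high:].sum() / hist.sum()     # fraction of near-saturated pixels
    under = hist[:low].sum() / hist.sum()     # fraction of near-black pixels
    if over > 0.01:
        return "overexposed"
    if under > 0.20:
        return "underexposed"
    return "ok"

# A dark synthetic frame is flagged, which would call for a larger Ef
rng = np.random.default_rng(6)
frame = rng.integers(0, 10, size=(480, 640))
print(exposure_state(frame), exposure_value(1 / 250, gain=2.0, aperture_diam_mm=4.0))
```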

  2. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    PubMed Central

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  3. High-Speed Electrochemical Imaging.

    PubMed

    Momotenko, Dmitry; Byers, Joshua C; McKelvey, Kim; Kang, Minkyung; Unwin, Patrick R

    2015-09-22

    The design, development, and application of high-speed scanning electrochemical probe microscopy is reported. The approach allows the acquisition of a series of high-resolution images (typically 1000 pixels μm(-2)) at rates approaching 4 seconds per frame, while collecting up to 8000 image pixels per second, about 1000 times faster than typical imaging speeds used up to now. The focus is on scanning electrochemical cell microscopy (SECCM), but the principles and practicalities are applicable to many electrochemical imaging methods. The versatility of the high-speed scan concept is demonstrated at a variety of substrates, including imaging the electroactivity of a patterned self-assembled monolayer on gold, visualization of chemical reactions occurring at single wall carbon nanotubes, and probing nanoscale electrocatalysts for water splitting. These studies provide movies of spatial variations of electrochemical fluxes as a function of potential and a platform for the further development of high speed scanning with other electrochemical imaging techniques.

  4. Visualization of high speed liquid jet impaction on a moving surface.

    PubMed

    Guo, Yuchen; Green, Sheldon

    2015-01-01

    Two apparatuses for examining liquid jet impingement on a high-speed moving surface are described: an air cannon device (for examining surface speeds between 0 and 25 m/sec) and a spinning disk device (for examining surface speeds between 15 and 100 m/sec). The air cannon linear traverse is a pneumatic energy-powered system that is designed to accelerate a metal rail surface mounted on top of a wooden projectile. A pressurized cylinder fitted with a solenoid valve rapidly releases pressurized air into the barrel, forcing the projectile down the cannon barrel. The projectile travels beneath a spray nozzle, which impinges a liquid jet onto its metal upper surface, and the projectile then hits a stopping mechanism. A camera records the jet impingement, and a pressure transducer records the spray nozzle backpressure. The spinning disk set-up consists of a steel disk that reaches speeds of 500 to 3,000 rpm via a variable frequency drive (VFD) motor. A spray system similar to that of the air cannon generates a liquid jet that impinges onto the spinning disc, and cameras placed at several optical access points record the jet impingement. Video recordings of jet impingement processes are recorded and examined to determine whether the outcome of impingement is splash, splatter, or deposition. The apparatuses are the first that involve the high speed impingement of low-Reynolds-number liquid jets on high speed moving surfaces. In addition to its rail industry applications, the described technique may be used for technical and industrial purposes such as steelmaking and may be relevant to high-speed 3D printing. PMID:25938331

  5. Visualization of high speed liquid jet impaction on a moving surface.

    PubMed

    Guo, Yuchen; Green, Sheldon

    2015-01-01

    Two apparatuses for examining liquid jet impingement on a high-speed moving surface are described: an air cannon device (for examining surface speeds between 0 and 25 m/sec) and a spinning disk device (for examining surface speeds between 15 and 100 m/sec). The air cannon linear traverse is a pneumatic energy-powered system that is designed to accelerate a metal rail surface mounted on top of a wooden projectile. A pressurized cylinder fitted with a solenoid valve rapidly releases pressurized air into the barrel, forcing the projectile down the cannon barrel. The projectile travels beneath a spray nozzle, which impinges a liquid jet onto its metal upper surface, and the projectile then hits a stopping mechanism. A camera records the jet impingement, and a pressure transducer records the spray nozzle backpressure. The spinning disk set-up consists of a steel disk that reaches speeds of 500 to 3,000 rpm via a variable frequency drive (VFD) motor. A spray system similar to that of the air cannon generates a liquid jet that impinges onto the spinning disc, and cameras placed at several optical access points record the jet impingement. Video recordings of jet impingement processes are recorded and examined to determine whether the outcome of impingement is splash, splatter, or deposition. The apparatuses are the first that involve the high speed impingement of low-Reynolds-number liquid jets on high speed moving surfaces. In addition to its rail industry applications, the described technique may be used for technical and industrial purposes such as steelmaking and may be relevant to high-speed 3D printing.

  6. SEAL FOR HIGH SPEED CENTRIFUGE

    DOEpatents

    Skarstrom, C.W.

    1957-12-17

    A seal is described for a high-speed centrifuge wherein the centrifugal force of rotation acts on the gasket to form a tight seal. The cylindrical rotating bowl of the centrifuge contains a closure member resting on a shoulder in the bowl wall, having a lower surface containing bands of gasket material parallel and adjacent to the cylinder wall. As the centrifuge speed increases, centrifugal force acts on the bands of gasket material, forcing them into sealing contact against the cylinder wall. This arrangement forms a simple and effective seal for high-speed centrifuges, replacing more costly methods such as welding a closure in place.

  7. Addition of a video camera system improves the ease of Airtraq(®) tracheal intubation during chest compression.

    PubMed

    Kohama, Hanako; Komasawa, Nobuyasu; Ueki, Ryusuke; Itani, Motoi; Nishi, Shin-ichi; Kaminoh, Yoshiroh

    2012-04-01

    Recent resuscitation guidelines for cardiopulmonary resuscitation emphasize that rescuers should perform tracheal intubation with minimal interruption of chest compressions. We evaluated the use of video guidance to facilitate tracheal intubation with the Airtraq (ATQ) laryngoscope during chest compression. Eighteen novice physicians in our anesthesia department performed tracheal intubation on a manikin using the ATQ with a video camera system (ATQ-V) or with no video guidance (ATQ-N) during chest compression. All participants were able to intubate the manikin using the ATQ-N without chest compression, but five failed during chest compression (P < 0.05). In contrast, all participants successfully secured the airway with the ATQ-V, with or without chest compression. Concurrent chest compression increased the time required for intubation with the ATQ-N (without chest compression 14.8 ± 4.5 s; with chest compression, 28.2 ± 10.6 s; P < 0.05), but not with the ATQ-V (without chest compression, 15.9 ± 5.8 s; with chest compression, 17.3 ± 5.3 s; P > 0.05). The ATQ video camera system improves the ease of tracheal intubation during chest compressions.

  8. Faster than "g", Revisited with High-Speed Imaging

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2012-01-01

    The introduction of modern high-speed cameras in physics teaching provides a tool not only for easy visualization, but also for quantitative analysis of many simple though fast occurring phenomena. As an example, we present a very well-known demonstration experiment--sometimes also discussed in the context of falling chimneys--which is commonly…

  9. High speed photography and photonics applications: An underutilized technology

    SciTech Connect

    Paisley, D.L.

    1996-10-01

    Snapshot: Paisley describes the development of high-speed photography, including the role of streak cameras, fiber optics, and lasers. Progress in this field has created a powerful tool for viewing such ultrafast processes as hypersonic events and ballistics. © 1996 Optical Society of America.

  10. Application Of High Speed Photography In Science And Technology

    NASA Astrophysics Data System (ADS)

    Wu Ji-Zong, Wu; Yu-Ju, Lin

    1983-03-01

    The high-speed photography service work carried out by the Department of Precision Instruments, Tianjin University, is described in this paper. A compensation-type high-speed camera was used in this work. The photographic methods adopted and the results achieved in several technical fields, such as the flow velocity field on the overflow surface of a high dam, the combustion process of internal combustion engines, metal cutting, electric arc welding, the driving of steel tube piles supporting marine platforms, and the motion of wristwatch escapement mechanisms, are illustrated in detail. As an extension of the human visual organs and a means of increasing our ability to observe and study high-speed processes, high-speed photography plays a very important role. To promote the application and development of high-speed photography, we have carried out consultative and service work inside and outside Tianjin University. The Pentazet 35 compensation-type high-speed camera, made in East Germany, was used to record high-speed events in various technical investigations, and the necessary results have been obtained. 1. Measurement of flow velocity on the overflow surface of a high dam. In the design of a key water-control project with a high head, it is essential to determine the characteristics of the flow velocity field on the overflow surface of the high dam. Since the flow on the surface of a high overflow dam has a large velocity and a shallow depth, it is difficult to study with conventional instruments such as pitot tubes, miniature current meters, or electrical measurement of non-electrical quantities. Adopting high-speed photography to study, by analogy, the characteristics of the flow velocity field on the overflow surface of a high dam is a new kind of measuring method. People

  11. Video camera-computer tracking of nematode Caenorhabditis elegans to record behavioral responses.

    PubMed

    Dunsenbery, D B

    1985-09-01

    A new method is used to analyze responses to changes in the concentration of two chemical stimuli. Nematodes are allowed to move around on the surface of a thin layer of agar across which a stream of air blows to carry volatile stimuli. Darkfield illumination provides high-contrast images of the worms, which are acquired by a video camera and fed to a microcomputer programmed to simultaneously track and record the movements and changes in direction of as many as 25 animals. The results are reported in real time. The worms respond to an increase in CO2 concentration by decreasing the number moving and increasing the number of changes of direction. Both responses adapt to steady-state levels in about half a minute. This suggests that they respond by changing the probability of initiating a reversal bout. This observation adds a repellent to the class of stimuli that C. elegans responds to by klinokinesis. The responses to changes in oxygen concentration are somewhat different. Movements and changes in direction both decrease when the oxygen concentration falls and increase when the concentration rises. No adaptation is seen within the one-minute time span observed. This observation provides further evidence that the response to oxygen differs from the response to other chemicals and may be sensed internally. These observations demonstrate that computer tracking is a sensitive method of analyzing animal behavior. It is further demonstrated that a significant response can be detected to a relatively weak stimulus in less than 5 min.

  12. High-speed thermo-microscope for imaging thermal desorption phenomena.

    PubMed

    Staymates, Matthew; Gillen, Greg

    2012-07-01

    In this work, we describe a thermo-microscope imaging system that can be used to visualize atmospheric pressure thermal desorption phenomena at high heating rates and frame rates. This versatile and portable instrument is useful for studying events during rapid heating of organic particles on the microscopic scale. The system consists of a zoom lens coupled to a high-speed video camera that is focused on the surface of an aluminum nitride heating element. We leverage high-speed videography with oblique incidence microscopy along with forward and back-scattered illumination to capture vivid images of thermal desorption events during rapid heating of chemical compounds. In a typical experiment, particles of the material of interest are rapidly heated beyond their boiling point while the camera captures images at several thousand frames/s. A data acquisition system, along with an embedded thermocouple and an infrared pyrometer, is used to measure the temperature of the heater surface. We demonstrate that, while a typical thermocouple lacks the response time to accurately measure temperature ramps that approach 150 °C/s, it is possible to calibrate the system by using a combination of infrared pyrometry, melting point standards, and a thermocouple. Several examples of high explosives undergoing rapid thermal desorption are also presented. PMID:22852730

  13. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    PubMed

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

    The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples[1]. Acquiring an electron tomography tilt series is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS)[2,3] for UHVEM tomography tilt series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value[3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is needed to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K. K. C9721S) for fast acquisition of images[4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than that of the SS-CCD camera of 4096×4096 pixels; however, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image is reliable enough to keep the fitting error low. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the different defocused images. By using the HD video camera for the autofocus process, the time consumed for each autofocus procedure was reduced to about six seconds. It took one second to correct the image position, and the total correction time was seven seconds, which is an order of magnitude shorter than that using the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image. We can obtain a tilt
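
    The autofocus step described above (acquire a few defocused images, score their sharpness, fit a peaked curve) can be sketched as follows. This is an illustrative sketch only: the gradient-energy sharpness metric and the plain Gaussian model are assumptions standing in for the paper's metric and quasi-Gaussian fit.

```python
# Sketch of sharpness-based autofocus: score each defocused image, fit a
# Gaussian to sharpness vs. defocus, and return the estimated peak position.
import numpy as np
from scipy.optimize import curve_fit

def sharpness(image):
    """Gradient-energy sharpness of a 2-D image array (assumed metric)."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx**2 + gy**2))

def best_focus(defocus_values, images):
    """Estimate the defocus value that maximises sharpness."""
    x = np.asarray(defocus_values, dtype=float)
    s = np.array([sharpness(im) for im in images])
    gauss = lambda x, a, mu, sigma, c: a * np.exp(-(x - mu)**2 / (2 * sigma**2)) + c
    p0 = [s.max() - s.min(), x[np.argmax(s)], np.ptp(x) / 2, s.min()]
    popt, _ = curve_fit(gauss, x, s, p0=p0)
    return popt[1]   # mu: defocus giving maximum sharpness
```

    With only five sample images, a wide defocus step (as the abstract notes) keeps the sharpness differences well above the fit's noise floor.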

  14. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
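
    The frame-to-frame pose recovery and triangulation described above can be illustrated with standard two-view geometry. The sketch below is hypothetical (it is not the authors' software): it assumes grayscale frames, a known intrinsic matrix K for the endoscope camera, and uses common OpenCV building blocks in place of whatever the authors implemented.

```python
# Hypothetical sketch of image-based camera tracking between two endoscope
# frames: track features, estimate the essential matrix, recover R and t,
# and triangulate the surrounding structure (up to scale).
import cv2
import numpy as np

def relative_pose_and_points(img1, img2, K):
    # track corner features from the first frame into the second
    p1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
    good = status.ravel() == 1
    p1, p2 = p1[good].reshape(-1, 2), p2[good].reshape(-1, 2)

    # two-camera epipolar geometry: essential matrix -> rotation R, translation t
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, inliers = cv2.recoverPose(E, p1, p2, K, mask=inliers)

    # triangulate surface points seen by both views
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, p1.T.astype(np.float64), p2.T.astype(np.float64))
    return R, t, (X_h[:3] / X_h[3]).T        # N x 3 structure points
```

    Chaining the per-pair (R, t) estimates gives the camera path through the phantom; as the abstract notes, the absolute scale and the link to patient geometry must come from elsewhere.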

  15. On the use of Video Camera Systems in the Detection of Kuiper Belt Objects by Stellar Occultations

    NASA Astrophysics Data System (ADS)

    Subasinghe, Dilini

    2012-10-01

    Due to the distance between us and the Kuiper Belt, direct detection of Kuiper Belt Objects (KBOs) is not currently possible for objects less than 10 km in diameter. Indirect methods such as stellar occultations must be employed to remotely probe these bodies. The size and shape of a body, as well as its atmospheric properties and any ring system, can be determined through observations of stellar occultations. This method has been previously used with some success - Roques et al. (2006) detected 3 Trans-Neptunian objects; Schlichting et al. (2009) detected a single object in archival data. However, previous assessments of KBO occultation detection rates have been calculated only for telescopes - we extend this method to video camera systems. Building on Roques & Moncuquet (2000), we present a derivation that can be applied to any video camera system, taking into account camera specifications and diffraction effects. This allows for a determination of the number of observable KBO occultations per night. Example calculations are presented for some of the automated meteor camera systems currently in use at the University of Western Ontario. The results of this project will allow us to refine and improve our own camera system, as well as allow others to enhance their systems for KBO detection. Roques, F., Doressoundiram, A., Dhillon, V., Marsh, T., Bickerton, S., Kavelaars, J. J., Moncuquet, M., Auvergne, M., Belskaya, I., Chevreton, M., Colas, F., Fernandez, A., Fitzsimmons, A., Lecacheux, J., Mousis, O., Pau, S., Peixinho, N., & Tozzi, G. P. (2006). The Astronomical Journal, 132(2), 819-822. Roques, F., & Moncuquet, M. (2000). Icarus, 147(2), 530-544. Schlichting, H. E., Ofek, E. O., Wenz, M., Sari, R., Gal-Yam, A., Livio, M., Nelan, E., & Zucker, S. (2009). Nature, 462(7275), 895-897.
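
    The diffraction effect mentioned above is governed by the Fresnel scale, which is why camera exposure time enters the occultation-rate estimate. The short illustrative calculation below is not taken from the paper; the wavelength, distance, and shadow velocity are assumed round numbers.

```python
# Illustrative calculation: the Fresnel scale sets the minimum effective width
# of a KBO occultation shadow, so sub-km objects are diffraction-dominated.
import math

wavelength = 600e-9           # m, visible-band centre (assumption)
distance = 40 * 1.496e11      # m, ~40 AU to the Kuiper Belt (assumption)

fresnel_scale = math.sqrt(wavelength * distance / 2)
print(f"Fresnel scale at 40 AU: {fresnel_scale/1e3:.1f} km")       # ~1.3 km

crossing_time = fresnel_scale / 25e3   # s, at ~25 km/s shadow speed near opposition
print(f"Shadow crossing time: {crossing_time*1e3:.0f} ms")         # ~50 ms
# A camera therefore needs exposure times of a few tens of milliseconds or
# shorter before an occultation dip can be resolved rather than averaged away.
```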

  16. Foraging at the edge of the world: low-altitude, high-speed manoeuvering in barn swallows.

    PubMed

    Warrick, Douglas R; Hedrick, Tyson L; Biewener, Andrew A; Crandell, Kristen E; Tobalske, Bret W

    2016-09-26

    While prior studies of swallow manoeuvering have focused on slow-speed flight and obstacle avoidance in still air, swallows survive by foraging at high speeds in windy environments. Recent advances in field-portable, high-speed video systems, coupled with precise anemometry, permit measures of high-speed aerial performance of birds in a natural state. We undertook the present study to test: (i) the manner in which barn swallows (Hirundo rustica) may exploit wind dynamics and ground effect while foraging and (ii) the relative importance of flapping versus gliding for accomplishing high-speed manoeuvers. Using multi-camera videography synchronized with wind-velocity measurements, we tracked coursing manoeuvers in pursuit of prey. Wind speed averaged 1.3-2.0 m s(-1) across the atmospheric boundary layer, exhibiting a shear gradient greater than expected, with instantaneous speeds of 0.02-6.1 m s(-1). While barn swallows tended to flap throughout turns, they exhibited reduced wingbeat frequency, relying on glides and partial bounds during maximal manoeuvers. Further, the birds capitalized on the near-earth wind speed gradient to gain kinetic and potential energy during both flapping and gliding turns, providing evidence that such behaviour is not limited to large, fixed-wing soaring seabirds and that exploitation of wind gradients by small aerial insectivores may be a significant aspect of their aeroecology. This article is part of the themed issue 'Moving in a moving medium: new perspectives on flight'.

  17. Foraging at the edge of the world: low-altitude, high-speed manoeuvering in barn swallows.

    PubMed

    Warrick, Douglas R; Hedrick, Tyson L; Biewener, Andrew A; Crandell, Kristen E; Tobalske, Bret W

    2016-09-26

    While prior studies of swallow manoeuvering have focused on slow-speed flight and obstacle avoidance in still air, swallows survive by foraging at high speeds in windy environments. Recent advances in field-portable, high-speed video systems, coupled with precise anemometry, permit measures of high-speed aerial performance of birds in a natural state. We undertook the present study to test: (i) the manner in which barn swallows (Hirundo rustica) may exploit wind dynamics and ground effect while foraging and (ii) the relative importance of flapping versus gliding for accomplishing high-speed manoeuvers. Using multi-camera videography synchronized with wind-velocity measurements, we tracked coursing manoeuvers in pursuit of prey. Wind speed averaged 1.3-2.0 m s(-1) across the atmospheric boundary layer, exhibiting a shear gradient greater than expected, with instantaneous speeds of 0.02-6.1 m s(-1). While barn swallows tended to flap throughout turns, they exhibited reduced wingbeat frequency, relying on glides and partial bounds during maximal manoeuvers. Further, the birds capitalized on the near-earth wind speed gradient to gain kinetic and potential energy during both flapping and gliding turns, providing evidence that such behaviour is not limited to large, fixed-wing soaring seabirds and that exploitation of wind gradients by small aerial insectivores may be a significant aspect of their aeroecology. This article is part of the themed issue 'Moving in a moving medium: new perspectives on flight'. PMID:27528781

  18. Superplane! High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The High Speed Civil Transport (HSCT). This light-hearted promotional piece explains what the HSCT 'Superplane' is and what advantages it will have over current aircraft. As envisioned, the HSCT is a next-generation supersonic (faster than the speed of sound) passenger jet that would fly 300 passengers at more than 1,500 miles per hour -- more than twice the speed of sound. It will cross the Pacific or Atlantic in less than half the time of modern subsonic jets, and at a ticket price less than 20 percent above comparable, slower flights

  19. High-Speed TCP Testing

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Gassman, Holly; Beering, Dave R.; Welch, Arun; Hoder, Douglas J.; Ivancic, William D.

    1999-01-01

    Transmission Control Protocol (TCP) is the underlying protocol used within the Internet for reliable information transfer. As such, there is great interest in having all implementations of TCP efficiently interoperate. This is particularly important for links exhibiting long bandwidth-delay products. The tools exist to perform TCP analysis at low rates and low delays. However, for extremely high-rate and long-delay links such as 622 Mbps over geosynchronous satellites, new tools and testing techniques are required. This paper describes the tools and techniques used to analyze and debug various TCP implementations over high-speed, long-delay links.
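
    The bandwidth-delay product mentioned above is easy to quantify for the 622 Mbps geosynchronous case. The numbers below are a back-of-the-envelope sketch (the round-trip time is an assumed typical GEO value, not a figure from the paper).

```python
# Why GEO links stress TCP: the amount of data that must be "in flight" to
# keep a 622 Mbps pipe full over a ~550 ms round trip.
rate_bps = 622e6          # link rate quoted in the abstract, bits per second
rtt_s = 0.55              # typical geosynchronous round-trip time (assumption)

bdp_bits = rate_bps * rtt_s
print(f"Bandwidth-delay product: {bdp_bits/8/2**20:.0f} MiB in flight")   # ~41 MiB
# A classic 64 KiB TCP window fills well under 1% of this pipe, so window
# scaling (RFC 1323) and large buffers are prerequisites for meaningful tests.
```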

  20. High speed quantitative digital microscopy

    NASA Technical Reports Server (NTRS)

    Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.

    1984-01-01

    Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.

  1. Magneto-optical system for high speed real time imaging.

    PubMed

    Baziljevich, M; Barness, D; Sinvani, M; Perel, E; Shaulov, A; Yeshurun, Y

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat, to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30,000 frames per second have been demonstrated.

  2. Magneto-optical system for high speed real time imaging.

    PubMed

    Baziljevich, M; Barness, D; Sinvani, M; Perel, E; Shaulov, A; Yeshurun, Y

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat, to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30,000 frames per second have been demonstrated. PMID:22938303

  3. High-speed phosphor thermometry.

    PubMed

    Fuhrmann, N; Baum, E; Brübach, J; Dreizler, A

    2011-10-01

    Phosphor thermometry is a semi-invasive surface temperature measurement technique utilising the luminescence properties of doped ceramic materials. Typically, these phosphor materials are coated onto the object of interest and are excited by a short UV laser pulse. Up to now, primarily Q-switched laser systems with repetition rates of 10 Hz were employed for excitation. Accordingly, this diagnostic tool was not applicable to resolve correlated temperature transients at time scales shorter than 100 ms. This contribution reports on the first realisation of a high-speed phosphor thermometry system employing a highly repetitive laser in the kHz regime and a fast decaying phosphor. A suitable material was characterised regarding its temperature lifetime characteristic and its measurement precision. Additionally, the influence of laser power on the phosphor coating was investigated in terms of heating effects. A demonstration of this high-speed technique has been conducted inside the thermally highly transient system of an optically accessible internal combustion engine. Temperatures have been measured with a repetition rate of 6 kHz corresponding to one sample per crank angle degree at 1000 rpm. PMID:22047319
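
    The temperature in decay-time phosphor thermometry comes from fitting the luminescence decay after each laser pulse and mapping the lifetime through a calibration curve. The sketch below illustrates that generic idea only; the single-exponential model and the calibration arrays are assumptions, not the authors' evaluation procedure.

```python
# Sketch of lifetime-based phosphor thermometry: fit an exponential decay to
# the luminescence trace, then interpolate a lifetime-vs-temperature calibration.
import numpy as np
from scipy.optimize import curve_fit

def decay_time(t, signal):
    """Fit I(t) = A * exp(-t / tau) + offset and return tau."""
    model = lambda t, A, tau, off: A * np.exp(-t / tau) + off
    p0 = [signal.max() - signal.min(), t[len(t) // 3], signal.min()]
    (A, tau, off), _ = curve_fit(model, t, signal, p0=p0)
    return tau

def temperature(tau, tau_calibration, temp_calibration):
    """Map a fitted lifetime to temperature via a measured calibration table
    (hypothetical data, sorted by increasing temperature / decreasing tau)."""
    return np.interp(tau, tau_calibration[::-1], temp_calibration[::-1])
```

    At kHz repetition rates this fit must run once per pulse, which is why fast-decaying phosphors and simple decay models are preferred.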

  4. Remote Transmission at High Speed

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Omni and NASA Test Operations at Stennis entered a Dual-Use Agreement to develop the FOTR-125, a 125 megabit-per-second fiber-optic transceiver that allows accurate digital recordings over a great distance. The transceiver's fiber-optic link can be as long as 25 kilometers. This makes it much longer than the standard coaxial link, which can be no longer than 50 meters. The FOTR-125 utilizes laser diode transmitter modules and integrated receivers for the optical interface. Two transmitters and two receivers are employed at each end of the link with automatic or manual switchover to maximize the reliability of the communications link. NASA uses the transceiver in the Stennis High-Speed Data Acquisition System (HSDAS). The HSDAS consists of several identical systems installed on the Center's test stands to process all high-speed data related to its propulsion test programs. These transceivers allow the recorder and HSDAS controls to be located in the Test Control Center in a remote location while the digitizer is located on the test stand.

  5. High-speed phosphor thermometry

    NASA Astrophysics Data System (ADS)

    Fuhrmann, N.; Baum, E.; Brübach, J.; Dreizler, A.

    2011-10-01

    Phosphor thermometry is a semi-invasive surface temperature measurement technique utilising the luminescence properties of doped ceramic materials. Typically, these phosphor materials are coated onto the object of interest and are excited by a short UV laser pulse. Up to now, primarily Q-switched laser systems with repetition rates of 10 Hz were employed for excitation. Accordingly, this diagnostic tool was not applicable to resolve correlated temperature transients at time scales shorter than 100 ms. This contribution reports on the first realisation of a high-speed phosphor thermometry system employing a highly repetitive laser in the kHz regime and a fast decaying phosphor. A suitable material was characterised regarding its temperature lifetime characteristic and its measurement precision. Additionally, the influence of laser power on the phosphor coating was investigated in terms of heating effects. A demonstration of this high-speed technique has been conducted inside the thermally highly transient system of an optically accessible internal combustion engine. Temperatures have been measured with a repetition rate of 6 kHz corresponding to one sample per crank angle degree at 1000 rpm.

  6. High-speed phosphor thermometry.

    PubMed

    Fuhrmann, N; Baum, E; Brübach, J; Dreizler, A

    2011-10-01

    Phosphor thermometry is a semi-invasive surface temperature measurement technique utilising the luminescence properties of doped ceramic materials. Typically, these phosphor materials are coated onto the object of interest and are excited by a short UV laser pulse. Up to now, primarily Q-switched laser systems with repetition rates of 10 Hz were employed for excitation. Accordingly, this diagnostic tool was not applicable to resolve correlated temperature transients at time scales shorter than 100 ms. This contribution reports on the first realisation of a high-speed phosphor thermometry system employing a highly repetitive laser in the kHz regime and a fast decaying phosphor. A suitable material was characterised regarding its temperature lifetime characteristic and its measurement precision. Additionally, the influence of laser power on the phosphor coating was investigated in terms of heating effects. A demonstration of this high-speed technique has been conducted inside the thermally highly transient system of an optically accessible internal combustion engine. Temperatures have been measured with a repetition rate of 6 kHz corresponding to one sample per crank angle degree at 1000 rpm.

  7. 2048 line by 2048 pixel high-speed image processor for digital fluoroscopy

    NASA Astrophysics Data System (ADS)

    Beardslee, Andrew W.; Stevener, Timothy L.; Lutz, Norm M.; Breithaupt, Dave W.

    1995-04-01

    Utilizing advances in camera technology and electronic components while developing an optimized system architecture resulted in the development of a 2048 line by 2048 pixel by 10-bit high-speed image processor for digital fluoroscopy. The image processor is capable of image acquisition of progressive or interlaced 2 K images at 7.5 frames per second, as well as true progressive or interlaced 1 K by 1 K image acquisition at 30 frames per second. High-speed components, some specifically designed for the system, are applied to perform 2048 line by 2048 pixel image processing at the required speeds. A multimode high-resolution TV camera with a 2000 line Plumbicon tube is used, and the input video is sampled at 40 MHz to provide 10-bit digital image data. High-speed BTL imaging busses, 2 K video RAMs, and multiple processors are used within the system architecture to provide the required processing bandwidth. Images are compressed using 2 to 1 lossless compression, and optionally lossy compression, to increase system performance and provide a cost-effective method to achieve the required image storage capacity. A high-resolution monitor is used for image display, and a standard digital interface for hardcopy is provided which is capable of 2 K image transfer. A VME-based CPU with a real-time multitasking operating system is used for system control and image management. The system architecture provides multiple image processing busses designed to provide simultaneous acquisition, review, and hardcopy operations. Functionally, the system architecture supports image acquisition and digitization, real-time image processing and display, image storage to RAM, archival to a hard drive, and hardcopy of an image to a digital laser. In addition, interfaces with the x-ray generator and user interface devices are provided. The system may be configured to support multiple fluoroscopic suites, display configurations, and user interface stations. The 2048 line by 2048 pixel high-speed image
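
    The data rates implied by the figures quoted above are worth making explicit; the short calculation below uses only numbers stated in the abstract (frame sizes, bit depth, frame rates, 2:1 lossless compression) and is otherwise an illustration, not part of the original paper.

```python
# Back-of-the-envelope sustained data rates for the two acquisition modes.
bits_per_pixel = 10

raw_2k = 2048 * 2048 * bits_per_pixel * 7.5      # bits/s, 2K x 2K at 7.5 frames/s
raw_1k = 1024 * 1024 * bits_per_pixel * 30       # bits/s, 1K x 1K at 30 frames/s

for label, rate in [("2K mode", raw_2k), ("1K mode", raw_1k)]:
    print(f"{label}: {rate/8/1e6:.1f} MB/s raw, "
          f"{rate/8/1e6/2:.1f} MB/s after 2:1 lossless compression")
# Both modes work out to roughly 39 MB/s raw (about 20 MB/s compressed), which
# gives a sense of the bandwidth the imaging busses and storage path must carry.
```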

  8. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    SciTech Connect

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  9. High Speed Research - External Vision System (EVS)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Imagine flying a supersonic passenger jet (like the Concorde) at 1500 mph with no front windows in the cockpit - it may one day be a reality, as seen in this animation still. NASA engineers are working to develop technology that would replace the forward cockpit windows in future supersonic passenger jets with large sensor displays. These displays would use video images, enhanced by computer-generated graphics, to take the place of the view out the front windows. The envisioned eXternal Visibility System (XVS) would guide pilots to an airport, warn them of other aircraft near their path, and provide additional visual aids for airport approaches, landings, and takeoffs. Currently, supersonic transports like the Anglo-French Concorde droop the front of the jet (the 'nose') downward to allow the pilots to see forward during takeoffs and landings. By enhancing the pilots' vision with high-resolution video displays, future supersonic transport designers could eliminate the heavy and expensive mechanically drooped nose. A future U.S. supersonic passenger jet, as envisioned by NASA's High-Speed Research (HSR) program, would carry 300 passengers more than 5000 nautical miles at more than 1500 miles per hour (more than twice the speed of sound). Traveling from Los Angeles to Tokyo would take only four hours, with an anticipated fare increase of only 20 percent over current ticket prices for substantially slower subsonic flights. Animation by Joey Ponthieux, Computer Sciences Corporation, Inc.

  10. Hand-gesture extraction and recognition from the video sequence acquired by a dynamic camera using condensation algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Ohya, Jun

    2009-01-01

    To achieve environments in which humans and mobile robots co-exist, technologies for recognizing hand gestures from the video sequence acquired by a dynamic camera could be useful for human-to-robot interface systems. Most conventional hand-gesture technologies deal only with still-camera images. This paper proposes a very simple and stable method for extracting hand motion trajectories based on the Human-Following Local Coordinate System (HFLC System), which is obtained from the located human face and both hands. Then, we apply the Condensation Algorithm to the extracted hand trajectories so that the hand motion is recognized. We demonstrate the effectiveness of the proposed method by conducting experiments on 35 kinds of sign-language-based hand gestures.

  11. Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments

    ERIC Educational Resources Information Center

    Schultz, Patrick L.; Quinn, Andrew S.

    2014-01-01

    In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…

  12. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    ERIC Educational Resources Information Center

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  13. Biofeedback control analysis using a synchronized system of two CCD video cameras and a force-plate sensor

    NASA Astrophysics Data System (ADS)

    Tsuruoka, Masako; Shibasaki, Ryosuke; Murai, Shunji

    1999-01-01

    The biofeedback control analysis of human movement has become increasingly important in rehabilitation, sports medicine and physical fitness. In this study, a synchronized system was developed for acquiring sequential data of a person's movement. The setup employs a video recorder system linked with two CCD video cameras and a force-plate sensor system, which are configured to stop and start simultaneously. The feedback-controlled movement of postural stability was selected as the subject for analysis. The person's center of body gravity (COG) was calculated from the measured 3-D coordinates of major joints using videometry with bundle adjustment and self-calibration. The raw serial data of COG and of foot pressure measured by the force-plate sensor are difficult to analyze directly because of their complex fluctuations. Utilizing autoregressive modeling, the power spectrum and the impulse response of the movement factors enable analysis of their dynamic relations. This new biomedical engineering approach provides efficient information for the medical evaluation of a person's stability.
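
    The COG computation referred to above is, in its simplest form, a mass-weighted average of body-segment positions reconstructed from the two camera views. The sketch below is illustrative only; the segment mass fractions and the anthropometric model are assumptions, not the paper's procedure.

```python
# Minimal sketch: centre of gravity as the mass-weighted mean of 3-D segment
# centre positions obtained from videometry (mass fractions are assumed values).
import numpy as np

def centre_of_gravity(segment_centres, mass_fractions):
    """segment_centres: (N, 3) segment-centre coordinates in metres;
    mass_fractions: (N,) weights summing to 1 from an anthropometric table."""
    w = np.asarray(mass_fractions, dtype=float)
    c = np.asarray(segment_centres, dtype=float)
    return (w[:, None] * c).sum(axis=0) / w.sum()
```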

  14. In-situ measurements of alloy oxidation/corrosion/erosion using a video camera and proximity sensor with microcomputer control

    NASA Technical Reports Server (NTRS)

    Deadmore, D. L.

    1984-01-01

    Two noncontacting and nondestructive, remotely controlled methods of measuring the progress of oxidation/corrosion/erosion of metal alloys, exposed to flame test conditions, are described. The external diameter of a sample under test in a flame was measured by a video camera width measurement system. An eddy current proximity probe system, for measurements outside of the flame, was also developed and tested. The two techniques were applied to the measurement of the oxidation of 304 stainless steel at 910 C using a Mach 0.3 flame. The eddy current probe system yielded a recession rate of 0.41 mils diameter loss per hour and the video system gave 0.27.

  15. Activity profiles and hook-tool use of New Caledonian crows recorded by bird-borne video cameras

    PubMed Central

    Troscianko, Jolyon; Rutz, Christian

    2015-01-01

    New Caledonian crows are renowned for their unusually sophisticated tool behaviour. Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, which is largely owing to the difficulties in observing these shy forest birds. To obtain first estimates of activity budgets, as well as close-up observations of tool-assisted foraging, we equipped 19 wild crows with self-developed miniature video cameras, yielding more than 10 h of analysable video footage for 10 subjects. While only four crows used tools during recording sessions, they did so extensively: across all 10 birds, we conservatively estimate that tool-related behaviour occurred in 3% of total observation time, and accounted for 19% of all foraging behaviour. Our video-loggers provided first footage of crows manufacturing, and using, one of their most complex tool types—hooked stick tools—under completely natural foraging conditions. We recorded manufacture from live branches of paperbark (Melaleuca sp.) and another tree species (thought to be Acacia spirorbis), and deployment of tools in a range of contexts, including on the forest floor. Taken together, our video recordings reveal an ‘expanded’ foraging niche for hooked stick tools, and highlight more generally how crows routinely switch between tool- and bill-assisted foraging. PMID:26701755

  16. Activity profiles and hook-tool use of New Caledonian crows recorded by bird-borne video cameras.

    PubMed

    Troscianko, Jolyon; Rutz, Christian

    2015-12-01

    New Caledonian crows are renowned for their unusually sophisticated tool behaviour. Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, which is largely owing to the difficulties in observing these shy forest birds. To obtain first estimates of activity budgets, as well as close-up observations of tool-assisted foraging, we equipped 19 wild crows with self-developed miniature video cameras, yielding more than 10 h of analysable video footage for 10 subjects. While only four crows used tools during recording sessions, they did so extensively: across all 10 birds, we conservatively estimate that tool-related behaviour occurred in 3% of total observation time, and accounted for 19% of all foraging behaviour. Our video-loggers provided first footage of crows manufacturing, and using, one of their most complex tool types--hooked stick tools--under completely natural foraging conditions. We recorded manufacture from live branches of paperbark (Melaleuca sp.) and another tree species (thought to be Acacia spirorbis), and deployment of tools in a range of contexts, including on the forest floor. Taken together, our video recordings reveal an 'expanded' foraging niche for hooked stick tools, and highlight more generally how crows routinely switch between tool- and bill-assisted foraging. PMID:26701755

  17. Small Scale High Speed Turbomachinery

    NASA Technical Reports Server (NTRS)

    London, Adam P. (Inventor); Droppers, Lloyd J. (Inventor); Lehman, Matthew K. (Inventor); Mehra, Amitav (Inventor)

    2015-01-01

    A small scale, high speed turbomachine is described, as well as a process for manufacturing the turbomachine. The turbomachine is manufactured by diffusion bonding stacked sheets of metal foil, each of which has been pre-formed to correspond to a cross section of the turbomachine structure. The turbomachines include rotating elements as well as static structures. Using this process, turbomachines may be manufactured with rotating elements that have outer diameters of less than four inches in size, and/or blading heights of less than 0.1 inches. The rotating elements of the turbomachines are capable of rotating at speeds in excess of 150 feet per second. In addition, cooling features may be added internally to blading to facilitate cooling in high temperature operations.

  18. High speed laser tomography system.

    PubMed

    Samsonov, D; Elsaesser, A; Edwards, A; Thomas, H M; Morfill, G E

    2008-03-01

    A high speed laser tomography system was developed capable of acquiring three-dimensional (3D) images of optically thin clouds of moving micron-sized particles. It operates by parallel-shifting an illuminating laser sheet with a pair of galvanometer-driven mirrors and synchronously recording two-dimensional (2D) images of thin slices of the imaged volume. The maximum scanning speed achieved was 120,000 slices/s, sequences of 24 volume scans (up to 256 slices each) have been obtained. The 2D slices were stacked to form 3D images of the volume, then the positions of the particles were identified and followed in the consecutive scans. The system was used to image a complex plasma with particles moving at speeds up to cm/s.

  19. High-speed data search

    NASA Technical Reports Server (NTRS)

    Driscoll, James N.

    1994-01-01

    The high-speed data search system developed for KSC incorporates existing and emerging information retrieval technology to help a user intelligently and rapidly locate information found in large textual databases. This technology includes: natural language input; statistical ranking of retrieved information; an artificial intelligence concept called semantics, where 'surface level' knowledge found in text is used to improve the ranking of retrieved information; and relevance feedback, where user judgements about viewed information are used to automatically modify the search for further information. Semantics and relevance feedback are features of the system which are not available commercially. The system further demonstrates a focus on paragraphs of information to decide relevance, and it can be used (without modification) to intelligently search all kinds of document collections, such as collections of legal documents, medical documents, news stories, patents, and so forth. The purpose of this paper is to demonstrate the usefulness of statistical ranking, our semantic improvement, and relevance feedback.
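
    Two of the generic ideas named above, statistical ranking and relevance feedback, can be illustrated with a textbook TF-IDF plus Rocchio sketch. This is not the KSC system; the documents, weights, and query are invented for illustration.

```python
# Textbook illustration: TF-IDF cosine ranking followed by a Rocchio-style
# relevance-feedback update of the query vector.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["launch pad safety procedures", "payload integration checklist",
        "shuttle tile inspection report"]
vec = TfidfVectorizer()
D = vec.fit_transform(docs).toarray()            # documents as L2-normalised rows
q = vec.transform(["tile inspection"]).toarray()[0]

scores = D @ q                                   # cosine similarity = statistical ranking
order = np.argsort(scores)[::-1]

# Relevance feedback: nudge the query toward documents the user judged relevant
# and away from non-relevant ones (alpha/beta/gamma weights are assumptions).
relevant, nonrelevant = D[order[:1]], D[order[-1:]]
q_new = q + 0.75 * relevant.mean(axis=0) - 0.15 * nonrelevant.mean(axis=0)
new_order = np.argsort(D @ q_new)[::-1]          # re-rank with the updated query
```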

  20. High speed bus technology development

    NASA Astrophysics Data System (ADS)

    Modrow, Marlan B.; Hatfield, Donald W.

    1989-09-01

    The development and demonstration of the High Speed Data Bus system, a 50 Million bits per second (Mbps) local data network intended for avionics applications in advanced military aircraft is described. The Advanced System Avionics (ASA)/PAVE PILLAR program provided the avionics architecture concept and basic requirements. Designs for wire and fiber optic media were produced and hardware demonstrations were performed. An efficient, robust token-passing protocol was developed and partially demonstrated. The requirements specifications, the trade-offs made, and the resulting designs for both a coaxial wire media system and a fiber optics design are examined. Also, the development of a message-oriented media access protocol is described, from requirements definition through analysis, simulation and experimentation. Finally, the testing and demonstrations conducted on the breadboard and brassboard hardware is presented.

  1. Experiments on high speed ejectors

    NASA Technical Reports Server (NTRS)

    Wu, J. J.

    1986-01-01

    Experimental studies were conducted to investigate the flow and the performance of thrust augmenting ejectors for flight Mach numbers in the range of 0.5 to 0.8, primary air stagnation pressures up to 107 psig (738 kPa), and primary air stagnation temperatures up to 1250 F (677 C). The experiment verified the existence of the second solution ejector flow, where the flow after complete mixing is supersonic. Thrust augmentation in excess of 1.2 was demonstrated for both hot and cold primary jets. The experimental ejector performed better than the corresponding theoretical optimal first solution ejector, where the mixed flow is subsonic. Further studies are required to realize the full potential of the second solution ejector. The research program was started by the Flight Dynamics Research Corporation (FDRC) to investigate the characteristics of a high speed ejector which augments the thrust of a jet at high flight speeds.

  2. High speed laser tomography system.

    PubMed

    Samsonov, D; Elsaesser, A; Edwards, A; Thomas, H M; Morfill, G E

    2008-03-01

    A high speed laser tomography system was developed capable of acquiring three-dimensional (3D) images of optically thin clouds of moving micron-sized particles. It operates by parallel-shifting an illuminating laser sheet with a pair of galvanometer-driven mirrors and synchronously recording two-dimensional (2D) images of thin slices of the imaged volume. The maximum scanning speed achieved was 120,000 slices/s, sequences of 24 volume scans (up to 256 slices each) have been obtained. The 2D slices were stacked to form 3D images of the volume, then the positions of the particles were identified and followed in the consecutive scans. The system was used to image a complex plasma with particles moving at speeds up to cm/s. PMID:18377040

  3. High speed sampler and demultiplexer

    DOEpatents

    McEwan, T.E.

    1995-12-26

    A high speed sampling demultiplexer based on a plurality of sampler banks, each bank comprising a sample transmission line for transmitting an input signal, a strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates at respective positions along the sample transmission line for sampling the input signal in response to the strobe signal. Strobe control circuitry is coupled to the plurality of banks, and supplies a sequence of bank strobe signals to the strobe transmission lines in each of the plurality of banks, and includes circuits for controlling the timing of the bank strobe signals among the banks of samplers. Input circuitry is included for supplying the input signal to be sampled to the plurality of sample transmission lines in the respective banks. The strobe control circuitry can repetitively strobe the plurality of banks of samplers such that the banks of samplers are cycled to create a long sample length. Second tier demultiplexing circuitry is coupled to each of the samplers in the plurality of banks. The second tier demultiplexing circuitry senses the sample taken by the corresponding sampler each time the bank in which the sampler is found is strobed. A plurality of such samples can be stored by the second tier demultiplexing circuitry for later processing. Repetitive sampling with the high speed transient sampler induces an effect known as "strobe kickout". The sample transmission lines include structures which reduce strobe kickout to acceptable levels, generally 60 dB below the signal, by absorbing the kickout pulses before the next sampling repetition. 16 figs.

  4. High speed sampler and demultiplexer

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A high speed sampling demultiplexer based on a plurality of sampler banks, each bank comprising a sample transmission line for transmitting an input signal, a strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates at respective positions along the sample transmission line for sampling the input signal in response to the strobe signal. Strobe control circuitry is coupled to the plurality of banks, and supplies a sequence of bank strobe signals to the strobe transmission lines in each of the plurality of banks, and includes circuits for controlling the timing of the bank strobe signals among the banks of samplers. Input circuitry is included for supplying the input signal to be sampled to the plurality of sample transmission lines in the respective banks. The strobe control circuitry can repetitively strobe the plurality of banks of samplers such that the banks of samplers are cycled to create a long sample length. Second tier demultiplexing circuitry is coupled to each of the samplers in the plurality of banks. The second tier demultiplexing circuitry senses the sample taken by the corresponding sampler each time the bank in which the sampler is found is strobed. A plurality of such samples can be stored by the second tier demultiplexing circuitry for later processing. Repetitive sampling with the high speed transient sampler induces an effect known as "strobe kickout". The sample transmission lines include structures which reduce strobe kickout to acceptable levels, generally 60 dB below the signal, by absorbing the kickout pulses before the next sampling repetition.

  5. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photo-diode electronically without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With 20× reduction in readout speed, our CMOS image sensor only consumes 14 μW to provide 100 fps videos.
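
    The measurement model behind pixel-wise coded exposure is simple to sketch: each pixel integrates the scene only during the sub-frames its code leaves "open", and one coded frame is read out per slow readout period. The sketch below illustrates the forward model only; the mask statistics, sizes, and the actual statistical reconstruction are assumptions, not the prototype's design.

```python
# Forward model of pixel-wise coded exposure: T fast sub-frames are multiplied
# by per-pixel binary exposure codes and summed into one slow readout frame.
import numpy as np

def coded_exposure(video, masks):
    """video: (T, H, W) sub-frames; masks: (T, H, W) binary exposure codes.
    Returns the single coded frame the sensor reads out."""
    return (masks * video).sum(axis=0)

T, H, W = 20, 90, 127                      # 20 sub-frames folded into one readout
rng = np.random.default_rng(0)
video = rng.random((T, H, W))              # stand-in for the true high-rate scene
masks = (rng.random((T, H, W)) < 0.5).astype(float)   # each pixel open ~half the time

coded = coded_exposure(video, masks)       # one (H, W) frame at the slow readout rate
# Recovering the T sub-frames from this single frame is underdetermined, so the
# reconstruction step relies on a sparsity/statistical prior (the CS inversion).
```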

  6. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person s gaze in real time. The system was described in High-Speed Noninvasive Eye-Tracking System (NPO-30700) NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eyetracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eyetracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea
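
    The row-slice idea described above can be sketched directly: scan the horizontal lines of the ROI, locate the dark pupil run and the bright corneal glint on each line, and average the slice end coordinates into centroids. This is an illustrative sketch under assumed thresholds, not the flight algorithm.

```python
# Sketch of row-slice pupil/glint localisation in an infrared eye image ROI.
import numpy as np

def slice_centroids(roi, pupil_thresh=40, glint_thresh=220):
    """roi: 2-D uint8 array containing the eye. Returns (pupil_xy, glint_xy)."""
    pupil_pts, glint_pts = [], []
    for y, row in enumerate(roi):
        dark = np.flatnonzero(row < pupil_thresh)     # pupil is darker than iris/sclera
        bright = np.flatnonzero(row > glint_thresh)   # corneal reflection is brightest
        if dark.size:                                  # midpoint of the slice end coordinates
            pupil_pts.append((0.5 * (dark[0] + dark[-1]), y))
        if bright.size:
            glint_pts.append((0.5 * (bright[0] + bright[-1]), y))
    pupil = np.mean(pupil_pts, axis=0) if pupil_pts else None
    glint = np.mean(glint_pts, axis=0) if glint_pts else None
    return pupil, glint    # the gaze estimate follows from the pupil-glint offset
```

    Because only the small ROI is read from the sensor and each row is processed independently, this kind of scan keeps up with high frame rates.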

  7. Lights, Camera: Learning! Findings from studies of video in formal and informal science education

    NASA Astrophysics Data System (ADS)

    Borland, J.

    2013-12-01

    As part of the panel, media researcher Jennifer Borland will highlight findings from a variety of studies of videos across the spectrum of formal to informal learning, including schools, museums, and viewers' homes. In her presentation, Borland will assert that the viewing context matters a great deal, but there are some general take-aways that can be extrapolated to the use of educational video in a variety of settings. Borland has served as an evaluator on several video-related projects funded by NASA and the National Science Foundation, including: Data Visualization videos and Space Shows developed by the American Museum of Natural History, DragonflyTV, Earth the Operators Manual, The Music Instinct and Time Team America.

  8. Recent advances in high speed photography and associated technologies in the USA

    SciTech Connect

    Paisley, D.L.

    1988-01-01

    In the past decade, high speed photography has been rapidly incorporating electro-optics. More recently, optoelectronics and digital recording of images for specialized laboratory cameras and commercially available systems have helped broaden the versatility and applications of high speed photography and photonics. This paper will highlight some of these technologies and specialized systems. 10 refs., 22 figs.

  9. Lights, camera, action…critique? Submit videos to AGU communications workshop

    NASA Astrophysics Data System (ADS)

    Viñas, Maria-José

    2011-08-01

    What does it take to create a science video that engages the audience and draws thousands of views on YouTube? Those interested in finding out should submit their research-related videos to AGU's Fall Meeting science film analysis workshop, led by oceanographer turned documentary director Randy Olson. Olson, writer-director of two films (Flock of Dodos: The Evolution-Intelligent Design Circus and Sizzle: A Global Warming Comedy) and author of the book Don't Be Such a Scientist: Talking Substance in an Age of Style, will provide constructive criticism on 10 selected video submissions, followed by moderated discussion with the audience. To submit your science video (5 minutes or shorter), post it on YouTube and send the link to the workshop coordinator, Maria-José Viñas (mjvinas@agu.org), with the following subject line: Video submission for Olson workshop. AGU will be accepting submissions from researchers and media officers of scientific institutions until 6:00 P.M. eastern time on Friday, 4 November. Those whose videos are selected to be screened will be notified by Friday, 18 November. All are welcome to attend the workshop at the Fall Meeting.

  10. Faster Is Better: High-Speed Modems.

    ERIC Educational Resources Information Center

    Roth, Cliff

    1995-01-01

    Discusses using high-speed modems to access the Internet. Examines internal and external modems, data speeds, compression and error reduction, faxing and voice capabilities, and software features. Considers ISDN (Integrated Services Digital Network) as the future replacement of high-speed modems. Sidebars present high-speed modem product…

  11. Spatial and temporal scales of shoreline morphodynamics derived from video camera observations for the island of Sylt, German Wadden Sea

    NASA Astrophysics Data System (ADS)

    Blossier, Brice; Bryan, Karin R.; Daly, Christopher J.; Winter, Christian

    2016-08-01

    Spatial and temporal scales of beach morphodynamics were assessed for the island of Sylt, German Wadden Sea, based on continuous video camera monitoring data from 2011 to 2014 along a 1.3 km stretch of sandy beach. They served to quantify, at this location, the amount of shoreline variability covered by beach monitoring schemes, depending on the time interval and alongshore resolution of the surveys. Correlation methods, used to quantify the alongshore spatial scales of shoreline undulations, were combined with semi-empirical modelling and spectral analyses of shoreline temporal fluctuations. The data demonstrate that an alongshore resolution of 150 m and a monthly survey time interval capture 70% of the kilometre-scale shoreline variability over the 2011-2014 study period. An alongshore spacing of 10 m and a survey time interval of 5 days would be required to monitor 95% variance of the shoreline temporal fluctuations with steps of 5% changes in variance over space. Although monitoring strategies such as land or airborne surveying are reliable methods of data collection, video camera deployment remains the cheapest technique providing the high spatiotemporal resolution required to monitor subkilometre-scale morphodynamic processes involving, for example, small- to middle-sized beach nourishment.
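
    A minimal sketch of the bookkeeping behind the coverage percentages quoted above is given below, assuming a shoreline-position matrix sampled densely in time and alongshore distance from the video data; the paper's actual correlation analysis and semi-empirical modelling are not reproduced here.

```python
# Illustrative sketch: fraction of shoreline variance retained when the dense
# video record is subsampled to a coarser survey interval and alongshore spacing.
import numpy as np

def variance_captured(shoreline, dt_full, dt_survey, dx_full, dx_survey):
    """shoreline: (time, alongshore) matrix of shoreline positions.
    dt_* in days, dx_* in metres; returns retained variance as a fraction."""
    step_t = max(1, int(round(dt_survey / dt_full)))
    step_x = max(1, int(round(dx_survey / dx_full)))
    sub = shoreline[::step_t, ::step_x]
    return sub.var() / shoreline.var()
```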

  12. A high speed imaging system for nuclear diagnostics

    NASA Technical Reports Server (NTRS)

    Eyer, H. H.

    1976-01-01

    A high speed imaging system based on state-of-the-art photosensor arrays was designed for use in nuclear diagnostics. The system is comprised of a front end rapid scan solid state camera, a high speed digitizer, and a PCM line driver in a downhole package and a memory buffer system in an uphole trailer. The downhole camera takes a snapshot of a nuclear device created flux stream, digitizes the image and transmits it to the uphole memory system before being destroyed. The memory system performs two functions: it retains the data for local display and processing by a microprocessor, and it buffers the data for retransmission at slower rates to a computational facility. In the talk, the impetus for such a system as well as its operation was discussed, along with systems under development which incorporate higher data rates and more resolution.

  13. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    SciTech Connect

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  14. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGESBeta

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  15. High-speed observations of Transient Luminous Events and Lightning (The 2008/2009 Ebro Valley campaign)

    NASA Astrophysics Data System (ADS)

    Montanyà, Joan; van der Velde, Oscar; Soula, Serge; Romero, David; Pineda, Nicolau; Solà, Glòria; March, Víctor

    2010-05-01

    The future ASIM mission will provide X- and gamma-ray detections from space to investigate the origins of Terrestrial Gamma-ray Flashes and their possible relation to transient luminous events (TLE). In order to support the future space observations we are setting up ground infrastructure in the Ebro Valley region (northeast of Spain). At the end of 2008 and during 2009 we carried out our first observation campaign in order to acquire the experience needed to support the future ASIM mission. From January 2008 to February 2009 we focused on the observation of TLEs with our intensified high-speed camera system. We recorded 14 sprites, 19 elves and, in three sprites, we also observed halos (Montanyà et al. 2009). Unfortunately, no high-speed records of TLEs were observed in the range of the (XDDE) VHF network. However, we recorded several tens of TLEs at normal frame rate (25 fps) which are in the XDDE range (Van der Velde et al., 2009). Additionally, in August 2009 we installed our first camera for TLE observation in the Caribbean. The camera is located on San Andrés Isl. (Colombia). From June 2009 to October 2009 we focused all of our efforts on recording lightning at high speed (10000 fps), vertical close electric fields, and x-ray emissions from lightning. We recorded around 60 lightning flashes but clearly evidenced high-energy detections in only one flash. The detections were produced during the leader phase of a cloud-to-ground flash. The leader signature on the recorded electric field was very short (around 1 ms) and, during this period, a burst of high-energy emissions was detected. A few detections were then produced just after the return stroke. The experience of this preliminary campaign has given us the basis for future campaigns, in which we plan to have two high-speed cameras and a Lightning Mapping Array. References Montanyà et al. (2009). High-Speed Intensified Video Recordings of Sprites and Elves over the Western Mediterranean Sea

  16. ADVANCED HIGH SPEED PROGRAMMABLE PREFORMING

    SciTech Connect

    Norris Jr, Robert E; Lomax, Ronny D; Xiong, Fue; Dahl, Jeffrey S; Blanchard, Patrick J

    2010-01-01

    Polymer-matrix composites offer greater stiffness and strength per unit weight than conventional materials, resulting in new opportunities for lightweighting of automotive and heavy vehicles. Other benefits include design flexibility, less corrosion susceptibility, and the ability to tailor properties to specific load requirements. However, widespread implementation of structural composites requires lower-cost manufacturing processes than those that are currently available. Advanced, directed-fiber preforming processes have demonstrated exceptional value for rapid preforming of large, glass-reinforced, automotive composite structures. This is due to process flexibility and an inherently low material scrap rate. Hence directed-fiber preforming processes offer a low-cost manufacturing methodology for producing preforms for a variety of structural automotive components. This paper describes work conducted at the Oak Ridge National Laboratory (ORNL), focused on the development and demonstration of a high speed chopper gun to enhance throughput capabilities. ORNL and the Automotive Composites Consortium (ACC) revised the design of a standard chopper gun to expand the operational envelope, enabling delivery of up to 20 kg/min. A prototype unit was fabricated and used to demonstrate continuous chopping of multiple rovings at high output over extended periods. In addition, fiber handling system modifications were completed to sustain the high output the modified chopper affords. These hardware upgrades are documented along with results of process characterization and capabilities assessment.

  17. High-speed pressure clamp.

    PubMed

    Besch, Stephen R; Suchyna, Thomas; Sachs, Frederick

    2002-10-01

    We built a high-speed, pneumatic pressure clamp to stimulate patch-clamped membranes mechanically. The key control element is a newly designed differential valve that uses a single, nickel-plated piezoelectric bending element to control both pressure and vacuum. To minimize response time, the valve body was designed with minimum dead volume. The result is improved response time and stability with a threefold decrease in actuation latency. Tight valve clearances minimize the steady-state air flow, permitting us to use small resonant-piston pumps to supply pressure and vacuum. To protect the valve from water contamination in the event of a broken pipette, an optical sensor detects water entering the valve and increases pressure rapidly to clear the system. The open-loop time constant for pressure is 2.5 ms for a 100-mmHg step, and the closed-loop settling time is 500-600 µs. Valve actuation latency is 120 µs. The system performance is illustrated for mechanically induced changes in patch capacitance.

  18. Quiet High-Speed Fan

    NASA Technical Reports Server (NTRS)

    Lieber, Lysbeth; Repp, Russ; Weir, Donald S.

    1996-01-01

    A calibration of the acoustic and aerodynamic prediction methods was performed and a baseline fan definition was established and evaluated to support the quiet high speed fan program. A computational fluid dynamic analysis of the NASA QF-12 Fan rotor, using the DAWES flow simulation program was performed to demonstrate and verify the causes of the relatively poor aerodynamic performance observed during the fan test. In addition, the rotor flowfield characteristics were qualitatively compared to the acoustic measurements to identify the key acoustic characteristics of the flow. The V072 turbofan source noise prediction code was used to generate noise predictions for the TFE731-60 fan at three operating conditions and compared to experimental data. V072 results were also used in the Acoustic Radiation Code to generate far field noise for the TFE731-60 nacelle at three speed points for the blade passage tone. A full 3-D viscous flow simulation of the current production TFE731-60 fan rotor was performed with the DAWES flow analysis program. The DAWES analysis was used to estimate the onset of multiple pure tone noise, based on predictions of inlet shock position as a function of the rotor tip speed. Finally, the TFE731-60 fan rotor wake structure predicted by the DAWES program was used to define a redesigned stator with the leading edge configured to minimize the acoustic effects of rotor wake / stator interaction, without appreciably degrading performance.

  19. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photo-diode electronically without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps videos. PMID:27137331
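
    The record itself contains no code, but the sampling principle it describes can be sketched as a forward model: during each readout period every pixel integrates light only within its own short, individually coded exposure window, so one coded frame summarizes many scene sub-frames. The Python sketch below uses illustrative dimensions and a random single-window code (an assumption, not the sensor's actual code); the paper's compressive-sensing reconstruction is not reproduced here.

      import numpy as np

      def coded_exposure_capture(frames, masks):
          """Forward model of pixel-wise coded exposure (a sketch, not the sensor's circuit).
          frames: (T, H, W) high-rate scene sub-frames; masks: (T, H, W) binary exposure codes.
          Returns one coded low-rate image: each pixel's sum over its open window."""
          return (frames * masks).sum(axis=0)

      # Illustrative example: 20 sub-frames collapse into one readout (20x readout reduction),
      # mirroring the 100 fps video reconstructed from 5 fps coded images in the abstract.
      rng = np.random.default_rng(0)
      T, H, W = 20, 90, 127
      frames = rng.random((T, H, W))
      start = rng.integers(0, T, size=(H, W))            # each pixel opens for one random sub-frame
      masks = (np.arange(T)[:, None, None] == start[None]).astype(float)
      coded = coded_exposure_capture(frames, masks)
      print(coded.shape)                                 # (90, 127)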

  20. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
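
    The core image-generation step described here (a view-dependent perspective image from an omnidirectional video stream) corresponds to resampling a panoramic frame along the rays of a virtual pinhole camera. The Python sketch below assumes an equirectangular panorama and nearest-neighbour sampling; the function name, parameters and projection details are illustrative and not taken from the paper, which performs this in real time.

      import numpy as np

      def perspective_from_equirect(pano, yaw, pitch, fov_deg=60.0, out_size=(240, 320)):
          """Sample a pinhole-perspective view from an equirectangular panorama.
          pano: (Hp, Wp) or (Hp, Wp, 3) omnidirectional frame; yaw, pitch in radians.
          A sketch with nearest-neighbour sampling; a real-time system would do this on GPU."""
          Hp, Wp = pano.shape[:2]
          H, W = out_size
          f = 0.5 * W / np.tan(0.5 * np.radians(fov_deg))
          # rays in camera coordinates (z forward, x right, y down)
          xs = np.arange(W) - 0.5 * (W - 1)
          ys = np.arange(H) - 0.5 * (H - 1)
          x, y = np.meshgrid(xs, ys)
          d = np.stack([x, y, np.full_like(x, f)], axis=-1)
          d /= np.linalg.norm(d, axis=-1, keepdims=True)
          # rotate rays by pitch (about x) then yaw (about y)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
          Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          d = d @ (Ry @ Rx).T
          # convert ray directions to panorama pixel coordinates
          lon = np.arctan2(d[..., 0], d[..., 2])            # -pi..pi
          lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))    # -pi/2..pi/2
          u = ((lon / (2 * np.pi) + 0.5) * (Wp - 1)).astype(int)
          v = ((lat / np.pi + 0.5) * (Hp - 1)).astype(int)
          return pano[v, u]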

  1. Advances In High-Speed Photography 1972-1983

    NASA Astrophysics Data System (ADS)

    Courtney-Pratt, J. S.

    1984-01-01

    The variety, range and precision of methods available for photographic recording of fast phenomena have been increasing steadily. The capabilities of some of the newer techniques will be described. At the lower end of the speed range, the advances have been mainly in improvements in resolution, and in the introduction of video techniques. At the highest speeds the advances have included increases in dynamic range, a wider acceptance of image tubes, and a more careful analysis and characterization of their limitations. The variety, range and precision of methods available for photographic recording of fast phenomena have been increasing steadily. The capabilities of the newer techniques are considered, classifying the methods by the kind of record obtained. Descriptions of experimental techniques and apparatus, and illustrations, are given in an earlier article entitled, "A Review of the Methods of High-Speed Photography," published in Reports on Progress in Physics in 1957;[1

  2. and in "Advances in High Speed Photography 1957-1972," published in the Proceedings of the Tenth International Congress on High Speed Photography (HSP10) [116] and also in JSMPTE 82, 167-175 (1973) [117]. This present paper is in the nature of a survey of the limits to which the various techniques have been pressed as compared to the limits attained, or reported in the open literature, at the date of the reviews 10 and 25 years ago. There are a number of recent books and articles which also provide excellent surveys and impressive bibliographies [129-138]. Streak records with drum cameras can give a time resolution of 5 × 10⁻⁹ s [2,3]. Rotating mirror streak cameras with a single reflection [15] at present approach 10⁻⁹ s and may with multiple reflections achieve 10⁻¹⁰ s. The Schardin limit [4] for presently available rotor materials is 0.25 × 10⁻⁹ s, but this is predicated upon a single reflection of the light beam from the rotor and can be surpassed if the camera is designed to take

  3. Clinical diagnostic of pleural effusions using a high-speed viscosity measurement method

    NASA Astrophysics Data System (ADS)

    Hurth, Cedric; Klein, Katherine; van Nimwegen, Lena; Korn, Ronald; Vijayaraghavan, Krishnaswami; Zenhausern, Frederic

    2011-08-01

    We present a novel bio-analytical method to discriminate between transudative and exudative pleural effusions based on a high-speed video analysis of a solid glass sphere impacting a liquid. Since the result depends on the solution viscosity, it can ultimately replace the battery of biochemical assays currently used. We present results obtained on a series of 7 pleural effusions obtained from consenting patients by analyzing both the splash observed after the glass impactor hits the liquid surface and the sphere's motion in a configuration reminiscent of the drop-ball viscometer, with added sensitivity and throughput provided by the high-speed camera. The results demonstrate distinction between the pleural effusions and good correlation with the fluid chemistry analysis to accurately differentiate exudates and transudates for clinical purposes. The exudative effusions display a viscosity around 1.39 ± 0.08 cP whereas the transudative effusion was measured at 0.89 ± 0.09 cP, in good agreement with previous reports.
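
    For readers unfamiliar with the drop-ball configuration mentioned above, the classical Stokes-law relation converts a small sphere's terminal velocity into a viscosity estimate. The sketch below is a generic falling-ball calculation with illustrative numbers; it is not the authors' splash analysis, and none of the values are from the study.

      def stokes_viscosity(radius_m, rho_sphere, rho_fluid, v_terminal, g=9.81):
          """Falling-ball (Stokes) viscosity estimate, valid at low Reynolds number:
              eta = 2 r^2 (rho_sphere - rho_fluid) g / (9 v)
          Returns dynamic viscosity in Pa*s (1 Pa*s = 1000 cP)."""
          return 2.0 * radius_m**2 * (rho_sphere - rho_fluid) * g / (9.0 * v_terminal)

      # Illustrative numbers only (not from the study): a 50 micrometre glass sphere
      # sinking at ~8.2 mm/s in a watery fluid corresponds to roughly 1 cP.
      eta = stokes_viscosity(radius_m=50e-6, rho_sphere=2500.0, rho_fluid=1000.0, v_terminal=8.2e-3)
      print(f"{eta * 1000:.2f} cP")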

  4. Visual surveys can reveal rather different 'pictures' of fish densities: Comparison of trawl and video camera surveys in the Rockall Bank, NE Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    McIntyre, F. D.; Neat, F.; Collie, N.; Stewart, M.; Fernandes, P. G.

    2015-01-01

    Visual surveys allow non-invasive sampling of organisms in the marine environment, which is of particular importance in deep-sea habitats that are vulnerable to damage caused by destructive sampling devices such as bottom trawls. To enable visual surveying at depths greater than 200 m we used a deep-towed video camera system to survey large areas around the Rockall Bank in the North East Atlantic. The area of seabed sampled was similar to that sampled by a bottom trawl, enabling samples from the towed video camera system to be compared with trawl sampling to quantitatively assess the numerical density of deep-water fish populations. The two survey methods provided different results for certain fish taxa and comparable results for others. Fish that exhibited a detectable avoidance behaviour to the towed video camera system, such as the Chimaeridae, resulted in mean density estimates that were significantly lower (121 fish/km²) than those determined by trawl sampling (839 fish/km²). On the other hand, skates and rays showed no reaction to the lights in the towed body of the camera system, and mean density estimates of these were an order of magnitude higher (64 fish/km²) than the trawl (5 fish/km²). This is probably because these fish can pass under the footrope of the trawl due to their flat body shape lying close to the seabed but are easily detected by the benign towed video camera system. For other species, such as Molva sp., estimates of mean density were comparable between the two survey methods (towed camera, 62 fish/km²; trawl, 73 fish/km²). The towed video camera system presented here can be used as an alternative benign method for providing indices of abundance for species such as ling in areas closed to trawling, or for those fish that are poorly monitored by trawl surveying in any area, such as the skates and rays.

  5. Optimising camera traps for monitoring small mammals.

    PubMed

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  6. Video-based realtime IMU-camera calibration for robot navigation

    NASA Astrophysics Data System (ADS)

    Petersen, Arne; Koch, Reinhard

    2012-06-01

    This paper introduces a new method for fast calibration of inertial measurement units (IMU) rigidly coupled to cameras. That is, the relative rotation and translation between the IMU and the camera is estimated, allowing for the transfer of IMU data to the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman Filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e. SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in realtime, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity of optimizing the used trajectory online, increasing the quality and minimizing the time effort for calibration. Except for a marker pattern, used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time. Depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows for calibrating the system at startup. By this, deviations in the calibration due to transport and storage can be compensated. The estimation quality and consistency are evaluated in dependency of the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. It is analyzed how different types of visual markers, i.e. 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on the applicability to robot systems. The algorithm is implemented using a modular software framework, such that it can be adapted to altered conditions easily.

  7. "Lights, Camera, Reflection": Using Peer Video to Promote Reflective Dialogue among Student Teachers

    ERIC Educational Resources Information Center

    Harford, Judith; MacRuairc, Gerry; McCartan, Dermot

    2010-01-01

    This paper examines the use of peer-videoing in the classroom as a means of promoting reflection among student teachers. Ten pre-service teachers participating in a teacher education programme in a university in the Republic of Ireland and ten pre-service teachers participating in a teacher education programme in a university in the North of…

  8. Authentic Camera-Produced Materials for Random-Access Video Delivery.

    ERIC Educational Resources Information Center

    Lide, Francis; Lide, Barbara

    A need exists for a large fund of short, disconnected video materials designed specifically for rapid selective access and for use either in the classroom or the language laboratory, starting at the earliest stages of instruction. A theoretical model for creating these materials could be the language learner in public places in the target culture,…

  9. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  10. High-speed cineradiography using electronic imaging

    NASA Astrophysics Data System (ADS)

    Lucero, Jacob P.; Fry, David A.; Gaskill, William E.; Henderson, R. L.; Crawford, Ted R.; Carey, N. E.

    1993-01-01

    The Los Alamos National Laboratory has constructed and is now operating a cineradiography system for imaging and evaluation of ballistic interaction events at the 1200 meter range of the Terminal Effects Research and Analysis (TERA) Group at the New Mexico Institute of Mining and Technology. Cineradiography is part of a complete firing, tracking, and analysis system at the range. The cine system consists of flash x-ray sources illuminating a one-half meter by two meter fast phosphor screen which is viewed by gated-intensified high resolution still video cameras via turning mirrors. The entire system is armored to protect against events containing up to 13.5 kg of high explosive. Digital images are available for immediate display and processing. The system is capable of frame rates up to 10⁵/sec for up to five total images.

  11. High speed cineradiography using electronic imaging

    NASA Astrophysics Data System (ADS)

    Lucero, J. P.; Fry, D. A.; Gaskill, W. E.; Henderson, R. L.; Crawford, T. R.; Carey, N. E.

    1992-12-01

    The Los Alamos National Laboratory has constructed and is now operating a cineradiography system for imaging and evaluation of ballistic interaction events at the 1200 meter range of the Terminal Effects Research and Analysis (TERA) Group at the New Mexico Institute of Mining and Technology. Cineradiography is part of a complete firing, tracking, and analysis system at the range. The cine system consists of flash x-ray sources illuminating a one-half meter by two meter fast phosphor screen which is viewed by gated-intensified high resolution still video cameras via turning mirrors. The entire system is armored to protect against events containing up to 13.5 kg of high explosive. Digital images are available for immediate display and processing. The system is capable of frame rates up to 10⁵/sec for up to five total images.

  12. Real-time multi-camera video acquisition and processing platform for ADAS

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.

  13. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    Visual surveillance systems for law enforcement or police case investigation are different from traditional applications, as they are designed to monitor pedestrians, vehicles or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and risk entropy is introduced to model the requirements of a police surveillance task on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of PTZ cameras.

  14. High speed, real-time, camera bandwidth converter

    DOEpatents

    Bower, Dan E; Bloom, David A; Curry, James R

    2014-10-21

    Image data from a CMOS sensor with 10 bit resolution is reformatted in real time to allow the data to stream through communications equipment that is designed to transport data with 8 bit resolution. The incoming image data has 10 bit resolution. The communication equipment can transport image data with 8 bit resolution. Image data with 10 bit resolution is transmitted in real-time, without a frame delay, through the communication equipment by reformatting the image data.
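
    The abstract describes reformatting 10-bit pixels so they can traverse 8-bit transport in real time without a frame delay. One straightforward, lossless way to do this is to pack four 10-bit values into five bytes; the layout sketched below is an assumption for illustration only and is not necessarily the patented scheme.

      import numpy as np

      def pack_10bit_to_bytes(pixels):
          """Pack 10-bit pixel values losslessly into an 8-bit stream (4 pixels -> 5 bytes).
          One possible layout, not necessarily the patented one."""
          assert len(pixels) % 4 == 0
          out = bytearray()
          for a, b, c, d in np.asarray(pixels, dtype=np.uint16).reshape(-1, 4):
              bits = (int(a) << 30) | (int(b) << 20) | (int(c) << 10) | int(d)  # 40 bits total
              out += bits.to_bytes(5, "big")
          return bytes(out)

      def unpack_bytes_to_10bit(stream):
          """Inverse of pack_10bit_to_bytes."""
          pixels = []
          for i in range(0, len(stream), 5):
              bits = int.from_bytes(stream[i:i + 5], "big")
              pixels += [(bits >> 30) & 0x3FF, (bits >> 20) & 0x3FF,
                         (bits >> 10) & 0x3FF, bits & 0x3FF]
          return np.array(pixels, dtype=np.uint16)

      raw = np.array([0, 1023, 512, 77, 5, 900, 333, 4], dtype=np.uint16)
      assert np.array_equal(unpack_bytes_to_10bit(pack_10bit_to_bytes(raw)), raw)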

  15. Electro-optical time marker for high-speed cameras

    NASA Technical Reports Server (NTRS)

    Copeland, J. T., Jr.

    1970-01-01

    Electro-optical device converts high-frequency electrical pulses into permanent optical records on film. Accurate, well defined images are formed of electronic pulses having repetition rates greater than 10,000 pulses/sec and pulse widths of 20 microseconds or less. Small electronic switch drives a silicon carbide electroluminescent diode.

  16. Review of high speed communications photomultiplier detectors

    NASA Technical Reports Server (NTRS)

    Enck, R. S.; Abraham, W. G.

    1978-01-01

    Four types of newly developed high speed photomultipliers are discussed: all electrostatic; static crossed field; dynamic crossed field; and hybrid (EBS). Design, construction, and performance parameters of each class are presented along with limitations of each class of device and prognosis for its future in high speed light detection. The particular advantage of these devices lies in high speed applications using low photon flux, large cathode areas, and broadband optical detection.

  17. The Future Of High Speed Photography

    NASA Astrophysics Data System (ADS)

    Courtney-Pratt, J. S.

    1987-09-01

    The variety, range and precision of methods available for photographic recording of fast phenomena have been increasing steadily. The capabilities of the techniques are considered, classifying the methods by the kind of record obtained. Descriptions of experimental techniques and apparatus, and illustrations, are given in earlier articles: "A Review of the Methods of High-Speed Photography," Reports on Progress in Physics in 1957; "Advances in High-Speed Photography 1957-1972," Proceedings of the Tenth International Congress on High-Speed Photography and also JSMPTE 82, pp. 167-175 (1973); "Advances in High-Speed Photography," updated to 1983 in the Proceedings of SPIE Volume 427.

  18. High-Speed Ring Bus

    NASA Technical Reports Server (NTRS)

    Wysocky, Terry; Kopf, Edward, Jr.; Katanyoutananti, Sunant; Steiner, Carl; Balian, Harry

    2010-01-01

    The high-speed ring bus at the Jet Propulsion Laboratory (JPL) allows for future growth trends in spacecraft seen with future scientific missions. This innovation constitutes an enhancement of the 1393 bus as documented in the Institute of Electrical and Electronics Engineers (IEEE) 1393-1999 standard for a spaceborne fiber-optic data bus. It allows for high-bandwidth and time synchronization of all nodes on the ring. The JPL ring bus allows for interconnection of active units with autonomous operation and increased fault handling at high bandwidths. It minimizes the flight software interface with an intelligent physical layer design that has few states to manage as well as simplified testability. The design will soon be documented in the AS-1393 standard (Serial Hi-Rel Ring Network for Aerospace Applications). The framework is designed for "Class A" spacecraft operation and provides redundant data paths. It is based on "fault containment regions" and "redundant functional regions (RFR)" and has a method for allocating cables that completely supports the redundancy in spacecraft design, allowing for a complete RFR to fail. This design reduces the mass of the bus by incorporating both the Control Unit and the Data Unit in the same hardware. The standard uses ATM (asynchronous transfer mode) packets, standardized by ITU-T, ANSI, ETSI, and the ATM Forum. The IEEE-1393 standard uses the UNI form of the packet and provides no protection for the data portion of the cell. The JPL design adds optional formatting to this data portion. This design extends fault protection beyond that of the interconnect. This includes adding protection to the data portion that is contained within the Bus Interface Units (BIUs) and by adding to the signal interface between the Data Host and the JPL 1393 Ring Bus. Data transfer on the ring bus does not involve a master or initiator. Following bus protocol, any BIU may transmit data on the ring whenever it has data received from its host. There

  19. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    NASA Astrophysics Data System (ADS)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were performed to recognize the extracted hand trajectories. In our previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob changes. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of sign-language-based Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.

  20. Observation and analysis of high-speed human motion with frequent occlusion in a large area

    NASA Astrophysics Data System (ADS)

    Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng

    2009-12-01

    The use of computer vision technology in collecting and analyzing statistics during sports matches or training sessions is expected to provide valuable information for tactics improvement. However, the measurements published in the literature so far are either too unreliably documented to be used in training planning, due to their limitations, or unsuitable for studying high-speed motion in a large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of a moving camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global parallax based matching points filter are used, to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a joint color model constrained by the relationship between two regions and a Markov chain Monte Carlo based joint particle filter are emphasized, dividing the human body into two related key regions. Several field tests were performed to assess measurement errors, including comparison to popular algorithms. With the help of the system presented, position data are obtained on a 30 m × 60 m rink with root-mean-square error better than 0.3975 m, and velocity and acceleration data with absolute error better than 1.2579 m s⁻¹ and 0.1494 m s⁻², respectively.

  21. Liquid-crystal-display projector-based modulation transfer function measurements of charge-coupled-device video camera systems.

    PubMed

    Teipen, B T; MacFarlane, D L

    2000-02-01

    We demonstrate the ability to measure the system modulation transfer function (MTF) of both color and monochrome charge-coupled-device (CCD) video camera systems with a liquid-crystal-display (LCD) projector. Test matrices programmed to the LCD projector were chosen primarily to have a flat power spectral density (PSD) when averaged along one dimension. We explored several matrices and present results for a matrix produced with a random-number generator, a matrix of sequency-ordered Walsh functions, a pseudorandom Hadamard matrix, and a pseudorandom uniformly redundant array. All results are in agreement with expected filtering. The Walsh matrix and the Hadamard matrix show excellent agreement with the matrix from the random-number generator. We show that shift-variant effects between the LCD array and the CCD array can be kept small. This projector test method offers convenient measurement of the MTF of a low-cost video system. Such characterization is useful for an increasing number of machine vision applications and metrology applications. PMID:18337921
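
    The key property of the test matrices mentioned above is a flat power spectral density when averaged along one dimension. A quick way to check a candidate matrix is sketched below in Python; scipy's hadamard builds the ±1 matrix (Walsh ordering differs only by a row permutation), and how those entries map onto projector gray levels is left as an assumption, since the paper's exact matrices are not reproduced here.

      import numpy as np
      from scipy.linalg import hadamard

      def row_averaged_psd(matrix):
          """Average the 1-D power spectral density over all rows of a test matrix."""
          m = matrix - matrix.mean()                # remove the DC offset
          spectra = np.abs(np.fft.rfft(m, axis=1)) ** 2
          return spectra.mean(axis=0)

      n = 256
      hadamard_mat = hadamard(n)                    # +/-1 entries
      random_mat = np.where(np.random.default_rng(1).random((n, n)) > 0.5, 1, -1)

      for name, mat in [("Hadamard", hadamard_mat), ("random +/-1", random_mat)]:
          psd = row_averaged_psd(mat)[1:]           # skip the DC bin
          print(name, "PSD max/min ratio:", psd.max() / psd.min())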

  1. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    PubMed Central

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit, using a single charge-coupled device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

  2. Novel VLSI architecture for edge detection and image enhancement on video camera chips

    NASA Astrophysics Data System (ADS)

    Hammadou, Tarik; Bouzerdoum, Abdesselam; Bermak, Amine; Boussaid, Farid; Biglari, Moteza

    2001-05-01

    In this paper an image enhancement technique is described. It is based on Shunting Inhibitory Cellular Neural Networks (SICNNs). As the limitations of linear approaches to image coding, enhancement, and feature extraction became apparent, research in image processing dispersed into three goal-driven directions. However, the SICNN model simultaneously addresses the three problems of coding, enhancement, and extraction, as it acts to compress the dynamic range, reorganize the signal to improve visibility, suppress noise, and identify local features. The algorithm we describe is simple and cost-effective, and can be easily applied in real-time processing for digital still camera applications.

  3. High-speed imaging of blood splatter patterns

    SciTech Connect

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J. ); Levine, G.F. . Bureau of Forensic Services)

    1993-01-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  4. High-speed imaging of blood splatter patterns

    SciTech Connect

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F.

    1993-05-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  5. High-speed pulse-shape generator, pulse multiplexer

    DOEpatents

    Burkhart, Scott C.

    2002-01-01

    The invention combines arbitrary amplitude high-speed pulses for precision pulse shaping for the National Ignition Facility (NIF). The circuitry combines arbitrary height pulses which are generated by replicating scaled versions of a trigger pulse and summing them delayed in time on a pulse line. The combined electrical pulses are connected to an electro-optic modulator which modulates a laser beam. The circuit can also be adapted to combine multiple channels of high speed data into a single train of electrical pulses which generates the optical pulses for very high speed optical communication. The invention has application in laser pulse shaping for inertial confinement fusion, in optical data links for computers, telecommunications, and in laser pulse shaping for atomic excitation studies. The invention can be used to effect at least a 10× increase in all fiber communication lines. It allows a greatly increased data transfer rate between high-performance computers. The invention is inexpensive enough to bring high-speed video and data services to homes through a super modem.
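
    The summing idea described in this record (delayed, individually scaled replicas of one trigger pulse added on a pulse line) can be illustrated numerically. The sketch below uses a Gaussian trigger shape and arbitrary weights and delays purely for illustration; none of the values correspond to the actual NIF hardware.

      import numpy as np

      def shaped_pulse(t, trigger, delays, weights):
          """Sum delayed, scaled replicas of a single trigger pulse (the summing idea
          described above; the real hardware and values are not reproduced here).
          t: time axis (s); trigger: callable pulse shape; delays (s); weights: scale factors."""
          return sum(w * trigger(t - d) for d, w in zip(delays, weights))

      t = np.linspace(0, 5e-9, 2000)                          # 5 ns window
      gaussian = lambda x: np.exp(-0.5 * (x / 0.15e-9) ** 2)  # ~0.35 ns FWHM trigger (illustrative)
      delays = np.arange(8) * 0.4e-9                          # 8 replicas, 400 ps apart
      weights = [0.2, 0.45, 0.7, 1.0, 1.0, 0.8, 0.5, 0.25]    # arbitrary ramped shape
      envelope = shaped_pulse(t, gaussian, delays, weights)
      print(envelope.max())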

  6. High-Speed Photography with Computer Control.

    ERIC Educational Resources Information Center

    Winters, Loren M.

    1991-01-01

    Describes the use of a microcomputer as an intervalometer for the control and timing of several flash units to photograph high-speed events. Applies this technology to study the oscillations of a stretched rubber band, the deceleration of high-speed projectiles in water, the splashes of milk drops, and the bursts of popcorn kernels. (MDH)

  7. Reducing Heating In High-Speed Cinematography

    NASA Technical Reports Server (NTRS)

    Slater, Howard A.

    1989-01-01

    Infrared-absorbing and infrared-reflecting glass filters simple and effective means for reducing rise in temperature during high-speed motion-picture photography. "Hot-mirror" and "cold-mirror" configurations, employed in projection of images, helps prevent excessive heating of scenes by powerful lamps used in high-speed photography.

  8. High-Speed Imaging of Shock-Wave Motion in Aviation Security Research

    NASA Astrophysics Data System (ADS)

    Anderson, B. W.; Settles, G. S.; Miller, J. D.; Keane, B. T.; Gatto, J. A.

    2001-11-01

    A high-speed drum camera is used in conjunction with Penn State's Full-Scale Schlieren Facility to capture blast wave phenomena in aviation security scenarios. Several hundred photographic frames at a rate of 45k frames/sec allow the imaging of entire explosive events typical of blasts inside an aircraft fuselage. The large (2.3 x 2.9 m) schlieren field-of-view further allows these experiments to be done at or near full-scale. Shock waves up to Mach 1.3 are produced by detonating small balloons filled with an oxygen-acetylene gas mixture. Blasts underneath actual aircraft seats occupied by mannequins reveal shock motion inside a passenger cabin. Blasts were also imaged within the luggage container of a 3/5-scale aircraft fuselage, including hull-holing, as occurred in the Pan Am 103 incident. Drum-camera frames are assembled into digital video clips of several seconds duration, which will be shown in the presentation. These brief movies provide the first-ever visualization of shock motion due to explosions onboard aircraft. They also illustrate the importance of shock imaging in aircraft-hardening experiments, and they provide data to validate numerical predictions of such events. Supported by FAA Grant 99-G-032.

  9. Record And Analysis Of High-Speed Photomicrography On Rheology Of Red Blood Cells In Vivo

    NASA Astrophysics Data System (ADS)

    Jian, Zhang; Yuju, Lin; Jizong, Wu; Qiang, Wang; Guishan, Li; Ni, Liang

    1989-06-01

    Microcirculation is the basic functional unit of blood circulation in the human body. The uptake of oxygen and the discharge of carbon dioxide in the human body are accomplished through the flow and deformation of red blood cells (RBC) in capillaries. The rheology of RBC performs an important function in maintaining normal blood irrigation and nutritional metabolism. Obviously, for understanding blood irrigation, the dynamic mechanism of RBC, blood cell microrheology, the laws of microcirculation and the causes of disease, it is of great significance to study quantitatively the rheology of RBC in the capillaries of live animals. In recent years, Tianjin University, cooperating with the Institute of Hematology, used high-speed photomicrography to record the flow states of RBC in the capillaries of the hamster cheek pouch and the frog web. Several systems were assembled through the study of luminous energy transmission, illumination systems and optical matching. These systems included a micro high-speed camera system, a micro high-speed video recorder system and a micro high-speed camera system combined with an image enhancement tube. Useful results were obtained from the photography of the flow states of RBC, film analysis and data processing. These results provide beneficial data on the dynamic mechanism by which RBC are deformed by different blood flow fields.

  10. Faster than g, revisited with high-speed imaging

    NASA Astrophysics Data System (ADS)

    Vollmer, Michael; Möllmann, Klaus-Peter

    2012-09-01

    The introduction of modern high-speed cameras in physics teaching provides a tool not only for easy visualization, but also for quantitative analysis of many simple though fast occurring phenomena. As an example, we present a very well-known demonstration experiment—sometimes also discussed in the context of falling chimneys—which is commonly described as faster than gravity, faster than g, free fall paradox or simply falling stick. So far, only a few experimental investigations have utilized photography with a maximum of 41 frames s⁻¹. In this work, high-speed imaging with 1000 fps was used to verify theoretical predictions for the classical experiment. In addition, a modified experiment was performed to better distinguish various theoretical outcomes and also visualize the underlying physics. The topic is well suited for student projects in undergraduate courses which combine experimental laboratory work with computer modelling.
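
    For reference, the textbook rigid-rod analysis behind the "faster than g" demonstration is short; the sketch below states the standard result and is not necessarily in the notation of the article itself.

      % Uniform rod of length L, hinged at its lower end, released at angle \theta above the horizontal:
      %   I\,\ddot{\theta} = -\tfrac{1}{2} m g L \cos\theta, \qquad I = \tfrac{1}{3} m L^{2}
      % so the angular acceleration magnitude is
      %   |\ddot{\theta}| = \frac{3 g \cos\theta}{2 L},
      % and the downward (vertical) acceleration of the free tip is
      %   a_{\mathrm{tip}} = L\,|\ddot{\theta}|\cos\theta = \tfrac{3}{2}\, g \cos^{2}\theta .
      % The tip therefore falls faster than g whenever \cos^{2}\theta > 2/3,
      % i.e. for release angles below roughly 35.3 degrees from the horizontal.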

  11. Social Interactions of Juvenile Brown Boobies at Sea as Observed with Animal-Borne Video Cameras

    PubMed Central

    Yoda, Ken; Murakoshi, Miku; Tsutsui, Kota; Kohno, Hiroyoshi

    2011-01-01

    While social interactions play a crucial role on the development of young individuals, those of highly mobile juvenile birds in inaccessible environments are difficult to observe. In this study, we deployed miniaturised video recorders on juvenile brown boobies Sula leucogaster, which had been hand-fed beginning a few days after hatching, to examine how social interactions between tagged juveniles and other birds affected their flight and foraging behaviour. Juveniles flew longer with congeners, especially with adult birds, than solitarily. In addition, approximately 40% of foraging occurred close to aggregations of congeners and other species. Young seabirds voluntarily followed other birds, which may directly enhance their foraging success and improve foraging and flying skills during their developmental stage, or both. PMID:21573196

  12. A simple, inexpensive video camera setup for the study of avian nest activity

    USGS Publications Warehouse

    Sabine, J.B.; Meyers, J.M.; Schweitzer, Sara H.

    2005-01-01

    Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus), American Crow (Corvus brachyrhynchos), and ghost crab (Ocypode quadrata) predation on oystercatcher nests. Other detected causes of nest failure included tidal overwash, horse trampling, abandonment, and human destruction. System failure rates were comparable with commercially available units. Our system's efficacy and low cost (<$800) provided useful data for the management and conservation of the American Oystercatcher.

  13. High Speed Quantum Key Distribution Over Optical Fiber Network System.

    PubMed

    Ma, Lijun; Mink, Alan; Tang, Xiao

    2009-01-01

    The National Institute of Standards and Technology (NIST) has developed a number of complete fiber-based high-speed quantum key distribution (QKD) systems that include an 850 nm QKD system for a local area network (LAN), a 1310 nm QKD system for a metropolitan area network (MAN), and a 3-node quantum network controlled by a network manager. This paper discusses the key techniques used to implement these systems, which include polarization recovery, noise reduction, frequency up-conversion detection based on a periodically poled lithium niobate (PPLN) waveguide, custom high-speed data handling boards and quantum network management. Using our quantum network, a QKD-secured video surveillance application has been demonstrated. Our intention is to show the feasibility and sophistication of QKD systems based on current technology. PMID:27504218

  14. High-Speed Schlieren Movies of Decelerators at Supersonic Speeds

    NASA Technical Reports Server (NTRS)

    1960-01-01

    High-Speed Schlieren Movies of Decelerators at Supersonic Speeds. Tests were conducted on several types of porous parachutes, a paraglider, and a simulated retrorocket. Mach numbers ranged from 1.8-3.0, porosity from 20-80 percent, and camera speeds from 1680-3000 frames per second (fps) in trials with porous parachutes. Trials of reefed parachutes were conducted at Mach number 2.0 and reefing of 12-33 percent at camera speeds of 600 fps. A flexible parachute with an inflatable ring in the periphery of the canopy was tested at Reynolds number 750,000 per foot, Mach number 2.85, porosity of 28 percent, and camera speed of 3600 fps. A vortex-ring parachute was tested at Mach number 2.2 and camera speed of 3000 fps. The paraglider, with a sweepback of 45 degrees at an angle of attack of 45 degrees, was tested at Mach number 2.65, drag coefficient of 0.200, and lift coefficient of 0.278 at a camera speed of 600 fps. A cold air jet exhausting upstream from the center of a bluff body was used to simulate a retrorocket. The free-stream Mach number was 2.0, free-stream dynamic pressure was 620 lb/sq ft, jet-exit static pressure ratio was 10.9, and camera speed was 600 fps. [Entire movie available on DVD from CASI as Doc ID 20070030973. Contact help@sti.nasa.gov

  15. Analysis of Small-Scale Convective Dynamics in a Crown Fire Using Infrared Video Camera Imagery.

    NASA Astrophysics Data System (ADS)

    Clark, Terry L.; Radke, Larry; Coen, Janice; Middleton, Don

    1999-10-01

    vortex tilting but in the sense that the tilted vortices come together to form the hairpin shape. As the vortices rise and come closer together their combined motion results in the vortex tilting forward at a relatively sharp angle, giving a hairpin shape. The development of these hairpin vortices over a range of scales may represent an important mechanism through which convection contributes to the fire spread. A major problem with the IR data analysis is understanding fully what it is that the camera is sampling, in order physically to interpret the data. The results indicate that because of the large amount of after-burning incandescent soot associated with the crown fire, the camera was viewing only a shallow depth into the flame front, and variabilities in the distribution of hot soot particles provide the structures necessary to derive image flow fields. The coherency of the derived horizontal velocities supports this view because if the IR camera were seeing deep into or through the flame front, then the effect of the ubiquitous vertical rotations almost certainly would result in random and incoherent estimates for the horizontal flow fields. Animations of the analyzed imagery showed a remarkable level of consistency in both horizontal and vertical velocity flow structures from frame to frame in support of this interpretation. The fact that the 2D image represents a distorted surface also must be taken into account when interpreting the data. Suggestions for further field experimentation, software development, and testing are discussed in the conclusions. These suggestions may further understanding on this topic and increase the utility of this type of analysis to wildfire research.

  16. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    PubMed

    Heaslip, Susan G; Iverson, Sara J; Bowen, W Don; James, Michael C

    2012-01-01

    The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ d⁻¹ but were as high as 167,797 kJ d⁻¹, corresponding to turtles consuming an average of 330 kg wet mass d⁻¹ (up to 840 kg d⁻¹) or approximately 261 (up to 664) jellyfish d⁻¹. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass d⁻¹, equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to

  17. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have more advantages than traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we design a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is the XC6SL75T, and we take advantage of the CameraLink interface and the AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed, 500 frames per second CMOS image sensor with global shutter and 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which can convert 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is photographed by the CMOS detectors. The CMOS sensor converts the light to electronic signals and then sends them to the FPGA. The FPGA processes the data it receives and transmits them, through the CameraLink interface configured in full mode, to an upper computer equipped with acquisition cards. The PC then stores, visualizes and processes the images. The structure and principle of the system are explained in this paper, and the hardware and software design of the system is introduced. The FPGA provides the drive clock for the CMOS sensor. The data from the CMOS sensor are converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents a row transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.

  18. Evaluation of a high dynamic range video camera with non-regular sensor

    NASA Astrophysics Data System (ADS)

    Schöberl, Michael; Keinert, Joachim; Ziegler, Matthias; Seiler, Jürgen; Niehaus, Marco; Schuller, Gerald; Kaup, André; Foessel, Siegfried

    2013-01-01

    Although there is steady progress in sensor technology, imaging with a high dynamic range (HDR) is still difficult for motion imaging with high image quality. This paper presents our new approach for video acquisition with high dynamic range. The principle is based on optical attenuation of some of the pixels of an existing image sensor. This well known method traditionally trades spatial resolution for an increase in dynamic range. In contrast to existing work, we use a non-regular pattern of optical ND filters for attenuation. This allows for an image reconstruction that is able to recover high resolution images. The reconstruction is based on the assumption that natural images can be represented nearly sparse in transform domains, which allows for recovery of scenes with high detail. The proposed combination of non-regular sampling and image reconstruction leads to a system with an increase in dynamic range without sacrificing spatial resolution. In this paper, a further evaluation is presented on the achievable image quality. In our prototype we found that crosstalk is present and significant. The discussion thus shows the limits of the proposed imaging system.
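
    The acquisition principle above (a fixed, non-regular pattern of ND-attenuated pixels that stay unsaturated in bright regions) can be illustrated with a toy simulation. The sketch below assumes a 12-bit sensor, a 16x ND factor and a naive per-pixel merge; the paper's sparse-recovery reconstruction, which restores full spatial resolution, is deliberately not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      H, W, full_well = 256, 256, 4095              # 12-bit sensor assumed for illustration
      nd_factor = 16.0                              # assumed ND-filter attenuation
      scene = rng.random((H, W)) * 16 * full_well   # radiance exceeding the sensor range

      # non-regular pattern: ~25% of pixels carry the ND filter
      nd_mask = rng.random((H, W)) < 0.25
      gain = np.where(nd_mask, 1.0 / nd_factor, 1.0)
      captured = np.clip(scene * gain, 0, full_well)

      # naive per-pixel HDR merge: rescale ND pixels, keep unattenuated pixels only if unsaturated
      hdr = np.where(nd_mask, captured * nd_factor,
                     np.where(captured < full_well, captured, np.nan))
      # the paper instead fills these gaps by sparse reconstruction in a transform domain
      print(np.isnan(hdr).mean())                   # fraction of pixels left for reconstruction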

  19. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  20. Synchronizing Photography For High-Speed-Engine Research

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1989-01-01

    Light flashes when shaft reaches predetermined angle. Synchronization system facilitates visualization of flow in high-speed internal-combustion engines. Designed for cinematography and holographic interferometry, system synchronizes camera and light source with predetermined rotational angle of engine shaft. 10-bit resolution of absolute optical shaft encoder adapted, and 2^10 combinations of 10-bit binary data computed to corresponding angle values. Pre-computed angle values programmed into EPROM's (erasable programmable read-only memories) for use as angle lookup table. Resolves shaft angle to within 0.35 degree at rotational speeds up to 73,240 revolutions per minute.
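
    The angle lookup described above is easy to reproduce: 2^10 encoder codes map linearly onto one revolution, which gives the quoted ~0.35-degree resolution. The Python sketch below assumes a plain-binary absolute encoder (a real device may output Gray code, which would need decoding first); names and structure are illustrative only.

      import numpy as np

      BITS = 10
      CODES = 2 ** BITS                              # 1024 absolute positions per revolution

      # precompute the table once, much as an EPROM lookup table would hold it
      angle_lut = np.arange(CODES) * 360.0 / CODES   # degrees; resolution 360/1024 = 0.3516 deg

      def shaft_angle(code):
          """Convert a 10-bit absolute-encoder code to shaft angle in degrees.
          Assumes plain binary output; a Gray-coded encoder would be decoded first."""
          return angle_lut[code & (CODES - 1)]

      print(angle_lut[1], "deg per count")           # ~0.35 deg, matching the quoted resolution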

  1. Insect wing deformation measurements using high speed digital holographic interferometry.

    PubMed

    Aguayo, Daniel D; Mendoza Santoyo, Fernando; De la Torre-I, Manuel H; Salas-Araiza, Manuel D; Caloca-Mendez, Cristian; Gutierrez Hernandez, David Asael

    2010-03-15

    An out-of-plane digital holographic interferometry system is used to detect and measure insect wing micro-deformations. The in vivo flapping phenomenon is recorded using a high-power CW laser and a high-speed camera. A series of digital holograms with the deformation encoded are obtained. Full field deformation maps are presented for an eastern tiger swallowtail butterfly (Pterourus multicaudata). Results show that the deformations are neither uniform nor symmetrical between wings. These deformations are on the order of hundreds of nanometers over the entire surface. Out-of-plane deformation maps are presented using the unwrapped phase maps. PMID:20389581

  2. ControlNet features high speed

    SciTech Connect

    McEldowney, D.

    1996-11-01

    ControlNet is a high-speed, high-capacity network providing a connection among controllers and I/O subsystems. It was designed for applications in which data integrity, determinism, high speeds, and high data capacities are required. ControlNet addresses applications needing tighter control over processes as well as demanding remote I/O or interlocked PLC applications, both discrete- and process-related. Some examples include high-speed conveyors, transfer lines, cut-to-length lines, high-speed assembly, bottling, and packaging. Process examples, or those typically requiring heavy remote analog I/O, include water/wastewater, test stands, chemical, beverage, food, marine control, and utility balance-of-plant.

  3. Lubrication and cooling for high speed gears

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.

    1985-01-01

    The problems and failures occurring with the operation of high speed gears are discussed. The gearing losses associated with high speed gearing such as tooth mesh friction, bearing friction, churning, and windage are discussed with various ways shown to help reduce these losses and thereby improve efficiency. Several different methods of oil jet lubrication for high speed gearing are given such as into mesh, out of mesh, and radial jet lubrication. The experiments and analytical results for the various methods of oil jet lubrication are shown with the strengths and weaknesses of each method discussed. The analytical and experimental results of gear lubrication and cooling at various test conditions are presented. These results show the very definite need of improved methods of gear cooling at high speed and high load conditions.

  4. Damping Bearings In High-Speed Turbomachines

    NASA Technical Reports Server (NTRS)

    Von Pragenau, George L.

    1994-01-01

    Paper presents comparison of damping bearings with traditional ball, roller, and hydrostatic bearings in high-speed cryogenic turbopumps. Concept of damping bearings described in "Damping Seals and Bearings for a Turbomachine" (MFS-28345).

  5. Active Structured Learning for High-Speed Object Detection

    NASA Astrophysics Data System (ADS)

    Lampert, Christoph H.; Peters, Jan

    High-speed, smooth, and accurate visual tracking of objects in arbitrary, unstructured environments is essential for robotics and human motion analysis. However, building a system that can adapt to arbitrary objects and a wide range of lighting conditions is a challenging problem, especially if hard real-time constraints apply, as in robotics scenarios. In this work, we introduce a method for learning a discriminative object tracking system based on the recent structured regression framework for object localization. Using a kernel function that allows fast evaluation on the GPU, the resulting system can process video streams at speeds of 100 frames per second or more.

  6. Investigation of diesel injection jets using high-speed photography and speed holography

    NASA Astrophysics Data System (ADS)

    Eisfeld, Fritz

    1991-04-01

    To reduce the particle emission of a diesel engine it is necessary to improve our knowledge of the penetration and spreading of an injection jet. Therefore the motion of the fuel jet and its break-up within the orifice, and also in a test chamber, were investigated using high-speed cinematography. The possibility of using high-speed holography was also tested, and a new drum camera was developed.

  7. Bird-Borne Video-Cameras Show That Seabird Movement Patterns Relate to Previously Unrevealed Proximate Environment, Not Prey

    PubMed Central

    Tremblay, Yann; Thiebault, Andréa; Mullers, Ralf; Pistorius, Pierre

    2014-01-01

    The study of ecological and behavioral processes has been revolutionized in the last two decades with the rapid development of biologging-science. Recently, using image-capturing devices, some pilot studies demonstrated the potential of understanding marine vertebrate movement patterns in relation to their proximate, as opposed to remote sensed, environmental contexts. Here, using miniaturized video cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape gannet, is fundamental in understanding the origins of its movement patterns. We found that movement patterns were related to specific stimuli, which were mostly other predators such as gannets, dolphins, or fishing boats. Contrary to a widely accepted idea, our data suggest that foraging seabirds are not directly looking for prey. Instead, they search for indicators of the presence of prey, the latter being targeted at the very last moment and at a very small scale. We demonstrate that movement patterns of foraging seabirds can be heavily driven by processes unobservable with conventional methodology. Except perhaps for large-scale processes, local enhancement seems to be the only ruling mechanism; this has profound implications for ecosystem-based management of marine areas. PMID:24523892

  8. Development of Dynamic Spatial Video Camera (DSVC) for 4D observation, analysis and modeling of human body locomotion.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hayashibe, Mitsuhiro; Suzuki, Shigeyuki; Otake, Yoshito

    2003-01-01

    We have developed an imaging system for free and quantitative observation of human locomotion in a time-spatial domain by way of real-time imaging. The system is equipped with 60 computer-controlled video cameras to film human locomotion from all angles simultaneously. Images are loaded into the main graphics workstation and assembled into a 2D image matrix. The subject can be observed from any chosen direction by selecting the viewpoint from the appropriate image sequence in this matrix. The system can also reconstruct 4D models of the subject's moving body by using the 60 images taken from all directions at a particular instant, and it can visualize inner structures such as the skeletal or muscular systems of the subject by compositing computer graphics reconstructed from the MRI data set. We are planning to apply this imaging system to clinical observation in the areas of orthopedics, rehabilitation, and sports science.

  9. Assessing the application of an airborne intensified multispectral video camera to measure chlorophyll a in three Florida estuaries

    SciTech Connect

    Dierberg, F.E.; Zaitzeff, J.

    1997-08-01

    After absolute and spectral calibration, an airborne intensified, multispectral video camera was field tested for water quality assessments over three Florida estuaries (Tampa Bay, Indian River Lagoon, and the St. Lucie River Estuary). Univariate regression analysis of upwelling spectral energy vs. ground-truthed uncorrected chlorophyll a (Chl a) for each estuary yielded lower coefficients of determination (R^2) with increasing concentrations of Gelbstoff within an estuary. More predictive relationships were established by adding true color as a second independent variable in a bivariate linear regression model. These regressions successfully explained most of the variation in upwelling light energy (R^2 = 0.94, 0.82 and 0.74 for the Tampa Bay, Indian River Lagoon, and St. Lucie estuaries, respectively). Ratioed wavelength bands within the 625-710 nm range produced the highest correlations with ground-truthed uncorrected Chl a, and were similar to those reported as being the most predictive for Chl a in Tennessee reservoirs. However, the ratioed wavebands producing the best predictive algorithms for Chl a differed among the three estuaries due to the effects of varying concentrations of Gelbstoff on upwelling spectral signatures, which precluded combining the data into a common data set for analysis.
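
    As a rough illustration of the bivariate regression step described above, the following sketch fits upwelling energy to chlorophyll a and true color with ordinary least squares. The array names and numerical values are placeholders, not data from the study.

```python
# Minimal sketch of a bivariate linear regression of the kind described above.
# All values are made-up placeholders, not measurements from the three estuaries.
import numpy as np

chl_a = np.array([2.1, 5.4, 8.9, 12.3, 20.7])           # ground-truthed Chl a (ug/L)
true_color = np.array([15.0, 22.0, 30.0, 41.0, 55.0])   # true color (Pt-Co units)
upwelling = np.array([0.31, 0.42, 0.55, 0.61, 0.78])    # band-ratioed upwelling energy

# Design matrix: intercept, Chl a, and true color as the two predictors.
X = np.column_stack([np.ones_like(chl_a), chl_a, true_color])
coef, *_ = np.linalg.lstsq(X, upwelling, rcond=None)

predicted = X @ coef
ss_res = np.sum((upwelling - predicted) ** 2)
ss_tot = np.sum((upwelling - upwelling.mean()) ** 2)
print("coefficients:", coef, "R^2 =", 1.0 - ss_res / ss_tot)
```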

  10. High speed photography, videography, and photonics V; Proceedings of the Meeting, San Diego, CA, Aug. 17-19, 1987

    NASA Technical Reports Server (NTRS)

    Johnson, Howard C. (Editor)

    1988-01-01

    Recent advances in high-speed optical and electrooptic devices are discussed in reviews and reports. Topics examined include data quantification and related technologies, high-speed photographic applications and instruments, flash and cine radiography, and novel ultrafast methods. Also considered are optical streak technology, high-speed videographic and photographic equipment, and X-ray streak cameras. Extensive diagrams, drawings, graphs, sample images, and tables of numerical data are provided.

  11. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips.

    PubMed

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L; Wang, Qianxi X; Leppinen, David M; Walmsley, A Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power while the area and width of the cavitation cloud varies for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation. PMID:26934340

  12. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips

    PubMed Central

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L.; Wang, Qianxi X.; Leppinen, David M.; Walmsley, A. Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power while the area and width of the cavitation cloud varies for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation. PMID:26934340

  13. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips.

    PubMed

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L; Wang, Qianxi X; Leppinen, David M; Walmsley, A Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power while the area and width of the cavitation cloud varies for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation.

  14. High Speed Motion Neutron Radiography Of Dynamic Events

    NASA Astrophysics Data System (ADS)

    Robinson, A. H.; Bossi, R. H.; Barton, J. P.

    1983-03-01

    This paper describes the development of a technique that enables the neutron radiographic analysis of dynamic processes over a period lasting from one to ten milliseconds. The key to the technique is the use of a neutron pulse that is broad enough to span the duration of the brief event of interest and intense enough to permit recording of the results on a high-speed movie film at frame rates up to 10,000 frames/second. A system has been developed which utilizes the pulsing capability of the OSU TRIGA reactor. The system consists of the Oregon State University TRIGA reactor (pulsing to 3000 MW peak power), a neutron beam collimator, a scintillator neutron conversion screen coupled to an image intensifier, and a 16 mm high speed movie camera. The peak neutron flux incident at the object position is approximately 4 x 10^11 n/cm^2·s with a pulse full width at half maximum of 9 ms. The system has been operated in the range of 2000 to 10,000 frames/second and has provided high-speed-motion neutron radiographs for evaluation of the firing cycle of 7.62 mm munition rounds within a steel rifle barrel. The system has also been used to demonstrate the ability to produce neutron radiographic movies of two-phase flow.

  15. Characterization and Modeling of High Speed, High Resolution Focal Plane Arrays

    NASA Astrophysics Data System (ADS)

    Graeve, Thorsten

    The work presented in this dissertation examines the characterization and modeling of visible charge-coupled devices (CCDs). A theoretical model is discussed that represents the parallel clock register of a CCD as a lumped system of discrete resistances and capacitances. This model can be used to simulate the electrical performance of the clock register. From the simulation results the clock pulse degradation in the lossy transmission line model of the clock electrode can be determined. An upper limit is found to the parallel clock frequency at which reasonable pulse shapes are preserved. In addition, the model is used to find the current flow and the power dissipation within the clock electrodes. Through simulations, the total power dissipation on a high-speed, high-resolution CCD can be calculated and compared to theoretical values obtained from a conventional model. The experimental part of this dissertation covers the theory and application of test methodology for the characterization of high-speed, high-resolution CCDs. Both standard and novel techniques for CCD evaluation are discussed, covering all standard figures-of-merit such as read noise, full-well capacity, dynamic range, conversion gain, charge transfer efficiency, MTF, quantum efficiency, non-uniformity, dark current, linearity and lag. This chapter is followed by a discussion of the test camera hardware and software that is used to develop characterization techniques and apply them to specific devices. Finally, the characterization results from applying these techniques to the English Electric Valve (EEV) CCD13 are presented. This device is a 512 by 512 pixel, 8-output, three-phase, full-frame CCD that was designed for readout periods of less than 2 ms. It has been characterized at data rates up to 1 MHz, resulting in video acquisition of 128 by 64 pixel subarrays at 100 frames per second. The results show that both experimental characterization and theoretical modeling are two important aspects of
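
    The lumped-RC view of the parallel clock register mentioned in this abstract can be sketched numerically as below. The segment count, element values, and time step are illustrative assumptions, not parameters of the EEV CCD13.

```python
# Minimal sketch of a lumped-RC model of a CCD parallel clock electrode: the
# electrode is split into N series-R / shunt-C segments and the step response
# is integrated with forward Euler to show clock-pulse degradation along the
# line. Element values are illustrative assumptions, not device data.
import numpy as np

N = 50            # number of lumped segments along the electrode (assumed)
R = 20.0          # series resistance per segment, ohms (assumed)
C = 2e-12         # shunt capacitance per segment, farads (assumed)
dt = 1e-12        # integration time step, seconds
steps = 50000     # simulate 50 ns of the clock edge

v = np.zeros(N + 1)   # node voltages; node 0 is the driven end of the electrode
v[0] = 1.0            # unit clock step applied at t = 0

for _ in range(steps):
    i_series = (v[:-1] - v[1:]) / R        # current through each series resistor
    dvdt = np.zeros(N + 1)
    dvdt[1:] += i_series / C               # current arriving at each node
    dvdt[1:-1] -= i_series[1:] / C         # current leaving toward the next node
    v[1:] += dvdt[1:] * dt                 # node 0 stays clamped to the driver

print(f"far-end voltage after {steps * dt * 1e9:.0f} ns: {v[-1]:.3f} of the step")
```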

  16. Application of high-speed real-time holographic interferometry to dynamic testing

    NASA Astrophysics Data System (ADS)

    Li, Yulin; Ji, Zhongying; Wang, Zhengrong; Kong, Yue; Liu, Gaixia

    1989-06-01

    Results of an application experiment of high-speed real-time holographic interferometry in dynamic nondestructive testing are discussed. Appropriate laser equipment, holographic interferometry, and a high-speed camera were combined to form a complete photographic recording system for capturing the changing speed and spatial distribution of the optical path-length difference of dynamic events. The principle of real-time holographic interference and the criterion for interference fringes are analyzed. Exposure time, photographic frequency, image amplification, and an appropriate camera are selected based on this analysis. The experimental equipment and results analysis cover the combustion of gunpowder and priming powder, and electric arc welding.

  17. Aerodynamics of High-Speed Trains

    NASA Astrophysics Data System (ADS)

    Schetz, Joseph A.

    This review highlights the differences between the aerodynamics of high-speed trains and other types of transportation vehicles. The emphasis is on modern, high-speed trains, including magnetic levitation (Maglev) trains. Some of the key differences are derived from the fact that trains operate near the ground or a track, have much greater length-to-diameter ratios than other vehicles, pass close to each other and to trackside structures, are more subject to crosswinds, and operate in tunnels with entry and exit events. The coverage includes experimental techniques and results and analytical and numerical methods, concentrating on the most recent information available.

  18. High speed databus evaluation - Further work

    NASA Astrophysics Data System (ADS)

    Lee, Andrew J.

    Communication elements of avionic architectures and tools for assessing their capabilities are discussed, with emphasis placed on the most recent study aimed at understanding and using high-speed databuses. The latter include the Linear Token Passing Bus, the High Speed Ring Bus, and the Fiber Distributed Data Interface. Simulation techniques for evaluating the performance of communication system elements provide a cost-effective and time-efficient method of assessment. Further work is aimed at providing a single tool capable of simulating hardware and software functionality as well as the communication elements. This tool will be used to assess complete avionic architectures.

  19. Design of an Event-Driven, Random-Access, Windowing CCD-Based Camera

    NASA Astrophysics Data System (ADS)

    Monacos, S. P.; Lam, R. K.; Portillo, A. A.; Zhu, D. Q.; Ortiz, G. G.

    2003-11-01

    Commercially available cameras are not designed for a combination of single-frame and high-speed streaming digital video with real-time control of size and location of multiple regions-of-interest (ROIs). A message-passing paradigm is defined to achieve low-level camera control with high-level system operation. This functionality is achieved by asynchronously sending messages to the camera for event-driven operation, where an event is defined as image capture or pixel readout of a ROI, without knowledge of detailed in-camera timing. This methodology provides a random access, real-time, event-driven (RARE) camera for adaptive camera control and is well suited for target-tracking applications requiring autonomous control of multiple ROIs. This methodology additionally provides for reduced ROI readout time and higher frame rates as compared to a predecessor architecture [1] by avoiding external control intervention during the ROI readout process.
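
    A toy sketch of the message-passing idea described above follows: the host posts event messages (a capture or an ROI readout) asynchronously and the camera services them in order, without the host needing detailed in-camera timing. The message names and fields are hypothetical illustrations, not the actual command set of the camera.

```python
# Illustrative sketch of event-driven, message-passing camera control with
# multiple regions of interest. Message types and fields are hypothetical.
import queue
from dataclasses import dataclass

@dataclass
class CaptureFrame:
    exposure_us: int          # requested exposure time in microseconds

@dataclass
class ReadoutROI:
    x: int                    # region-of-interest origin and size, in pixels
    y: int
    width: int
    height: int

camera_mailbox: queue.Queue = queue.Queue()

def host_send(msg) -> None:
    """Host side: post an event message and return immediately (asynchronous)."""
    camera_mailbox.put(msg)

def camera_service_one() -> None:
    """Camera side: pop the next event and act on it (actions stubbed here)."""
    msg = camera_mailbox.get()
    if isinstance(msg, CaptureFrame):
        print(f"capture frame, exposure {msg.exposure_us} us")
    elif isinstance(msg, ReadoutROI):
        print(f"read out ROI at ({msg.x}, {msg.y}), size {msg.width}x{msg.height}")

# Example: capture once, then read out two ROIs for two tracked targets.
host_send(CaptureFrame(exposure_us=500))
host_send(ReadoutROI(x=100, y=80, width=64, height=64))
host_send(ReadoutROI(x=400, y=300, width=32, height=32))
for _ in range(3):
    camera_service_one()
```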

  20. Effects of chamber temperature and pressure on the characteristics of high speed diesel jets

    NASA Astrophysics Data System (ADS)

    Sittiwong, W.; Pianthong, K.; Seehanam, W.; Milton, B. E.; Takayama, K.

    2012-05-01

    This study is an investigation into the effects of temperature and pressure within a test chamber on the dynamic characteristics of injected supersonic diesel fuel jets. These jets were generated by the impact of a projectile driven by a horizontal single-stage powder gun. A high-speed video camera and a shadowgraph optical system were used to capture their dynamic characteristics. The test chamber had controlled air conditions of temperature and pressure up to 150 °C and 8.2 bar, respectively. It was found experimentally that, at the highest temperature, a maximum jet velocity of around 1,500 m/s was obtained. At this temperature a narrow, pointed jet appeared, while at the highest pressure a thick, blunt-headed jet was obtained. Strong shock waves were generated at the jet head in both cases. For analytical prediction, equations for jet tip velocity and penetration from the work of Dent and of Hiroyasu were employed to describe the dynamic characteristics of the experiments at a standard condition of 1 bar and 30 °C. These analytical predictions show reasonable agreement with the experimental results, with the experimental trend differing in slope because of the effects of pressure, density fluctuations in the injection, and the shock wave phenomena occurring during jet generation.
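
    For context, one commonly cited form of the Hiroyasu spray-tip penetration correlation referred to above is sketched below. The exact equations and constants the authors used are not given in this abstract, so the form and the input values here are assumptions for illustration only.

```python
# Sketch of a commonly cited form of the Hiroyasu(-Arai) spray penetration
# correlation; this is an assumption about the form referenced above, and the
# numerical inputs are illustrative placeholders, not the test conditions.
import math

def penetration(t, dP, rho_l, rho_a, d0):
    """Spray-tip penetration (m) at time t (s) for pressure drop dP (Pa), liquid
    density rho_l, ambient gas density rho_a (kg/m^3), and hole diameter d0 (m)."""
    t_break = 28.65 * rho_l * d0 / math.sqrt(rho_a * dP)    # breakup time
    if t < t_break:
        return 0.39 * math.sqrt(2.0 * dP / rho_l) * t        # before breakup
    return 2.95 * (dP / rho_a) ** 0.25 * math.sqrt(d0 * t)   # after breakup

# Illustrative numbers only: 50 MPa pressure drop, 0.2 mm hole, 1 ms after start.
print(f"{penetration(t=1e-3, dP=50e6, rho_l=830.0, rho_a=9.6, d0=0.2e-3):.3f} m")
```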

  1. CCD high-speed videography system with new concepts and techniques

    NASA Astrophysics Data System (ADS)

    Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

    1997-05-01

    A novel CCD high-speed videography system with brand-new concepts and techniques was recently developed by Zhejiang University. The system can send a series of short flash pulses to the moving object. All of the parameters, such as flash number, flash duration, flash interval, flash intensity, and flash color, can be controlled as needed by the computer. A series of moving-object images frozen by the flash pulses, carrying information about the moving object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be frozen, recognized, and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals, or written to CD. The highest videography frequency is 30,000 images per second. The shortest image freezing time is several microseconds. The system has been applied to a wide range of fields, including energy, chemistry, medicine, biological engineering, aerodynamics, explosion, multi-phase flow, mechanics, vibration, athletic training, weapon development, and national defense engineering. It can also be used on production lines to carry out online, real-time monitoring and control.

  2. A high-speed hydroplane accident.

    PubMed

    Flaherty, G N

    1975-03-29

    This report records the investigation into a high-speed hydroplane accident in which the driver died. He was ejected head first into the water at 117 to 126 ft/sec (80 to 85 mph), suffering brain damage and a fractured skull. Suggestions are made to minimize the effects of these inevitable crashes. PMID:1143139

  3. High speed hydrogen/graphite interaction

    NASA Technical Reports Server (NTRS)

    Kelly, A. J.; Hamman, R.; Sharma, O. P.; Harrje, D. T.

    1974-01-01

    Various aspects of a research program on high speed hydrogen/graphite interaction are presented. Major areas discussed are: (1) theoretical predictions of hydrogen/graphite erosion rates; (2) high temperature, nonequilibrium hydrogen flow in a nozzle; and (3) molecular beam studies of hydrogen/graphite erosion.

  4. Aerodynamic design on high-speed trains

    NASA Astrophysics Data System (ADS)

    Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li

    2016-04-01

    Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes conception design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.

  5. Italian High-speed Airplane Engines

    NASA Technical Reports Server (NTRS)

    Bona, C F

    1940-01-01

    This paper presents an account of Italian high-speed engine designs. The tests were performed on the Fiat AS6 engine, and all components of that engine are discussed from cylinders to superchargers as well as the test set-up. The results of the bench tests are given along with the performance of the engines in various races.

  6. High-speed data word monitor

    NASA Technical Reports Server (NTRS)

    Wirth, M. N.

    1975-01-01

    Small, portable, self-contained device provides high-speed display of bit pattern or any selected portion of transmission, can suppress filler patterns so that display is not updated, and can freeze display so that specific event may be observed in detail.

  7. Compressive high speed flow microscopy with motion contrast (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bosworth, Bryan; Stroud, Jasper R.; Tran, Dung N.; Tran, Trac D.; Chin, Sang; Foster, Mark A.

    2016-03-01

    High-speed continuous imaging systems are constrained by analog-to-digital conversion, storage, and transmission. However, real video signals of objects such as microscopic cells and particles require only a few percent or less of the full video bandwidth for high fidelity representation by modern compression algorithms. Compressed Sensing (CS) is a recent influential paradigm in signal processing that builds real-time compression into the acquisition step by computing inner products between the signal of interest and known random waveforms and then applying a nonlinear reconstruction algorithm. Here, we extend the continuous high-rate photonically-enabled compressed sensing (CHiRP-CS) framework to acquire motion contrast video of microscopic flowing objects. We employ chirp processing in optical fiber and high-speed electro-optic modulation to produce ultrashort pulses, each with a unique pseudorandom binary sequence (PRBS) spectral pattern with 325 features per pulse at the full laser repetition rate (90 MHz). These PRBS-patterned pulses serve as random structured illumination inside a one-dimensional (1D) spatial disperser. By multiplexing the PRBS patterns with a user-defined repetition period, the difference signal y_i = φ_i (x_i − x_{i−τ}) can be computed optically with balanced detection, where x is the image signal, φ_i is the PRBS pattern, and τ is the repetition period of the patterns. Two-dimensional (2D) image reconstruction via iterative alternating minimization to find the best locally-sparse representation yields an image of the edges in the flow direction, corresponding to the spatial and temporal 1D derivative. This provides both a favorable representation for image segmentation and a sparser representation for many objects, which can improve image compression.
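
    The differential measurement y_i = φ_i (x_i − x_{i−τ}) can be mimicked numerically as in the sketch below. The use of random ±1 patterns, the pattern period, and the synthetic scene are illustrative assumptions, not the actual PRBS patterns or data of the CHiRP-CS system.

```python
# Numerical sketch of the motion-contrast measurement described above: each
# pulse carries a pseudorandom pattern phi_i, and balanced detection yields
# y_i = phi_i . (x_i - x_{i-tau}). Patterns, period, and scene are assumed.
import numpy as np

n_features = 325      # spectral features per pulse, as stated in the abstract
tau = 8               # pattern repetition period in pulses (assumed)
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(tau, n_features))   # one pattern per pulse

def measure(frame_now, frame_prev, pulse_index):
    """Optically computed difference projection for one pulse."""
    phi = patterns[pulse_index % tau]
    return float(phi @ (frame_now - frame_prev))

# A flowing object: a bright blob that shifts by one pixel between frames.
x_prev = np.zeros(n_features)
x_prev[100:110] = 1.0
x_now = np.roll(x_prev, 1)

y = [measure(x_now, x_prev, i) for i in range(tau)]
print(y)   # a static scene would give all zeros; motion produces the signal
```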

  8. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble

    PubMed Central

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size—a few microns in diameter—and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera were developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation with much lower costs than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  9. High-speed measurement of nozzle swing angle of rocket engine based on monocular vision

    NASA Astrophysics Data System (ADS)

    Qu, Yufu; Yang, Haijuan

    2015-02-01

    A nozzle angle measurement system based on monocular vision is proposed to achieve high-speed, non-contact angle measurement of a rocket engine nozzle. The measurement system consists of two illumination sources, a lens, a target board with spots, a high-speed camera, an image acquisition card, and a PC. The target board with spots was fixed on the end of the rocket engine nozzle. The image of the target board, which moved along with the swinging nozzle, was captured by the high-speed camera and transferred to the PC by the image acquisition card. A data processing algorithm was then used to obtain the swing angle of the engine nozzle. Experiments show that the accuracy of the swing angle measurement was 0.2° and the measurement frequency was up to 500 Hz.
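
    One possible way to recover a swing angle from images of a spotted target board with a single calibrated camera is sketched below using OpenCV's solvePnP. The spot layout, camera matrix, detected centroids, and the particular angle definition are placeholder assumptions, not the calibration or algorithm of the system described above.

```python
# Illustrative sketch: estimate the pose of a spotted target board from one
# frame with a calibrated monocular camera, then report a swing angle.
# All numbers below are placeholders, not the calibration or data of the paper.
import numpy as np
import cv2

# Known 3D spot positions on the target board (metres, board coordinate frame).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float64)

# Detected spot centroids in one high-speed frame (pixels) - placeholder values.
image_points = np.array([[620.0, 410.0],
                         [842.0, 405.0],
                         [848.0, 628.0],
                         [615.0, 633.0]], dtype=np.float64)

camera_matrix = np.array([[2200.0, 0.0, 640.0],
                          [0.0, 2200.0, 512.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)          # assume lens distortion already corrected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)         # rotation matrix of the board in the camera frame

# One possible swing-angle definition: rotation of the board about the camera
# y-axis relative to a head-on (zero-swing) orientation.
swing_deg = np.degrees(np.arctan2(R[2, 0], R[2, 2]))
print(f"solvePnP ok: {ok}, estimated swing angle: {swing_deg:.2f} degrees")
```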

  10. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble.

    PubMed

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size-a few microns in diameter-and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera were developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation with much lower costs than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  11. Pulse Detonation Engines for High Speed Flight

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    2002-01-01

    Revolutionary concepts in propulsion are required in order to achieve high-speed cruise capability in the atmosphere and for low cost reliable systems for earth to orbit missions. One of the advanced concepts under study is the air-breathing pulse detonation engine. Additional work remains in order to establish the role and performance of a PDE in flight applications, either as a stand-alone device or as part of a combined cycle system. In this paper, we shall offer a few remarks on some of these remaining issues, i.e., combined cycle systems, nozzles and exhaust systems and thrust per unit frontal area limitations. Currently, an intensive experimental and numerical effort is underway in order to quantify the propulsion performance characteristics of this device. In this paper, we shall highlight our recent efforts to elucidate the propulsion potential of pulse detonation engines and their possible application to high-speed or hypersonic systems.

  12. High Speed Research Program Sonic Fatigue

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A. (Technical Monitor); Beier, Theodor H.; Heaton, Paul

    2005-01-01

    The objective of this sonic fatigue summary is to provide major findings and technical results of studies, initiated in 1994, to assess sonic fatigue behavior of structure that is being considered for the High Speed Civil Transport (HSCT). High Speed Research (HSR) program objectives in the area of sonic fatigue were to predict inlet, exhaust and boundary layer acoustic loads; measure high cycle fatigue data for materials developed during the HSR program; develop advanced sonic fatigue calculation methods to reduce required conservatism in airframe designs; develop damping techniques for sonic fatigue reduction where weight effective; develop wing and fuselage sonic fatigue design requirements; and perform sonic fatigue analyses on HSCT structural concepts to provide guidance to design teams. All goals were partially achieved, but none were completed due to the premature conclusion of the HSR program. A summary of major program findings and recommendations for continued effort are included in the report.

  13. Friction in high-speed impact experiments

    NASA Astrophysics Data System (ADS)

    Pelak, Robert A.; Rightley, Paul; Hammerberg, J. E.

    2000-04-01

    The physical interactions at the contact interface between two metals moving relative to one another are not well understood, particularly when the relative velocity between the bodies becomes a significant fraction of the sound speed in either material. Our goal is to characterize the interfacial dynamics occurring between two metal surfaces sliding at high loads (up to 300 kbar) and at high speeds (greater than 100 m/s). We are developing a technique where a high-speed spinning projectile is fired from a rifled gun at a rod instrumented with electrical resistance strain gauges for measuring both longitudinal and torsional strain waves. The observed traces, in conjunction with computer simulations, are used to estimate the normal and tangential force components at the interface to produce an estimate of the coefficient of friction. A preliminary estimate for a copper/steel interface is presented.

  14. High-speed massively parallel scanning

    DOEpatents

    Decker, Derek E.

    2010-07-06

    A new technique for recording a series of images of a high-speed event (such as, but not limited to: ballistics, explosives, laser induced changes in materials, etc.) is presented. Such technique(s) makes use of a lenslet array to take image picture elements (pixels) and concentrate light from each pixel into a spot that is much smaller than the pixel. This array of spots illuminates a detector region (e.g., film, as one embodiment) which is scanned transverse to the light, creating tracks of exposed regions. Each track is a time history of the light intensity for a single pixel. By appropriately configuring the array of concentrated spots with respect to the scanning direction of the detection material, different tracks fit between pixels and sufficient lengths are possible which can be of interest in several high-speed imaging applications.

  15. High speed printing with polygon scan heads

    NASA Astrophysics Data System (ADS)

    Stutz, Glenn

    2016-03-01

    To reduce, and in many cases eliminate, the costs associated with high-volume printing of consumer and industrial products, this paper investigates and validates the use of the new generation of high-speed pulse-on-demand (POD) lasers in concert with high-speed (HS) polygon scan heads (PSH). Associated costs include consumables such as printing ink and nozzles, provisioning labor, and maintenance and repair expense, as well as the reduction of printing lines made possible by high throughput. Applicable targets that were investigated include direct printing on plastics, printing on paper/cardboard, and printing on labels. Market segments include consumer products (CPG), medical and pharmaceutical products, universal ID (UID), and industrial products. The POD lasers employed cover the UV (355 nm), green (532 nm), and IR (1064 nm) wavelengths, operating within the repetition-rate range of 180 to 250 kHz.

  16. High speed receiver for capsule endoscope.

    PubMed

    Woo, S H; Yoon, K W; Moon, Y K; Lee, J H; Park, H J; Kim, T W; Choi, H C; Won, C H; Cho, J H

    2010-10-01

    In this study, a high-speed receiver for a capsule endoscope was proposed and implemented. The proposed receiver could receive data at 20 Mbps, which is sufficient to receive images with a higher resolution than conventional receivers. The receiver used the 1.2 GHz band to receive the radio frequency (RF) signal and demodulated it to an intermediate frequency (IF) stage (150 MHz). The demodulated signal was amplified, filtered, and under-sampled by a high-speed analog-to-digital converter (ADC). In order to decode the under-sampled data in real time, a simple frequency detection algorithm was selected and implemented using an FPGA. The implemented system could receive 20 Mbps data.
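
    The abstract does not spell out the frequency detection algorithm, so the sketch below shows one simple possibility of the kind that maps well to an FPGA: counting zero crossings over a bit period of the under-sampled IF data and thresholding the apparent frequency. The sample rate, bit rate, and tone frequencies are illustrative assumptions, not the receiver's actual parameters.

```python
# Illustrative zero-crossing frequency detector for under-sampled FSK data;
# this is an assumed example, not the algorithm implemented in the receiver.
import numpy as np

def detect_bit(samples: np.ndarray, fs: float, f_threshold: float) -> int:
    """Decide one bit from a block of samples by estimating its frequency."""
    signs = np.signbit(samples)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])    # sign changes
    est_freq = crossings * fs / (2.0 * len(samples))          # ~2 crossings per cycle
    return int(est_freq > f_threshold)

# Synthetic example (placeholder numbers): 40 MS/s sampling, 1 Mb/s bit rate,
# FSK tones aliased to 3 MHz and 6 MHz after under-sampling.
fs = 40e6
samples_per_bit = int(fs / 1e6)
t = np.arange(samples_per_bit) / fs
bits = [detect_bit(np.sin(2 * np.pi * f * t), fs, f_threshold=4.5e6)
        for f in (3e6, 6e6, 3e6)]
print(bits)   # -> [0, 1, 0]
```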

  17. High-Speed Granular Chute Flows

    NASA Astrophysics Data System (ADS)

    McElwaine, J.

    2014-12-01

    Accurate models for high-speed granular flows are critical for understanding long-runout landslides and rockfalls. However, reproducible experimental data are extremely limited and are mostly available only for steady-state flows on moderate inclinations. We report on experiments over a much greater range of slope angles (30-50 degrees) and flow depths (4-130 particle diameters), with up to 20 kg/s of sand flowing steadily. The data suggest that friction can be much larger than the μ(I) rheology or kinetic theories predict, and that there may be constant-velocity states above the angle at which h_stop vanishes. We show similar high-speed steady flows at angles up to 50 degrees in discrete element simulations and discuss how these can be understood theoretically.
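
    For reference, the μ(I) rheology mentioned above is commonly written as μ(I) = μ_s + (μ_2 − μ_s)/(1 + I_0/I) with inertial number I = γ̇ d / sqrt(P/ρ). The sketch below evaluates this standard form; the parameter values are typical illustrative numbers, not fits to these chute-flow experiments.

```python
# Sketch of the commonly used mu(I) friction law referred to above. Parameter
# values are typical illustrative numbers, not fits to these experiments.
import math

def inertial_number(gamma_dot, d, P, rho):
    """Inertial number I from shear rate, grain diameter, pressure, grain density."""
    return gamma_dot * d / math.sqrt(P / rho)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.28):
    """Effective friction coefficient mu(I) = mu_s + (mu_2 - mu_s) / (1 + I0 / I)."""
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

I = inertial_number(gamma_dot=50.0, d=5e-4, P=200.0, rho=2500.0)
print(f"I = {I:.3f}, mu(I) = {mu_of_I(I):.3f}")
```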

  18. High-speed AFM for Studying Dynamic Biomolecular Processes

    NASA Astrophysics Data System (ADS)

    Ando, Toshio

    2008-03-01

    Biological molecules show their vital activities only in aqueous solutions. It had long been a dream in the biological sciences to directly observe biological macromolecules (protein, DNA) at work under physiological conditions, because such observation is a straightforward route to understanding their dynamic behaviors and functional mechanisms. Optical microscopy does not have sufficient spatial resolution, and electron microscopy is not applicable to in-liquid samples. Atomic force microscopy (AFM) can visualize molecules in liquids at high resolution, but its imaging rate was too low to capture dynamic biological processes. This slow imaging rate arises because AFM employs mechanical probes (cantilevers) and mechanical scanners to detect the sample height at each pixel. It is quite difficult to quickly move a mechanical device of macroscopic size with sub-nanometer accuracy without producing unwanted vibrations. It is also difficult to maintain the delicate contact between a probe tip and fragile samples. Two key techniques are required to realize high-speed AFM for biological research: fast feedback control to maintain a weak tip-sample interaction force, and a technique to suppress mechanical vibrations of the scanner. Various efforts have been carried out in the past decade to realize high-speed AFM. The current high-speed AFM can capture video images at 30-60 frames/s for a scan range of 250 nm and 100 scan lines, without significantly disturbing weak biomolecular interactions. Our recent studies demonstrated that this new microscope can reveal biomolecular processes such as myosin V walking along actin tracks and the association/dissociation dynamics of chaperonin GroEL-GroES, which occurs in a negatively cooperative manner. The capacity for nanometer-scale visualization of dynamic processes in liquids will transform biological research. In addition, it will open a new way to study dynamic chemical/physical processes of various phenomena that occur at liquid-solid interfaces.

  19. Data Capture Technique for High Speed Signaling

    DOEpatents

    Barrett, Wayne Melvin; Chen, Dong; Coteus, Paul William; Gara, Alan Gene; Jackson, Rory; Kopcsay, Gerard Vincent; Nathanson, Ben Jesse; Vranas, Paylos Michael; Takken, Todd E.

    2008-08-26

    A data capture technique for high speed signaling to allow for optimal sampling of an asynchronous data stream. This technique allows for extremely high data rates and does not require that a clock be sent with the data as is done in source synchronous systems. The present invention also provides a hardware mechanism for automatically adjusting transmission delays for optimal two-bit simultaneous bi-directional (SiBiDi) signaling.

  20. Turbulence modeling for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Chandra, Suresh

    1993-01-01

    The following grant objectives were delineated in the proposal to NASA: to offer course work in computational fluid dynamics (CFD) and related areas to enable mechanical engineering students at North Carolina A&T State University (N.C. A&TSU) to pursue M.S. studies in CFD, and to enable students and faculty to engage in research in high speed compressible flows. Since no CFD-related activity existed at N.C. A&TSU before the start of the NASA grant period, training of students in the CFD area and initiation of research in high speed compressible flows were proposed as the key aspects of the project. To that end, graduate level courses in CFD, boundary layer theory, and fluid dynamics were offered. This effort included initiating a CFD course for graduate students. Also, research work was performed on studying compressibility effects in high speed flows. Specifically, a modified compressible dissipation model, which included a fourth order turbulent Mach number term, was incorporated into the SPARK code and verified for the air-air mixing layer case. The results obtained for this case were compared with a wide variety of experimental data to discern the trends in the mixing layer growth rates with varying convective Mach numbers. Comparison of the predictions of the study with the results of several analytical models was also carried out. The details of the research study are described in the publication entitled 'Compressibility Effects in Modeling Turbulent High Speed Mixing Layers,' which is attached to this report.

  1. Turbulence modeling for high speed compressible flows

    NASA Astrophysics Data System (ADS)

    Chandra, Suresh

    1993-08-01

    The following grant objectives were delineated in the proposal to NASA: to offer course work in computational fluid dynamics (CFD) and related areas to enable mechanical engineering students at North Carolina A&T State University (N.C. A&TSU) to pursue M.S. studies in CFD, and to enable students and faculty to engage in research in high speed compressible flows. Since no CFD-related activity existed at N.C. A&TSU before the start of the NASA grant period, training of students in the CFD area and initiation of research in high speed compressible flows were proposed as the key aspects of the project. To that end, graduate level courses in CFD, boundary layer theory, and fluid dynamics were offered. This effort included initiating a CFD course for graduate students. Also, research work was performed on studying compressibility effects in high speed flows. Specifically, a modified compressible dissipation model, which included a fourth order turbulent Mach number term, was incorporated into the SPARK code and verified for the air-air mixing layer case. The results obtained for this case were compared with a wide variety of experimental data to discern the trends in the mixing layer growth rates with varying convective Mach numbers. Comparison of the predictions of the study with the results of several analytical models was also carried out. The details of the research study are described in the publication entitled 'Compressibility Effects in Modeling Turbulent High Speed Mixing Layers,' which is attached to this report.

  2. High speed digital holographic interferometry for hypersonic flow visualization

    NASA Astrophysics Data System (ADS)

    Hegde, G. M.; Jagdeesh, G.; Reddy, K. P. J.

    2013-06-01

    Optical imaging techniques have played a major role in understanding the flow dynamics of a variety of fluid flows, particularly in the study of hypersonic flows. Schlieren and shadowgraph techniques have been the flow diagnostic tools for the investigation of compressible flows for more than a century. However, these techniques provide only qualitative information about the flow field. Other optical techniques such as holographic interferometry and laser-induced fluorescence (LIF) have been used extensively for extracting quantitative information about high-speed flows. In this paper we present the application of the digital holographic interferometry (DHI) technique, integrated with a short-duration hypersonic shock tunnel facility having a 1 ms test time, for quantitative flow visualization. The dynamics of the flow fields at hypersonic/supersonic speeds around different test models are visualized with DHI using a high-speed digital camera (0.2 million fps). These visualization results are compared with schlieren visualization and CFD simulation results. Fringe analysis is carried out to estimate the density of the flow field.

  3. Analysis of high-speed digital phonoscopy pediatric images

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Harikrishnan; Donohue, Kevin D.; Patel, Rita R.

    2012-02-01

    The quantitative characterization of vocal fold (VF) motion can greatly enhance the diagnosis and treatment of speech pathologies. The recent availability of high-speed systems has created new opportunities to understand VF dynamics. This paper presents quantitative methods for analyzing VF dynamics with high-speed digital phonoscopy, with a focus on expected VF changes during childhood. A robust method for automatic VF edge tracking during phonation is introduced and evaluated against 4 expert human observers. Results from 100 test frames show a subpixel difference between the VF edges selected by the algorithm and the expert observers. Waveforms created from the VF edge displacement are used to create motion features with limited sensitivity to variations of camera resolution on the imaging plane. New features are introduced based on acceleration ratios at critical points over each phonation cycle, which have the potential for studying issues related to impact stress. A novel denoising and hybrid interpolation/extrapolation scheme is also introduced to reduce the impact of quantization errors and large sampling intervals relative to the phonation cycle. Features extracted from groups of 4 adults and 5 children show large differences for features related to asymmetry between the right and left folds and consistent differences for the impact acceleration ratio.
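
    The acceleration-ratio idea can be sketched as below on a synthetic displacement waveform. The particular ratio computed (peak closing deceleration over peak opening acceleration) and the synthetic signal are illustrative assumptions; the abstract does not define the feature in this detail.

```python
# Hedged sketch of an acceleration-ratio style feature from a vocal fold edge
# displacement waveform. The waveform, frame rate, and ratio definition are
# illustrative assumptions, not the authors' exact feature.
import numpy as np

fs = 4000.0                              # high-speed frame rate in Hz (assumed)
t = np.arange(0, 0.05, 1.0 / fs)         # 50 ms of synthetic phonation
displacement = np.maximum(np.sin(2 * np.pi * 220.0 * t), 0.0)  # toy VF edge signal

velocity = np.gradient(displacement, 1.0 / fs)
acceleration = np.gradient(velocity, 1.0 / fs)

# Ratio of the largest deceleration magnitude to the largest acceleration
# over the record, as one plausible cycle-level feature.
accel_ratio = np.abs(acceleration.min()) / acceleration.max()
print(f"acceleration ratio: {accel_ratio:.2f}")
```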

  4. Testing of high speed network components

    SciTech Connect

    Wing, W.R.

    1997-06-30

    At the time of the start of this project, a battle was being fought between computer networking technologies and telephone networking technologies. The telecommunications industry wanted to standardize on Asynchronous Transfer Mode (ATM) as the technology of choice for carrying all cross-country traffic. The computer industry wanted to use Packet Transfer Mode (PTM). The project had several goals, some unspoken. At the highest, most obvious level, the project goals were to test the high-speed components being developed by the computer technology industry. However, in addition, both industrial partners were having trouble finding markets for the high-speed networking technology they were developing and deploying. Thus, a part of the project was to demonstrate applications developed at Oak Ridge that would stretch the limits of the network, and thus demonstrate the utility of high-speed networks. Finally, an unspoken goal of the computer technology industry was to convince the telecommunications industry that packet switching was superior to cell switching. Conversely, the telecommunications industry hoped to see the computer technology industry's packet switch fail to perform in a real-world test. The project was terminated early due to the failure of one of the CRADA partners to deliver a needed component.

  5. Are traditional methods of determining nest predators and nest fates reliable? An experiment with Wood Thrushes (Hylocichla mustelina) using miniature video cameras

    USGS Publications Warehouse

    Williams, Gary E.; Wood, P.B.

    2002-01-01

    We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998-2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted the fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.

  6. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  7. High-speed photography of the first hydrogen-bomb explosion

    SciTech Connect

    Brixner, B.

    1992-01-01

    Obtaining detailed photographs of the early stages of the first hydrogen bomb explosion in 1952 posed a number of problems. First, it was necessary to invent a continuous-access camera which could solve the problem that existing million-picture-per-second cameras were blind most of the time. The solution here was to alter an existing camera design so that two modified cameras could be mounted around a single high-speed rotating mirror. A second problem, acquiring the necessary lenses of precisely specified focal lengths, was solved by obtaining a large number of production lenses from war surplus salvage. A third hurdle to be overcome was to test the new camera at an A-bomb explosion. Finally, it was necessary to solve the almost impossible difficulty of building a safe camera shelter close to a megaton explosion. This paper describes the way these problems were solved. Unfortunately the successful pictures that were taken are still classified.

  8. High-speed photography of the first hydrogen-bomb explosion

    SciTech Connect

    Brixner, B.

    1992-09-01

    Obtaining detailed photographs of the early stages of the first hydrogen bomb explosion in 1952 posed a number of problems. First, it was necessary to invent a continuous-access camera which could solve the problem that existing million-picture-per-second cameras were blind most of the time. The solution here was to alter an existing camera design so that two modified cameras could be mounted around a single high-speed rotating mirror. A second problem, acquiring the necessary lenses of precisely specified focal lengths, was solved by obtaining a large number of production lenses from war surplus salvage. A third hurdle to be overcome was to test the new camera at an A-bomb explosion. Finally, it was necessary to solve the almost impossible difficulty of building a safe camera shelter close to a megaton explosion. This paper describes the way these problems were solved. Unfortunately the successful pictures that were taken are still classified.

  9. High-speed photography of the first hydrogen-bomb explosion

    NASA Astrophysics Data System (ADS)

    Brixner, Berlyn

    1993-01-01

    Obtaining detailed photographs of the early stages of the first hydrogen bomb explosion in 1952 posed a number of problems. First, it was necessary to invent a continuous-access camera which could solve the problem that existing million-picture-per-second cameras were blind most of the time. The solution here was to alter an existing camera design so that two modified cameras could be mounted around a single high-speed rotating mirror. A second problem, acquiring the necessary lenses of precisely specified focal lengths, was solved by obtaining a large number of production lenses from war surplus salvage. A third hurdle to be overcome was to test the new camera at an A-bomb explosion. Finally, it was necessary to solve the almost impossible difficulty of building a safe camera shelter close to a megaton explosion. This paper describes the way these problems were solved. Unfortunately the successful pictures that were taken are still classified.

  10. Architectures and applications of high-speed vision

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro; Oku, Hiromasa; Ishikawa, Masatoshi

    2014-11-01

    With the progress made in high-speed imaging technology, image processing systems that can process images at high frame rates, as well as their applications, are expected. In this article, we examine architectures for high-speed vision systems, and also dynamic image control, which can realize high-speed active optical systems. In addition, we also give an overview of some applications in which high-speed vision is used, including man-machine interfaces, image sensing, interactive displays, high-speed three-dimensional sensing, high-speed digital archiving, microvisual feedback, and high-speed intelligent robots.

  11. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  12. Multiplex acquisition approach for high speed 3D measurements with a chromatic confocal microscope

    NASA Astrophysics Data System (ADS)

    Taphanel, Miro; Zink, Ralf; Längle, Thomas; Beyerer, Jürgen

    2015-05-01

    A technical realization of a multispectral camera is proposed by multiplexing a light source with six different spectra. A monochrome line scan camera with six pixel rows is used as the detector. The special feature of this acquisition approach is its high-speed capability: the scan speed is as high as the frame rate of the line scan camera and is not affected by the multiplexing. As an application, a chromatic confocal microscope was built. From a data acquisition perspective, up to 284 million 3D points per second can be measured. Real-time signal processing is also proposed.

  13. All aboard for high-speed rail

    SciTech Connect

    Herman, D.

    1996-09-01

    A sleek, bullet-nosed train whizzing across the countryside is a fairly common sight in many nations. Since the Train a Grande Vitesse (TGV)--the record-setting "train with great speed"--was introduced in France in 1981, Germany, Japan, and other countries have joined the high-speed club. In addition, the Eurostar passenger train, which travels between Great Britain and France through the Channel Tunnel, can move at 186 miles per hour once it reaches French tracks. Despite the technology's growth elsewhere, rapid rail travel has not been seen on US shores beyond a few test runs by various manufacturers. Before the end of the century, however, American train spotters will finally be able to see some very fast trains here too. In March, Washington, DC-based Amtrak announced the purchase of 18 American Flyer high-speed train sets for the Northeast Corridor, which stretches from Boston through New York to the nation's capital. Furthermore, Florida will get its own system by 2004, and other states are now taking a look at the technology. The American Flyer--designed by Montreal-based Bombardier and TGV manufacturer GEC Alsthom Transport in Paris--should venture onto US rails by 1999. Traveling at up to 150 miles per hour, the American Flyer will cut the New York-Boston run from 4 1/2 hours to 3 hours and reduce New York-Washington trip time from 3 hours to less than 2 3/4. Amtrak hopes the new trains and better times will earn it a greater share of travelers from air shuttles and perhaps from Interstate 95. This article describes how technologies that tilt railcars and propel the world's fastest trains will be merged into one train set for the American Flyer, Amtrak's first trip along high-speed rails.

  14. High-speed wavelength-swept lasers

    NASA Astrophysics Data System (ADS)

    Hsu, Kevin

    2006-05-01

    High-speed wavelength-swept lasers capable of providing wide frequency chirp and flexible temporal waveforms could enable numerous advanced functionalities for defense and security applications. Powered by high spectral intensity at rapid sweep rates across a wide wavelength range in each of the 1060nm, 1300nm, and 1550nm spectral windows, these swept-laser systems have demonstrated real-time monitoring and superior signal-to-noise ratio measurements in optical frequency domain imaging, fiber-optic sensor arrays, and near-IR spectroscopy. These same capabilities show promising potential in laser radar and remote sensing applications. The core of the high-speed swept laser incorporates a semiconductor gain module and a high-performance fiber Fabry-Perot tunable filter (FFP-TF) to provide rapid wavelength scanning operations. This unique design embodies the collective advantages of the semiconductor amplifier's broad gain-bandwidth with direct modulation capability, and the FFP-TF's wide tuning ranges (>200nm), high finesse (1000 to 10,000), low loss (<3dB), and fast scan rates reaching 20kHz. As a result, the laser can sweep beyond 100nm in 25μs, output a scanning peak power near the mW level, and exhibit an excellent peak signal-to-spontaneous-emission ratio of >80dB in static mode. When configured as a seed laser followed by post-amplification, the swept spectrum and power can be optimized for Doppler ranging and remote sensing applications. Furthermore, when combined with a dispersive element, the wavelength sweep can be converted into high-speed and wide-angle spatial scanning without moving parts.
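
    The quoted sweep figures can be checked with a short calculation (a sketch whose only inputs are the numbers given above): a 100nm sweep completed in 25μs corresponds to a tuning speed of 4nm per microsecond, and at a 20kHz scan rate the sweep occupies half of each 50μs period.

      sweep_range_nm = 100.0
      sweep_time_us = 25.0
      scan_rate_hz = 20e3

      nm_per_us = sweep_range_nm / sweep_time_us               # 4 nm per microsecond
      duty_cycle = (sweep_time_us * 1e-6) * scan_rate_hz       # fraction of each period spent sweeping
      print(f"tuning speed: {nm_per_us:.0f} nm/us, duty cycle at 20 kHz: {duty_cycle:.0%}")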

  15. Aeroacoustic sources of high speed maglev trains

    NASA Astrophysics Data System (ADS)

    Hanson, Carl E.

    This paper summarizes information from several studies regarding aeroacoustic sources of high-speed magnetically levitated trains (maglev). At low speed, the propulsion system, auxiliary equipment, and mechanical/structural radiation are the predominant sources of noise from maglev. At high speed, aeroacoustic sources dominate the noise. Noise from airflow over a train (aeroacoustic noise) is generated by flow separation and reattachment at the front, the turbulent boundary layer over the entire surface of the train, flow interactions with edges and appendages, and flow interactions between moving and stationary components of the system. This paper discusses aeroacoustic mechanisms at the nose, mechanisms related to the turbulent boundary layer, and edge mechanisms.

  16. High-speed multispectral confocal imaging

    NASA Astrophysics Data System (ADS)

    Carver, Gary E.; Locknar, Sarah A.; Morrison, William A.; Farkas, Daniel L.

    2013-02-01

    A new approach for generating high-speed multispectral images has been developed. The central concept is that spectra can be acquired for each pixel in a confocal spatial scan by using a fast spectrometer based on optical fiber delay lines. This concept merges fast spectroscopy with standard spatial scanning to create datacubes in real time. The spectrometer is based on a serial array of reflecting spectral elements, delay lines between these elements, and a single element detector. The spatial, spectral, and temporal resolution of the instrument is described, and illustrated by multispectral images of laser-induced autofluorescence in biological tissues.
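
    The delay-line idea can be made concrete with a small sketch (conceptual only, not the authors' implementation; the band centres, delays, and sampling period are illustrative assumptions): each spectral band is reflected at a different point along the fiber chain, so it reaches the single-element detector with a distinct, known delay, and the detector's time trace within one pixel dwell can be re-binned into a spectrum.

      import numpy as np

      bands_nm = [450, 500, 550, 600, 650]          # assumed spectral bands
      delays_ns = [0.0, 5.0, 10.0, 15.0, 20.0]      # assumed cumulative fiber delays per band
      sample_period_ns = 1.0                        # assumed detector sampling period

      # Simulated detector trace for one pixel: each band contributes a pulse at its delay.
      true_spectrum = np.array([0.2, 0.9, 0.5, 0.7, 0.1])
      trace = np.zeros(32)
      for amp, d in zip(true_spectrum, delays_ns):
          trace[int(d / sample_period_ns)] += amp

      # Recover the per-pixel spectrum by reading the trace at the known delays.
      recovered = [trace[int(d / sample_period_ns)] for d in delays_ns]
      for wl, val in zip(bands_nm, recovered):
          print(f"{wl} nm : {val:.2f}")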

  17. High-speed multispectral confocal biomedical imaging

    PubMed Central

    Carver, Gary E.; Locknar, Sarah A.; Morrison, William A.; Krishnan Ramanujan, V.; Farkas, Daniel L.

    2014-01-01

    A new approach for generating high-speed multispectral confocal images has been developed. The central concept is that spectra can be acquired for each pixel in a confocal spatial scan by using a fast spectrometer based on optical fiber delay lines. This approach merges fast spectroscopy with standard spatial scanning to create datacubes in real time. The spectrometer is based on a serial array of reflecting spectral elements, delay lines between these elements, and a single element detector. The spatial, spectral, and temporal resolution of the instrument is described and illustrated by multispectral images of laser-induced autofluorescence in biological tissues. PMID:24658777

  18. The Hubble Space Telescope high speed photometer

    NASA Technical Reports Server (NTRS)

    Vancitters, G. W., Jr.; Bless, R. C.; Dolan, J. F.; Elliot, J. L.; Robinson, E. L.; White, R. L.

    1988-01-01

    The Hubble Space Telescope will provide the opportunity to perform precise astronomical photometry above the disturbing effects of the atmosphere. The High Speed Photometer is designed to provide the observatory with a stable, precise photometer with wide dynamic range, broad wavelength coverage, time resolution in the microsecond region, and polarimetric capability. Here, the scientific requirements for the instrument are examined, the unique design features of the photometer are explored, and the improvements to be expected over the performance of ground-based instruments are projected.
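
    A simple Poisson estimate (illustrative only; the count rates are assumed, not taken from the paper) shows why microsecond time resolution demands bright targets: the photon-limited signal-to-noise ratio in a bin of width dt at a detected count rate R is roughly sqrt(R * dt).

      import math

      for rate_hz in (1e4, 1e6, 1e8):       # assumed detected photon count rates
          for dt_s in (1e-6, 1e-3):         # 1 microsecond and 1 millisecond bins
              snr = math.sqrt(rate_hz * dt_s)
              print(f"R = {rate_hz:.0e} counts/s, bin = {dt_s:.0e} s -> SNR ~ {snr:.1f}")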

  19. Pulsed laser triggered high speed microfluidic switch

    NASA Astrophysics Data System (ADS)

    Wu, Ting-Hsiang; Gao, Lanyu; Chen, Yue; Wei, Kenneth; Chiou, Pei-Yu

    2008-10-01

    We report a high-speed microfluidic switch capable of achieving a switching time of 10 μs. The switching mechanism is realized by exciting dynamic vapor bubbles with focused laser pulses in a microfluidic polydimethylsiloxane (PDMS) channel. The bubble expansion deforms the elastic PDMS channel wall and squeezes the adjacent sample channel to control its fluid and particle flows, as captured by the time-resolved imaging system. A switching of polystyrene microspheres in a Y-shaped channel has also been demonstrated. This ultrafast laser-triggered switching mechanism has the potential to advance the sorting speed of state-of-the-art microscale fluorescence-activated cell sorting devices.
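
    The quoted switching time translates directly into an upper bound on sorting throughput (a rough estimate, not a figure from the paper): if one switching event takes about 10 μs, the switch can act at most on the order of 100,000 times per second.

      switch_time_s = 10e-6                          # switching time quoted in the abstract
      max_events_per_s = 1.0 / switch_time_s         # ideal upper bound, ignoring dead time
      print(f"upper bound on switching events: {max_events_per_s:,.0f} per second")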

  20. High Speed SPM of Functional Materials

    SciTech Connect

    Huey, Bryan D.

    2015-08-14

    The development and optimization of applications comprising functional materials necessitate a thorough understanding of their static and dynamic properties and performance at the nanoscale. Leveraging High Speed SPM and concepts enabled by it, efficient measurements and maps with nanoscale spatial and nanosecond temporal resolution are uniquely feasible. This includes recent enhancements for topographic, conductivity, ferroelectric, and piezoelectric properties as originally proposed, as well as newly developed methods or improvements to AFM-based mechanical, friction, thermal, and photoconductivity measurements. The results of this work reveal fundamental mechanisms of operation and suggest new approaches for improving the ultimate speed and/or efficiency of data storage systems, magnetic-electric sensors, and solar cells.