Sample records for speed framing camera

  1. The application of high-speed photography in z-pinch high-temperature plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei

    2007-01-01

This invited paper discusses the application of high-speed photography to z-pinch high-temperature plasma diagnostics in recent years at the Northwest Institute of Nuclear Technology. The developments and applications of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera and an ultraviolet-visible spectrometer are introduced.

  2. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside the ROI. We have addressed this problem by building a camera system using a randomly addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
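The ROI-update step described above, re-centring each high-speed ROI on its target using the low-speed full-frame images, can be sketched as an intensity-centroid search around the last known position (a hypothetical illustration of the idea, not the authors' code; function and parameter names are ours):

```python
import numpy as np

def update_roi(full_frame, roi_center, roi_size=16, search=8):
    """Re-centre a high-speed ROI on the intensity centroid near its
    last known position, using a low-speed full-frame image (sketch of
    the paper's ROI-update idea; all names are hypothetical)."""
    r, c = roi_center
    half = roi_size // 2 + search
    r0, c0 = max(r - half, 0), max(c - half, 0)
    patch = full_frame[r0:r0 + 2 * half, c0:c0 + 2 * half].astype(float)
    total = patch.sum()
    if total == 0:
        return roi_center                  # nothing to track; keep old ROI
    rows, cols = np.indices(patch.shape)   # pixel coordinate grids
    cr = int(round((rows * patch).sum() / total)) + r0
    cc = int(round((cols * patch).sum() / total)) + c0
    return (cr, cc)

frame = np.zeros((64, 64))
frame[40, 42] = 10.0                       # a bright target near the old ROI
new_center = update_roi(frame, (38, 40))   # → (40, 42)
```

Between full-frame updates, only the small ROI is read out at high speed, which is how the system stays within its bandwidth budget.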

  3. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

A high-speed imaging facility is important and necessary for building a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light-beam splitting in the image space behind a long-focal-length lens, mainly consists of a lens-coupled gated image intensifier, a CCD camera and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images of 1024×1024 pixels each can be captured simultaneously by the camera. In addition, the camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pixel/s, which fully meets the measurement requirements of the Dragon-I LIA.

  4. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  5. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a carefully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
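The color crosstalk correction that separates the blue- and red-channel sub-images can be sketched as inverting a linear channel-mixing model (our simplified illustration; in practice the 2×2 mixing matrix would be calibrated, e.g. by imaging each optical path alone, and all names here are ours):

```python
import numpy as np

def separate_channels(color_img, crosstalk):
    """Undo red/blue colour crosstalk with a per-pixel linear model.
    color_img: H x W x 2 array of (red, blue) values; `crosstalk` is
    the 2x2 mixing matrix obtained from calibration (hypothetical)."""
    inv = np.linalg.inv(crosstalk)             # invert the mixing matrix
    flat = color_img.reshape(-1, 2) @ inv.T    # unmix every pixel
    return flat.reshape(color_img.shape)

# Example: 10% leakage of each channel into the other.
M = np.array([[1.0, 0.1],
              [0.1, 1.0]])
mixed = np.array([[[1.0, 0.1]]])   # a pure "red" pixel after mixing
pure = separate_channels(mixed, M)  # recovers ~[1.0, 0.0]
```

Each unmixed channel then feeds one view of the regular stereo-DIC pipeline.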

  6. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

This paper outlines the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabrück and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and interviews, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; its sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved since. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is in progress, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  7. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frames. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting the CCD storages, which record the video images, to the photodiodes of the individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over the conventional ultrahigh-speed camera. A problem with this arrangement was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using the beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  8. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  9. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

This volume presents papers grouped under the topics of advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for the ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, the use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  10. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
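The multi-hit correlation step, pairing ion spots on a camera frame with peaks on the PMT time-of-flight trace by their intensities, can be sketched as a rank-order match (a deliberately simplified stand-in for the paper's algorithm; the function and its names are ours):

```python
import numpy as np

def correlate_hits(spot_intensities, peak_heights):
    """Pair ion spots on one camera frame with TOF peaks from the PMT
    by matching their rank order: brightest spot to tallest peak, and
    so on (simplified sketch of the multi-hit correlation idea)."""
    spot_rank = np.argsort(spot_intensities)[::-1]   # brightest first
    peak_rank = np.argsort(peak_heights)[::-1]       # tallest first
    return list(zip(spot_rank, peak_rank))           # (spot idx, peak idx)

# Three hits in one laser shot: spot 1 is brightest, TOF peak 0 is tallest.
pairs = correlate_hits([0.2, 0.9, 0.5], [7.0, 3.0, 5.0])
# pairs == [(1, 0), (2, 2), (0, 1)]
```

Each pair then gives one ion a full (x, y, t) coordinate: position from the camera centroid, arrival time from the matched TOF peak.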

  11. Solid-state framing camera with multiple time frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  12. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low-cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
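The spectral-shuttering readout, three color-separated exposures per 3CCD frame, can be sketched as a simple demultiplexing step (our illustrative sketch; it ignores the crosstalk and magnification corrections the paper describes, and the flash order is an assumption):

```python
import numpy as np

def demux_spectral_shutter(rgb_frames):
    """Turn N frames from a 3CCD colour camera into a 3N-frame
    monochrome sequence, assuming each colour channel was exposed by
    its own LED flash in R, G, B order (sketch of spectral shuttering)."""
    seq = []
    for frame in rgb_frames:       # frame: H x W x 3
        for ch in range(3):        # R, G, B flash order (assumed)
            seq.append(frame[..., ch])
    return np.stack(seq)           # 3N x H x W time sequence

two_frames = np.zeros((2, 4, 4, 3))        # two colour frames
six = demux_spectral_shutter(two_frames)   # six-frame sequence, (6, 4, 4)
```

This is why a 3CCD camera running at f fps yields an effective 3f fps monochrome sequence, or a six-frame burst from two frames.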

  13. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.

  14. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.

  15. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

High-speed photography, a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. High-speed photography utilizes 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and with programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  16. Multiple-frame IR photo-recorder KIT-3M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E; Wilkins, P; Nebeker, N

    2006-05-15

This paper reports the experimental results of a high-speed multi-frame infrared camera developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the spectral range 1-10 micrometers into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 C to 2000 C with an exposure time of 1-20 µs at frame frequencies up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, pulsed-power electric experiments, and applications in the automotive and aircraft industries, as well as for the measurement of the spatial mode characteristics of IR-laser radiation.

  17. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
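The timing idea behind intermittent tracking, opening the shutter at the instant the sinusoidally vibrating mirror sweeps at the target's apparent angular speed so the image is momentarily stationary on the sensor, can be sketched as follows (a hypothetical simplification; the symbols and function are ours, not the authors'):

```python
import math

def shutter_phase(v_target, amp, freq):
    """Time within one mirror cycle at which a resonant mirror of
    angular amplitude `amp` (rad), vibrating sinusoidally at `freq`
    (Hz), sweeps at exactly the target's angular speed `v_target`
    (rad/s). An exposure centred on this instant sees a nearly
    stationary image (sketch of the intermittent-tracking idea)."""
    peak = amp * 2 * math.pi * freq          # peak mirror angular speed
    if abs(v_target) > peak:
        raise ValueError("target too fast for this mirror")
    # theta(t) = amp*sin(2*pi*f*t)  =>  dtheta/dt = peak*cos(2*pi*f*t)
    return math.acos(v_target / peak) / (2 * math.pi * freq)

# A stationary target is best shot at the mirror's turning point,
# a quarter period into the cycle: 1/(4*750) s for a 750 Hz mirror.
t0 = shutter_phase(0.0, amp=0.01, freq=750.0)
```

Synchronizing the camera's exposure window to this phase, frame after frame, is what removes motion blur without shortening the exposure.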

  18. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture such rapid phenomena at high speed and high resolution. In this paper, we take into account the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype based on a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution several, or even hundreds of, times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023

  19. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture such rapid phenomena at high speed and high resolution. In this paper, we take into account the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype based on a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution several, or even hundreds of, times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera.

  20. High-Speed Videography Instrumentation And Procedures

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1982-02-01

High-speed videography has been an electronic analog of low-speed film cameras, but with the advantages of instant replay and simplicity of operation. Recent advances have pushed frame rates into the realm of the rotating prism camera. Some characteristics of videography systems are discussed in conjunction with applications in sports analysis and with sports equipment testing.

  1. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as on variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked continuously over long distances or across multiple cameras, applicable, for example, in wireless sensor networks for surveillance or navigation.
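The per-camera anomaly step, flagging pixels that change between time-aligned frames, can be sketched as simple frame differencing (a minimal illustration of the detection idea only; the threshold value and names are ours, and the registration and heuristic path-finding steps are omitted):

```python
import numpy as np

def detect_motion(prev, curr, thresh=10):
    """Flag pixels whose change between two time-aligned frames of one
    camera exceeds a threshold -- the anomaly cue that seeds tracking
    (simplified sketch; `thresh` is an assumed tuning parameter)."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh           # boolean motion mask

prev = np.zeros((3, 3), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 50                    # one pixel changed between frames
mask = detect_motion(prev, curr)   # True only at (1, 1)
```

Masks from the two synchronized streams would then be registered and combined, so an object occluded in one view can still be followed in the other.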

  2. Solid state replacement of rotating mirror cameras

    NASA Astrophysics Data System (ADS)

    Frank, Alan M.; Bartolick, Joseph M.

    2007-01-01

Rotating mirror cameras have been the mainstay of megaframe-per-second imaging for decades. There is still no electronic camera that can match a film-based rotating mirror camera for the combination of frame count, speed, resolution and dynamic range. Rotating mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' or 'ISIS' by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. Although there is presently no single-chip camera that can simultaneously match the rotating mirror cameras, the ISIS architecture clearly has the potential to approach their performance.

  3. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
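The coded-aperture measurement model, masking each sub-frame and integrating the masked sub-frames into one camera readout, can be sketched as a forward model (illustration only; the statistical CS inversion used for recovery is not reproduced here, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def coded_exposure(subframes, masks):
    """Coded-aperture CS forward model: each sub-frame is multiplied by
    a random on/off mask and the results are summed into one camera
    frame (sketch; recovery would use statistical CS inversion)."""
    return sum(m * x for m, x in zip(masks, subframes))

T, H, W = 8, 16, 16                          # 8 sub-frames per readout
subframes = rng.random((T, H, W))            # the fast dynamics
masks = (rng.random((T, H, W)) < 0.5).astype(float)  # known coding pattern
frame = coded_exposure(subframes, masks)     # single integrated readout
# One readout now carries (masked) information about all 8 sub-frames,
# which is the claimed 8x frame-rate gain before inversion losses.
```

The compression ratio T trades off against reconstruction quality, which is why the achievable speed-up depends on the image's resolution and signal/noise.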

  4. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGES

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; ...

    2015-08-13

One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  5. Slow Speed--Fast Motion: Time-Lapse Recordings in Physics Education

    ERIC Educational Resources Information Center

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-01-01

    Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed cameras has extended the capabilities of the tool to much smaller time scales, down to the 1 ms range, using frame rates of typically up to 1000 frames s⁻¹, allowing us to study transient physics phenomena happening…

  6. Hypervelocity impact studies using a rotating mirror framing laser shadowgraph camera

    NASA Technical Reports Server (NTRS)

    Parker, Vance C.; Crews, Jeanne Lee

    1988-01-01

    The need to study the effects of the impact of micrometeorites and orbital debris on various space-based systems has brought together the technologies of several companies and individuals in order to provide a successful instrumentation package. A light gas gun was employed to accelerate small projectiles to speeds in excess of 7 km/sec. Their impact on various targets is being studied with the help of a specially designed continuous-access rotating-mirror framing camera. The camera provides 80 frames of data at up to 1 × 10⁶ frames/sec with exposure times of 20 nsec.
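The abstract's figures imply comfortable margins for imaging a 7 km/s projectile; a quick arithmetic check, using only the values quoted above:

```python
# Back-of-the-envelope numbers for the rotating-mirror framing camera
# described above (all values taken from the abstract).
projectile_speed = 7_000.0   # m/s, light-gas-gun projectile
frame_rate = 1.0e6           # frames per second
exposure = 20e-9             # s, per-frame exposure
n_frames = 80

travel_per_frame = projectile_speed / frame_rate  # m moved between frames
motion_blur = projectile_speed * exposure         # m smeared per exposure
record_window = n_frames / frame_rate             # s of total coverage
```

At 10⁶ frames/sec the projectile moves about 7 mm between frames, the 20-ns exposure smears it by only about 0.14 mm, and the 80 frames span an 80-μs record window, so successive frames resolve the impact cleanly.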

  7. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
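The row-wise digital binning described above can be illustrated with synthetic data. The streak geometry and noise level below are assumptions for illustration; the point is that summing 8 adjacent rows in post-processing improves the signal-to-noise ratio of a FLEET-like streak by roughly the square root of the bin size for uncorrelated read noise:

```python
import numpy as np

rng = np.random.default_rng(1)

rows, cols, bin_rows = 64, 256, 8

# Hypothetical FLEET-like signal: a faint horizontal streak (the tagged
# region, ~8 pixels thick) buried in read noise of standard deviation 4.
signal = np.zeros((rows, cols))
signal[28:36, :] = 5.0
image = signal + rng.normal(0.0, 4.0, size=(rows, cols))

# Row-wise digital binning: sum groups of 8 adjacent rows in post-processing.
binned = image.reshape(rows // bin_rows, bin_rows, cols).sum(axis=1)

# Signal adds coherently across the 8 rows while noise adds in quadrature,
# so the streak's SNR improves by about sqrt(bin_rows).
snr_single = signal[30, 0] / 4.0
snr_binned = signal[28:36, 0].sum() / (4.0 * np.sqrt(bin_rows))
```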

  8. High-speed optical 3D sensing and its applications

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro

    2016-12-01

    This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The sensing speeds in focus range from 100 to 1000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, and real-time systems are introduced. Also, three example applications of this type of sensing technology are introduced, including surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.

  9. 3-D Velocimetry of Strombolian Explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.

    2014-12-01

    Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.

  10. Inspecting rapidly moving surfaces for small defects using CNN cameras

    NASA Astrophysics Data System (ADS)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing processes, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm in real time. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz on line cameras, far beyond what available line cameras offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.
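The figure of merit discussed above (lateral feature size versus surface speed) fixes the required sampling rate independently of the camera technology; a quick check with the abstract's numbers:

```python
surface_speed = 10.0     # m/s, wire drawing speed from the abstract
feature_size = 100e-6    # m, smallest defect to resolve

# Lines per second needed so that each 100-um stretch of surface is sampled
# at least twice (Nyquist) as it passes the sensor.
required_line_rate = 2.0 * surface_speed / feature_size   # lines/s

# The quoted area-frame rates (4-10 kHz) and their line-rate equivalents
# (360-880 kHz) imply each frame contributes ~90 effective lines of coverage.
equivalent_lines_per_frame = 360e3 / 4e3
```

The required line rate works out to 200 kHz, which is why the quoted 360 to 880 kHz equivalent rates comfortably cover the 100-μm defects at 10 m/s.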

  11. Modeling of digital information optical encryption system with spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

    State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high speed digital camera, should make it possible to build a high speed optical encryption system. Results of the modeling of a digital information optical encryption system with spatially incoherent illumination are presented. The input information is displayed on the first SLM, and the encryption element on the second SLM. Factors taken into account are: the resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), a low bit error rate and high crypto-strength.
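With spatially incoherent illumination the optical system adds intensities, so encryption acts as a convolution of the input data page with the point-spread function of the encryption element. A minimal NumPy sketch of this model, with an assumed random key PSF and noise-free inverse-filter decryption (the real system also contends with the hologram-reconstruction noise, camera noise, and sampling the abstract lists):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32

# Binary data page displayed on the first SLM (illustrative pattern).
data = rng.integers(0, 2, size=(N, N)).astype(float)

# Encryption element on the second SLM, modeled by its intensity
# point-spread function (normalized, strictly positive for incoherent light).
key_psf = rng.random((N, N))
key_psf /= key_psf.sum()

# Incoherent encryption = circular convolution of data page with key PSF.
encrypted = np.real(np.fft.ifft2(np.fft.fft2(data) * np.fft.fft2(key_psf)))

# Decryption by inverse filtering with the known key (noise-free sketch).
decrypted = np.real(np.fft.ifft2(np.fft.fft2(encrypted) / np.fft.fft2(key_psf)))
```

In the noise-free case the inverse filter recovers the data page exactly; with the noise sources listed in the abstract, a regularized (e.g. Wiener) inverse would be used instead.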

  12. Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.

  13. A Summary of the Evaluation of PPG Herculite XP Glass in Punched Window and Storefront Assemblies

    DTIC Science & Technology

    2013-01-01

    frames for all IGU windows extruded from existing dies. The glazing was secured to the frame on all four sides with a 1/2-in bead width of DOW 995... lite and non-laminated IGU debris tests. A wood frame with a 4-in wide slit was placed behind the window to transform the debris cloud into a narrow... DIC set-up: high-speed camera, laser deflection gauge, shock tube, window, wood frame with slit. Debris tracking set-up: high-speed camera, well-lit backdrop, laser...

  14. Large format geiger-mode avalanche photodiode LADAR camera

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison

    2013-05-01

    Recently Spectrolab has successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWAP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in2/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey a large area more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate is improved by 10 times. With a programmable bin size from 0.3 ps to 0.5 ns and 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of the Spectrolab 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.

  15. Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen

    2018-02-01

    In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications, such as an adaptive optics system (AOS), to perform real-time wavefront measurement. Further, a Camera Link frame grabber with an embedded FPGA is adopted to speed up the sensor's reaction to variation, given its advantage of the highest data transmission bandwidth. Instead of waiting for a frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks, letting the image data transmission synchronize with the wavefront reconstruction. On the other hand, we design a mechanism to control the deformable mirror in the same FPGA and verify the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design achieves a 266 Hz cyclic speed, limited by the camera frame rate, while leaving 40% of the logic slices free for additional design flexibility.

  16. Synchronization of video recording and laser pulses including background light suppression

    NASA Technical Reports Server (NTRS)

    Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)

    2004-01-01

    An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning of the laser pulse in the proper video field allows, after recording, for the viewing of the laser light image with a video monitor using the pause mode on a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides for background light suppression by increasing shutter speed during the frame in which the laser light image is captured. This results in one frame in which the background scene is suppressed while the laser light is unaffected; in all other frames, the shutter speed is slower, allowing for normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.

  17. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rate is increased by 8 times. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
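The TCI measurement and a sparsity-regularized reconstruction can be sketched end-to-end on a toy example. This uses plain ISTA as a simple stand-in for the TwIST and GMM solvers the paper actually compares, with assumed sizes (T=4 rather than 8, a single 8x8 patch) and masks forced to sample the moving pixel so the demo is deterministic:

```python
import numpy as np

rng = np.random.default_rng(5)
T, H, W = 4, 8, 8   # compression ratio T=4 on one 8x8 patch (illustrative)

# Sparse high-speed scene: one bright pixel moving across the patch.
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 3, t + 2] = 1.0

# Per-sub-frame binary coded masks; force each flash to be sampled at least
# once so this small demo is guaranteed to carry the signal.
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)
masks[np.arange(T), 3, np.arange(T) + 2] = 1.0

y = (masks * video).sum(axis=0)   # the single coded frame the camera reads

# ISTA: gradient step on the data term, then an L1 soft-threshold.
lam, step = 0.01, 0.2
x = np.zeros_like(video)
for _ in range(500):
    residual = (masks * x).sum(axis=0) - y
    x = x - step * masks * residual[None, :, :]
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
```

The iterate `x` fits the coded measurement far better than the zero initialization, illustrating how T sub-frames are pulled out of one compressive frame; TwIST and GMM refine this basic idea.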

  18. International Congress on High-Speed Photography and Photonics, 19th, Cambridge, England, Sept. 16-21, 1990, Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garfield, B.R.; Rendell, J.T.

    1991-01-01

    The present conference discusses the application of schlieren photography in industry, laser fiber-optic high speed photography, holographic visualization of hypervelocity explosions, sub-100-picosec X-ray grating cameras, flash soft X-radiography, a novel approach to synchroballistic photography, a programmable image converter framing camera, high speed readout CCDs, an ultrafast optomechanical camera, a femtosec streak tube, a modular streak camera for laser ranging, and human-movement analysis with real-time imaging. Also discussed are high-speed photography of high-resolution moire patterns, a 2D electron-bombarded CCD readout for picosec electrooptical data, laser-generated plasma X-ray diagnostics, 3D shape restoration with virtual grating phase detection, Cu vapor lasers for high-speed photography, a two-frequency picosec laser with electrooptical feedback, the conversion of schlieren systems to high speed interferometers, laser-induced cavitation bubbles, stereo holographic cinematography, a gatable photonic detector, and laser generation of Stoneley waves at liquid-solid boundaries.

  19. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  20. High-speed plasma imaging: A lightning bolt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G.A.; Whiteson, D.O.

    Using a gated intensified digital Kodak Ektapro camera system, the authors captured a lightning bolt at 1,000 frames per second, with a 100-μs exposure time on each consecutive frame. As a thunderstorm approached while darkness descended (7:50 pm) on July 21, 1994, they photographed lightning bolts with an f22 105-mm lens and 100% gain on the intensified camera. This 15-frame sequence shows a cloud-to-ground stroke at a distance of about 1.5 km, with a series of stepped leaders propagating downward, followed by the upward-propagating main return stroke.

  1. Development Of A Dynamic Radiographic Capability Using High-Speed Video

    NASA Astrophysics Data System (ADS)

    Bryant, Lawrence E.

    1985-02-01

    High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging of up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 Kv and 300 Kv constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt 60 source. Use of a maximum aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  2. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels and 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, leading to 10 Mfps full-resolution video capturing with 128 consecutive frames, and 20 Mfps half-resolution video capturing with 256 consecutive frames. The first development model was employed in a high speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were simultaneously achieved, and the performance-improved version has been utilized in the commercialized high-speed video camera since 2015, offering 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as when capturing UHS light emission phenomena.

  3. Commercially available high-speed system for recording and monitoring vocal fold vibrations.

    PubMed

    Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya

    2009-12-01

    We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.

  4. High-Speed Camera and High-Vision Camera Observations of TLEs from Jet Aircraft in Winter Japan and in Summer US

    NASA Astrophysics Data System (ADS)

    Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The time evolution and spatial distributions of transient luminous events (TLEs) are the key parameters for identifying the relationship between TLEs and their parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scale of TLEs is typically less than a few milliseconds, a new imaging technique that enables us to capture images with a high time resolution of < 1 ms is awaited. By courtesy of the "Cosmic Shore" Project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010 in winter Japan. Using the high-speed II-CMOS camera, it is possible to capture images at 8,300 frames per second (fps), which corresponds to a time resolution of 120 us. Using the high-vision three-CCD camera, it is possible to capture high-quality, true-color images of TLEs with a 1920x1080 pixel size at a frame rate of 30 fps. During the two observation flights, we succeeded in detecting 28 sprite events and 3 elves events in total. In response to this success, we conducted a combined aircraft and ground-based campaign of TLE observations in the High Plains in summer US. We installed the same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 to July 10, 2011, we operated aircraft observations on 8 nights, succeeded in capturing TLE images for over a hundred events with the high-vision camera, and acquired over 40 high-speed images simultaneously.
At the presentation, we will outline the two aircraft campaigns, introduce the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and show the initial results of the high-speed image data analysis of TLEs in summer US.

  5. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. 
These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
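The per-ROI, event-driven readout described above implies a small control loop: use a low-rate full frame to locate each target, then re-centre its ROI for the subsequent high-rate reads. A sketch of that update step, with an illustrative function name and a brightest-pixel "tracker" standing in for a real one:

```python
import numpy as np

def roi_for_target(frame, size=32):
    """Recompute a square ROI centred on the brightest target, clamped to
    the sensor bounds. This sketches the per-frame ROI update the RARE
    camera performs; the interface here is illustrative, not the actual
    camera command set."""
    r, c = np.unravel_index(np.argmax(frame), frame.shape)
    half = size // 2
    top = int(np.clip(r - half, 0, frame.shape[0] - size))
    left = int(np.clip(c - half, 0, frame.shape[1] - size))
    return top, left, size, size

# Full-frame snapshot with one bright target; the next high-rate reads
# would be issued against the returned (top, left, height, width) window.
frame = np.zeros((480, 640))
frame[100, 500] = 1.0
roi = roi_for_target(frame)
```

Running one such tracker per target gives the multiple, independently moving ROIs the abstract describes, without host-computer intervention during readout.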

  6. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications, such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications, such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
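The multiple-capture dynamic-range idea mentioned above can be sketched per pixel: take several captures of increasing exposure within one standard frame time, keep the longest unsaturated sample, and normalize it to a common exposure. The irradiance values and the exposure ladder below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

full_well = 255.0
# Per-pixel irradiance in DN per unit exposure; the range exceeds what a
# single exposure can capture without clipping the bright pixel or burying
# the dark one in quantization noise.
irradiance = np.array([1.0, 50.0, 200.0])

# Multiple captures within one standard frame time, doubling the exposure
# each time; bright pixels saturate at the longer exposures.
exposures = np.array([1.0, 2.0, 4.0, 8.0])
samples = np.minimum(irradiance[None, :] * exposures[:, None], full_well)

# For each pixel keep the longest unsaturated capture and normalize it,
# extending range at the highlights while keeping SNR in the shadows.
unsaturated = samples < full_well
best = unsaturated.sum(axis=0) - 1   # saturation is monotone in exposure
estimate = samples[best, np.arange(len(irradiance))] / exposures[best]
```

On-chip, this selection runs during the standard frame time, so the camera still outputs video at the standard rate.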

  7. Measuring full-field displacement spectral components using photographs taken with a DSLR camera via an analogue Fourier integral

    NASA Astrophysics Data System (ADS)

    Javh, Jaka; Slavič, Janko; Boltežar, Miha

    2018-02-01

    Instantaneous full-field displacement fields can be measured using cameras. In fact, using high-speed cameras full-field spectral information up to a couple of kHz can be measured. The trouble is that high-speed cameras capable of measuring high-resolution fields-of-view at high frame rates prove to be very expensive (from tens to hundreds of thousands of euro per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLR, mirrorless and others. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking of the lights modulates the intensity changes of the filmed scene and the camera-image acquisition makes the integration over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
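The measurement principle above (camera integration acting as an analogue Fourier integral) can be verified numerically: when the illumination blinks harmonically at the frequency of interest, the long-exposure pixel value contains the cosine Fourier coefficient of that pixel's intensity variation. The vibration frequency, amplitude, and phase below are assumed values for one pixel:

```python
import numpy as np

f_vib = 120.0        # Hz, assumed structural vibration frequency
a, phi = 0.3, 0.4    # assumed modal amplitude and phase seen by one pixel
fs = 100_000.0
t = np.arange(0.0, 1.0, 1.0 / fs)   # one 1-second exposure, finely sampled

# Scene intensity at the pixel varies with the vibration; the lights blink
# harmonically at the same frequency.
pixel = 1.0 + a * np.cos(2 * np.pi * f_vib * t + phi)
light = 1.0 + np.cos(2 * np.pi * f_vib * t)

# The sensor integrates pixel*light over the exposure; after removing the
# DC term, the cross term is the cosine Fourier coefficient at f_vib.
exposure_value = np.mean(pixel * light)
coefficient = 2.0 * (exposure_value - 1.0)   # ≈ a * cos(phi)
```

Repeating the exposure with the blinking phase shifted by 90° yields the sine coefficient, so a slow camera recovers full-field amplitude and phase at frequencies far above its frame rate.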

  8. Driving techniques for high frame rate CCD camera

    NASA Astrophysics Data System (ADS)

    Guo, Weiqiang; Jin, Longxu; Xiong, Jingwu

    2008-03-01

    This paper describes a high-frame rate CCD camera capable of operating at 100 frames/s. This camera utilizes Kodak KAI-0340, an interline transfer CCD with 640(vertical)×480(horizontal) pixels. Two output ports are used to read out CCD data and pixel rates approaching 30 MHz. Because of its reduced effective opacity of vertical charge transfer registers, interline transfer CCD can cause undesired image artifacts, such as random white spots and smear generated in the registers. To increase frame rate, a kind of speed-up structure has been incorporated inside KAI-0340, then it is vulnerable to a vertical stripe effect. The phenomena which mentioned above may severely impair the image quality. To solve these problems, some electronic methods of eliminating these artifacts are adopted. Special clocking mode can dump the unwanted charge quickly, then the fast readout of the images, cleared of smear, follows immediately. Amplifier is used to sense and correct delay mismatch between the dual phase vertical clock pulses, the transition edges become close to coincident, so vertical stripes disappear. Results obtained with the CCD camera are shown.

  9. A Probability-Based Algorithm Using Image Sensors to Track the LED in a Vehicle Visible Light Communication System.

    PubMed

    Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik

    2017-02-10

    This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights, and the receivers are high-speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LED or to extract the information embedded in those frames. The proposed algorithm relies not only on pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated, and the position of the LED is then determined from this probability. To verify the suitability of the proposed algorithm, simulations are conducted that consider incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
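The fusion of intensity with a flow-based prediction can be sketched as a simple unnormalized posterior. This is our reading of the idea, not the authors' exact model; the Gaussian widths are made-up parameters.

```python
import math

# Hedged sketch: score each candidate pixel by an intensity likelihood
# times a positional prior around the optical-flow-predicted location.
def led_score(intensity, dist_to_predicted, sigma_i=40.0, sigma_d=5.0):
    like_intensity = math.exp(-((255 - intensity) ** 2) / (2 * sigma_i ** 2))
    prior_position = math.exp(-dist_to_predicted ** 2 / (2 * sigma_d ** 2))
    return like_intensity * prior_position   # unnormalized posterior

def pick_led(pixels):
    """pixels: iterable of (x, y, intensity, dist_to_predicted)."""
    return max(pixels, key=lambda p: led_score(p[2], p[3]))[:2]
```

A blurred but well-predicted pixel can outscore a bright pixel far from the predicted track, which is exactly what rescues the heavily blurred frames.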

  10. High-speed light field camera and frequency division multiplexing for fast multi-plane velocity measurements.

    PubMed

    Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen

    2015-09-21

    Non-intrusive, fast 3D measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of the scattered light is evaluated within the illuminated plane, each pixel allows one measurement, and thus planar measurements with high data rates are possible. While scanning is a standard technique for adding the third dimension, the volumetric data is then not acquired simultaneously. To overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing to eliminate crosstalk between the measurement planes. The plenoptic refocusing remains necessary to achieve a large refocusing range at the high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired at a measurement rate of 0.5 kHz with a single high-speed camera.

  11. Brandaris 128 ultra-high-speed imaging facility: 10 years of operation, updates, and enhanced features

    NASA Astrophysics Data System (ADS)

    Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel

    2012-10-01

    The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.

  12. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, in step with the objects' movement, results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field-programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. The paper gives a detailed description of the digital TDI algorithm implemented on the FPGA, discusses aspects relevant to practical application, and lists key features of the camera.
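The shift-and-accumulate step of digital TDI can be sketched compactly. This is an illustration, not the paper's FPGA implementation, and it assumes the object advances exactly one sensor row per frame, so row k of frame t+k always sees the same object line.

```python
# Minimal digital-TDI sketch: accumulate the rows that tracked the same
# object line across n_stages consecutive frames into one output line.
def digital_tdi(frames, n_stages):
    """frames[t][r][c]: pixel (row r, column c) of frame t."""
    width = len(frames[0][0])
    out = []
    for t in range(len(frames) - n_stages + 1):
        line = [0] * width
        for k in range(n_stages):            # shift and accumulate
            line = [a + b for a, b in zip(line, frames[t + k][k])]
        out.append(line)
    return out
```

Each output line sums n_stages exposures of the same object line, multiplying the effective exposure without motion blur, exactly as charge-domain TDI does on a CCD.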

  13. A High-Speed Spectroscopy System for Observing Lightning and Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Boggs, L.; Liu, N.; Austin, M.; Aguirre, F.; Tilles, J.; Nag, A.; Lazarus, S. M.; Rassoul, H.

    2017-12-01

    Here we present a high-speed spectroscopy system that can be used to record atmospheric electrical discharges, including lightning and transient luminous events. The system consists of a Phantom V1210 high-speed camera, a Volume Phase Holographic (VPH) grism, an optional optical slit, and lenses. The spectrograph can record videos at speeds of 200,000 frames per second and has an effective wavelength band of 550-775 nm for the first-order spectra. When the slit is used, the system has a spectral resolution of about 0.25 nm per pixel. We have constructed a durable enclosure made of heavy-duty aluminum to house the high-speed spectrograph. It has two fans for continuous air flow and a removable tray to mount the spectrograph components. In addition, a Watec video camera (30 frames per second) is attached to the top of the enclosure to provide a scene view. A heavy-duty Pelco pan/tilt motor is used to position the enclosure and can be controlled remotely through a Raspberry Pi computer. An observation campaign was conducted during the summer and fall of 2017 at the Florida Institute of Technology, and several close cloud-to-ground discharges were recorded at 57,000 frames per second. The spectra of a downward stepped negative leader and a positive cloud-to-ground return stroke will be reported.

  14. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure the dynamic phase change of objects, a high-speed video camera with a shutter speed of 10⁻⁵ s is used with a pixelated phase mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface, it is possible to make the reference and object beams, with orthogonal polarization states, coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.

  15. International Congress on High Speed Photography and Photonics, 17th, Pretoria, Republic of South Africa, Sept. 1-5, 1986, Proceedings. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    McDowell, M. W.; Hollingworth, D.

    1986-01-01

    The present conference discusses topics in mining applications of high speed photography, ballistic, shock wave and detonation studies employing high speed photography, laser and X-ray diagnostics, biomechanical photography, millisec-microsec-nanosec-picosec-femtosec photographic methods, holographic, schlieren, and interferometric techniques, and videography. Attention is given to such issues as the pulse-shaping of ultrashort optical pulses, the performance of soft X-ray streak cameras, multiple-frame image tube operation, moire-enlargement motion-raster photography, two-dimensional imaging with tomographic techniques, photochron TV streak cameras, and streak techniques in detonics.

  16. A framed, 16-image Kirkpatrick–Baez x-ray microscope

    DOE PAGES

    Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.; ...

    2017-09-08

    A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser–plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. In conclusion, these core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions.

  17. A framed, 16-image Kirkpatrick–Baez x-ray microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.

    A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser–plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. In conclusion, these core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions.

  18. High-Speed Videography Overview

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1989-02-01

    The field of high-speed videography (HSV) has continued to mature in recent years, owing to a mixture of new technology and extensions of existing technology. Recent low-frame-rate innovations have the potential to dramatically expand the areas of information gathering and motion analysis at all frame rates. Progress at the zero frame rate is bringing the battle of film versus video to the field of still photography. The pressure to push intermediate frame rates higher continues, although the maximum achievable frame rate has remained stable for several years. Higher maximum recording rates appear technologically practical, but economic factors impose severe limitations on development. The application of diverse photographic techniques to video-based systems is under-exploited. The basics of HSV apply to other fields, such as machine vision and robotics. Present motion analysis systems continue to function mainly as an instant-replay replacement for high-speed movie film cameras. The interrelationship among lighting, shuttering, and spatial resolution is examined.

  19. A reference Pelton turbine - High speed visualization in the rotating frame

    NASA Astrophysics Data System (ADS)

    Solemslie, Bjørn W.; Dahlhaug, Ole G.

    2016-11-01

    To enable a detailed study of the flow mechanisms affecting the flow within the reference Pelton runner designed at the Waterpower Laboratory (NTNU), a flow visualization system has been developed. The system enables high-speed filming of the hydraulic surface of a single bucket in the rotating frame of reference. It is built with an angular borescope adapter entering the turbine along the rotational axis and a borescope embedded within a bucket. A stationary high-speed camera located outside the turbine housing is connected to the optical arrangement by a non-contact coupling. The viewpoint of the system includes the whole hydraulic surface of one half of a bucket. The system has been designed to minimize vibrations and to ensure that the vibrations felt by the borescope are the same as those affecting the camera. The preliminary results captured with the system are promising and enable a detailed study of the flow within the turbine.

  20. Optimizing low-light microscopy with back-illuminated electron multiplying charge-coupled device: enhanced sensitivity, speed, and resolution.

    PubMed

    Coates, Colin G; Denvir, Donal J; McHale, Noel G; Thornbury, Keith D; Hollywood, Mark A

    2004-01-01

    The back-illuminated electron multiplying charge-coupled device (EMCCD) camera is having a profound influence on the field of low-light dynamic cellular microscopy, combining highest possible photon collection efficiency with the ability to virtually eliminate the readout noise detection limit. We report here the use of this camera, in 512 x 512 frame-transfer chip format at 10-MHz pixel readout speed, in optimizing a demanding ultra-low-light intracellular calcium flux microscopy setup. The arrangement employed includes a spinning confocal Nipkow disk, which, while facilitating the need to both generate images at very rapid frame rates and minimize background photons, yields very weak signals. The challenge for the camera lies not just in detecting as many of these scarce photons as possible, but also in operating at a frame rate that meets the temporal resolution requirements of many low-light microscopy approaches, a particular demand of smooth muscle calcium flux microscopy. Results presented illustrate both the significant sensitivity improvement offered by this technology over the previous standard in ultra-low-light CCD detection, the GenIII+intensified charge-coupled device (ICCD), and also portray the advanced temporal and spatial resolution capabilities of the EMCCD. Copyright 2004 Society of Photo-Optical Instrumentation Engineers.

  1. Lunar Roving Vehicle gets speed workout by Astronaut John Young

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Lunar Roving Vehicle (LRV) gets a speed workout by Astronaut John W. Young in the 'Grand Prix' run during the third Apollo 16 extravehicular activity (EVA-3) at the Descartes landing site. This view is a frame from motion picture film exposed by a 16mm Maurer camera held by Astronaut Charles M. Duke Jr.

  2. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurements in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. To process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
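The NCC similarity at the heart of the template tracking can be written compactly. This is an illustrative 1-D version of the standard NCC score, not the authors' optimized subpixel implementation.

```python
import math

# Normalized cross-correlation of a template with an equal-length patch:
# mean-subtract both, then divide the dot product by the norms, giving a
# score in [-1, 1] that is invariant to brightness offset and gain.
def ncc(template, patch):
    n = len(template)
    mt, mp = sum(template) / n, sum(patch) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((p - mp) ** 2 for p in patch))
    return num / den if den else 0.0

def best_offset(template, signal):
    """Slide the template over a 1-D signal; return the offset of max NCC."""
    scores = [ncc(template, signal[i:i + len(template)])
              for i in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

The displacement between frames is the shift of the best-scoring offset; the paper's local search restricts the candidate offsets to a neighborhood of the previous match to gain its speedup.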

  3. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, which demand real-time video capture at extremely high frame rates in high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM), which is able to reduce the bandwidth requirements by up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point link.
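One plausible reading of the modulo-PCM idea can be sketched as follows. This is our hedged illustration, not the paper's codec: transmit only the low k bits of each sample and let the decoder resolve the ambiguity by picking the candidate closest to a prediction from the previous reconstructed sample. The bit budget K is a made-up parameter, and the scheme only works while frame-to-frame differences stay below half the modulus.

```python
K = 5                      # assumed bits kept per sample
M = 1 << K                 # modulus

def mpcm_encode(samples):
    return [s % M for s in samples]          # transmit low bits only

def mpcm_decode(codes, first_sample):
    out = [first_sample]                     # first sample sent in full
    for c in codes[1:]:
        pred = out[-1]
        # candidate reconstructions sharing the transmitted residue
        base = pred - (pred % M) + c
        cands = (base - M, base, base + M)
        # valid only if the true sample is within M/2 of the prediction
        out.append(min(cands, key=lambda v: abs(v - pred)))
    return out
```

Dropping the high bits is what shrinks the bit rate relative to plain PCM; smooth signals keep the prediction close enough for exact reconstruction.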

  4. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image traffic in real time; the camera must be placed at least 6 meters above street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques identify and count all vehicles in the image sequences. The system was implemented under Linux on a 1.8-GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of 240×180 pixels and 24 frames per second for images of 180×120 pixels, allowing the system to count vehicles whose speeds do not exceed 150 km/h.
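The detection stage of such a counter can be sketched with background subtraction followed by connected-component counting. This is a toy illustration of the segmentation-and-count idea, not the article's pipeline; the threshold is a made-up parameter.

```python
# Toy vehicle detection: subtract a background frame, threshold the
# difference, and count connected foreground blobs (one blob ~ one vehicle).
def count_blobs(frame, background, thresh=30):
    h, w = len(frame), len(frame[0])
    fg = [[abs(frame[y][x] - background[y][x]) > thresh for x in range(w)]
          for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]               # flood-fill one blob
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and fg[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return blobs
```

In a real system the morphology step mentioned above would first clean the foreground mask so that noise pixels do not become spurious blobs.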

  5. Studies on dynamic behavior of rotating mirrors

    NASA Astrophysics Data System (ADS)

    Li, Jingzhen; Sun, Fengshan; Gong, Xiangdong; Huang, Hongbin; Tian, Jie

    2005-02-01

    A rotating mirror is the kernel unit of a Miller-type high-speed camera, serving both as an imaging element in the optical path and as the element that implements ultrahigh-speed photography. According to Schardin's principle, the information capacity of an ultrahigh-speed rotating-mirror camera depends on the primary wavelength of the lighting used by the camera and on the limit linear velocity at the edge of the rotating mirror; the latter is related to the mirror's material (including technological specifications), cross-section shape, and lateral structure. In this manuscript, the dynamic behavior of high-strength aluminium alloy rotating mirrors is studied, from which it is preliminarily shown that an aluminium alloy rotating mirror can be used as a replacement for a steel or titanium alloy rotating mirror in framing photographic systems, and could also serve as a substitute for a beryllium rotating mirror in streak photographic systems.

  6. High speed movies of turbulence in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, J.L.; Zweben, S.J.; Bose, B.

    2004-10-01

    A high-speed (250 kHz), 300-frame charge-coupled-device camera has been used to image turbulence in the Alcator C-Mod tokamak. The camera system is described and some of its important characteristics are measured, including time response and uniformity over the field of view. The diagnostic has been used in two applications. One uses gas-puff imaging to illuminate the turbulence in the edge/scrape-off-layer region, where D₂ gas puffs localize the emission in a plane perpendicular to the magnetic field when viewed by the camera system. The dynamics of the underlying turbulence around and outside the separatrix are detected in this manner. In a second diagnostic application, the light from an injected, ablating, high-speed Li pellet is observed radially from the outer midplane, and fast poloidal motion of toroidal striations is seen in the Li⁺ light well inside the separatrix.

  7. Darwin's bee-trap: The kinetics of Catasetum, a new world orchid.

    PubMed

    Nicholson, Charles C; Bales, James W; Palmer-Fortune, Joyce E; Nicholson, Robert G

    2008-01-01

    The orchid genus Catasetum employs a hair-trigger-activated pollen release mechanism, which forcibly attaches pollen sacs onto foraging insects in the New World tropics. This remarkable adaptation was studied extensively by Charles Darwin, who termed this rapid response "sensitiveness." Using high-speed video cameras at a frame rate of 1000 fps, this rapid release was filmed, and from the subsequent footage the velocity, speed, acceleration, force, and kinetic energy were computed.

  8. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible-light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  9. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.

  10. A high-speed scintillation-based electronic portal imaging device to quantitatively characterize IMRT delivery.

    PubMed

    Ranade, Manisha K; Lynch, Bart D; Li, Jonathan G; Dempsey, James F

    2006-01-01

    We have developed an electronic portal imaging device (EPID) employing a fast scintillator and a high-speed camera. The device is designed to accurately and independently characterize the fluence delivered by a linear accelerator during intensity modulated radiation therapy (IMRT) with either step-and-shoot or dynamic multileaf collimator (MLC) delivery. Our aim is to accurately obtain the beam shape and fluence of all segments delivered during IMRT, in order to study the nature of discrepancies between the plan and the delivered doses. A commercial high-speed camera was combined with a terbium-doped gadolinium-oxy-sulfide (Gd2O2S:Tb) scintillator to form an EPID for the unaliased capture of two-dimensional fluence distributions of each beam in an IMRT delivery. The high speed EPID was synchronized to the accelerator pulse-forming network and gated to capture every possible pulse emitted from the accelerator, with an approximate frame rate of 360 frames-per-second (fps). A 62-segment beam from a head-and-neck IMRT treatment plan requiring 68 s to deliver was recorded with our high speed EPID producing approximately 6 Gbytes of imaging data. The EPID data were compared with the MLC instruction files and the MLC controller log files. The frames were binned to provide a frame rate of 72 fps with a signal-to-noise ratio that was sufficient to resolve leaf positions and segment fluence. The fractional fluence from the log files and EPID data agreed well. An ambiguity in the motion of the MLC during beam on was resolved. The log files reported leaf motions at the end of 33 of the 42 segments, while the EPID observed leaf motions in only 7 of the 42 segments. The static IMRT segment shapes observed by the high speed EPID were in good agreement with the shapes reported in the log files. The leaf motions observed during beam-on for step-and-shoot delivery were not temporally resolved by the log files.

  11. Relativistic Astronomy

    NASA Astrophysics Data System (ADS)

    Zhang, Bing; Li, Kunyang

    2018-02-01

    The “Breakthrough Starshot” aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to the relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects using a transrelativistic camera may allow one to study the astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples that suggest transrelativistic cameras may make important contributions to astrophysics and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
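The spectrograph and wide-field claims follow from two textbook special-relativity formulas, which can be checked in a few lines. This is our illustration of the standard head-on Doppler shift and aberration formulas, not the paper's full image simulation; the camera speed is a made-up example value.

```python
import math

# Two effects seen by a transrelativistic camera moving at speed beta*c.
def blueshift_headon(beta):
    """Observed/emitted frequency ratio for a source dead ahead."""
    return math.sqrt((1 + beta) / (1 - beta))

def aberrated_angle(theta, beta):
    """Apparent angle (from the velocity vector) in the camera frame of a
    ray at rest-frame angle theta: aberration crowds the sky forward."""
    return math.acos((math.cos(theta) + beta) / (1 + beta * math.cos(theta)))

beta = 0.6                                      # example: camera at 0.6c
ratio = blueshift_headon(beta)                  # frequencies doubled ahead
theta_cam = aberrated_angle(math.pi / 2, beta)  # a 90-deg ray moves forward
```

The blueshift turns the optical band into an infrared spectrograph of the source, while the forward crowding of angles is the wide-field "lens" effect the abstract describes.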

  12. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems are used in many fields of science and engineering. Although high-speed camera systems have reached high performance, most applications use them only to obtain high-speed motion pictures. In some fields of science and technology, however, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas, and molten materials, and recent digital high-speed video imaging technology should be able to extract such information from these objects. For this purpose, we have developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256 (8-bit) intensity levels for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels by this method.

  13. Cranz-Schardin camera with a large working distance for the observation of small scale high-speed flows.

    PubMed

    Skupsch, C; Chaves, H; Brücker, C

    2011-08-01

    The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single CCD cameras. The laser provides pulse energies in the range of 25 mJ with a pulse duration of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescent dye in a cuvette; the beam's coherence is intentionally broken in order to avoid speckle. Four light fibres collect the fluorescence light and are used for illumination, and different fibre lengths delay the illumination between consecutive images. The chosen inter-frame time is 25 ns, corresponding to 40 × 10⁶ frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed; the initial phase of the formation of the jet structure is recorded.
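    The timing numbers are easy to check, and the fibre-length difference needed for one 25 ns inter-frame delay follows from the propagation speed in the fibre. The refractive index of 1.46 for a fused-silica fibre is an assumed value, not taken from the paper:

```python
C = 299_792_458.0      # speed of light in vacuum, m/s
N_FIBRE = 1.46         # assumed refractive index of a silica fibre core

interframe = 25e-9                        # 25 ns between consecutive frames
frame_rate = 1.0 / interframe             # 40e6 frames per second
delta_length = C * interframe / N_FIBRE   # extra fibre per frame of delay, ~5.1 m
```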

  14. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    NASA Astrophysics Data System (ADS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data because of the large number of frames per second, and the data must be transmitted to a higher-level system without limiting the ASIC's capabilities. The Camera Link interface is still one of the fastest communication methods available, allowing transmission speeds up to 800 MB/s. To communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC; at the same time, the same FPGA should be able to transmit data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k-pixel camera to a personal computer.
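    A rough link budget shows why the ~800 MB/s of Camera Link matters for a 65k-pixel counting detector. The pixel matrix and per-pixel bit depth below are illustrative assumptions, not the chip's actual readout format:

```python
pixels = 256 * 256                 # "65k" pixel matrix (assumed geometry)
bits_per_pixel = 16                # assumed counter depth read out per pixel
frame_bytes = pixels * bits_per_pixel // 8   # 131,072 bytes per frame
link_rate = 800e6                  # Camera Link throughput, bytes per second
max_fps = link_rate / frame_bytes  # ~6100 frames per second at this depth
```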

  15. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). These desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color-balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
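    The raw data rate implied by the 3,840 × 2,160, 30 frame/s format explains the need for on-camera FPGA processing. The 24-bit colour depth assumed here is illustrative, not a stated specification:

```python
w, h, fps = 3840, 2160, 30
pixel_rate = w * h * fps           # 248,832,000 pixels per second
raw_bytes_per_s = pixel_rate * 3   # assumed 24-bit colour: ~746 MB/s raw
```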

  16. Comparative Analysis of THOR-NT ATD vs. Hybrid III ATD in Laboratory Vertical Shock Testing

    DTIC Science & Technology

    2013-09-01

    were taken both pretest and post-test for each test event (figure 5). Figure 5. Rigid fixture placed on the drop table with ATD seated: Hybrid III... 3. Experimental Procedure 3.1 Test Setup...frames per second and with a Vision Research Phantom V9.1 (Wayne, NJ) high-speed video camera, sampling 1000 frames per second. 3. Experimental

  17. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a large number of 'digital cameras' aimed at the home market; that category is not considered here. The term 'computer camera' herein means one with low-level computer (and software) control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine-vision tasks, in some cases with a higher measurement rate than video cameras. Several specific applications are described here, including some that use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations in choosing a camera type for a given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects are only relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For a computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  18. Underwater Test Diagnostics Using Explosively Excited Argon And Laser Light Photography Techniques

    NASA Astrophysics Data System (ADS)

    Wisotski, John

    1990-01-01

    This paper presents results of photographic methods employed in underwater tests to study high-velocity fragment deceleration, deformation, and fracture during the perforation of water-backed plates. The methods employed overlapping ultra-high-speed and very-high-speed camera recordings using explosively excited argon and ruby-laser light sources. These sources gave ample light to penetrate a 2.3-meter (7.54-foot) diameter tank of water with enough intensity to photograph displacement-time histories of steel cubes with impact speeds of 1000 to 1500 m/s (3280 to 4920 ft/s), at camera framing rates of 250,000 and 17,000 fr/s, respectively.

  19. Thermographic measurements of high-speed metal cutting

    NASA Astrophysics Data System (ADS)

    Mueller, Bernhard; Renz, Ulrich

    2002-03-01

    Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To obtain images without motion blur, the integration times were reduced to a few microseconds. Since high tool wear influences the measured temperatures, a set-up was realized that enables small cutting lengths. Only single images were recorded, because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera that was used. To expose the camera when the rotating tool is in the middle of the image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns was realized; this enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary, which was obtained with a special close-up lens allowing a resolution of approximately 45 µm. The experimental set-up is described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel are presented for cutting speeds up to 42 m/s.

  20. A higher-speed compressive sensing camera through multi-diode design

    NASA Astrophysics Data System (ADS)

    Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore

    2013-05-01

    Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
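    The single-pixel measurement-and-reconstruction cycle behind each sub-aperture can be sketched in a few lines. The greedy orthogonal-matching-pursuit solver below is one standard sparse-recovery choice, not InView's reconstruction algorithm, and all sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 48, 2                    # 8x8 scene, 48 measurements, 2-sparse

x = np.zeros(n)                        # sparse toy scene
x[rng.choice(n, size=k, replace=False)] = rng.uniform(2.0, 3.0, size=k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random modulation patterns
y = A @ x                                      # sequential photodiode readings

def omp(A, y, k):
    """Orthogonal matching pursuit: pick the pattern column most correlated
    with the residual, re-fit the support by least squares, repeat k times."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)                   # recovers x from m < n measurements
```

In the multi-aperture design, 32 such problems run in parallel, one per photodiode, before the sub-images are stitched together.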

  1. Estimation of vibration frequency of loudspeaker diaphragm by parallel phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Kakue, T.; Endo, Y.; Shimobaba, T.; Ito, T.

    2014-11-01

    We report frequency estimation of a loudspeaker diaphragm vibrating at high speed by parallel phase-shifting digital holography, a technique of single-shot phase-shifting interferometry. This technique records the multiple phase-shifted holograms required for phase-shifting interferometry by space-division multiplexing. We constructed a parallel phase-shifting digital holography system based on a high-speed polarization-imaging camera, which has a micro-polarizer array that selects four linear polarization axes within each 2 × 2 block of pixels. We set a loudspeaker as the object and recorded the vibration of its diaphragm with the constructed system. Holograms of 128 × 128 pixels were recorded at a frame rate of 262,500 frames per second, with a sinusoidal wave input to the loudspeaker via a phone connector. We observed the displacement of the vibrating diaphragm and, by applying frequency analysis to the experimental results, succeeded in estimating the vibration frequency of the loudspeaker diaphragm.
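    The four polarization channels implement standard four-step phase shifting, so recovering the phase from the four simultaneously captured interferograms reduces to an arctangent. A simulated sketch with made-up bias and modulation values:

```python
import numpy as np

a, b = 1.0, 0.5                       # bias and modulation (test values)
phi = np.linspace(-3.0, 3.0, 101)     # "true" object phase, within (-pi, pi)

# The 2x2 micro-polarizer cells sample these four phase-shifted
# interferograms I_k = a + b*cos(phi + k*pi/2) in a single shot
I = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]

phi_hat = np.arctan2(I[3] - I[1], I[0] - I[2])   # recovered phase
```

Since I[3] - I[1] = 2b·sin(phi) and I[0] - I[2] = 2b·cos(phi), the arctangent returns the phase regardless of the bias a and modulation b.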

  2. An Application Of High-Speed Photography To The Real Ignition Course Of Composite Propellants

    NASA Astrophysics Data System (ADS)

    Fusheng, Zhang; Gongshan, Cheng; Yong, Zhang; Fengchun, Li; Fanpei, Lei

    1989-06-01

    The key subject of this paper is the investigation of the actual solid-rocket-motor behavior and the ignition delay time of AP/HTPB composite propellant ignited by high-energy pyrotechnics containing condensed particles. In the experiments, a high-speed camera, a pressure transducer, a photodiode, and a synchro-circuit control system of our own design were used to synchronously observe and record the whole course and details of the ignition, so that the pressure signal, the photodiode signal, and the high-speed photography frames correspond one to one.

  3. A phase-based stereo vision system-on-a-chip.

    PubMed

    Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia

    2007-02-01

    A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase wrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640 × 480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
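    The core of phase-based disparity estimation is that a small image shift appears as a phase difference in a bandpass (Gabor) filter response. A 1-D sketch with a single-frequency test pattern; the filter parameters are invented, not the paper's FPGA filter bank:

```python
import numpy as np

n = 256
x = np.arange(n, dtype=float)
w0 = 2 * np.pi / 40.0                 # Gabor centre frequency, rad/pixel
true_disp = 3.0                       # horizontal shift between the views

left = np.cos(w0 * x)
right = np.cos(w0 * (x - true_disp))  # right view = left shifted by 3 px

t = np.arange(-32, 33, dtype=float)
gabor = np.exp(-t**2 / (2 * 10.0**2)) * np.exp(1j * w0 * t)  # complex filter

L = np.convolve(left, gabor, mode="same")
R = np.convolve(right, gabor, mode="same")

dphi = np.angle(L[n // 2] * np.conj(R[n // 2]))  # local phase difference
disparity = dphi / w0                            # sub-pixel disparity estimate
```

Because phase varies continuously with position, the estimate is naturally sub-pixel, which is the property the FPGA system exploits.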

  4. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents some preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and presence of continuing current in the strokes that form the flash. The analysis is based on the images of a standard video camera (30 frames/s). The results obtained for some flashes are compared to the images of a high-speed CCD camera (1000 frames/s). The camera observing site is located in São José dos Campos (23° S, 46° W) at an altitude of 630 m. The observational site has a nearly 360° field of view at a height of 25 m, making it possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line, and a small crane on the roof. Keywords: video images, lightning, multiplicity, stroke.

  5. Penetration into Granular Earth Materials (Topic H): A Multi-scale Physics-Based Approach Towards Developing a Greater Understanding of Dynamically Loaded Heterogeneous Systems

    DTIC Science & Technology

    2016-08-01

    2.1. Dynamic Dart Gun Experiments...penetration, and cavity formation associated with high-speed projectile penetration of sand. A new half-inch gun was constructed for this project. A...inch gun with them. Data was collected utilizing NSWC's Cordin 550 64-frame high-speed camera. In addition, several students participated in the

  6. Lunar Roving Vehicle gets speed workout by Astronaut John Young

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Lunar Roving Vehicle (LRV) gets a speed workout by Astronaut John W. Young in the 'Grand Prix' run during the third Apollo 16 extravehicular activity (EVA-3) at the Descartes landing site. Note the front wheels of the LRV are off the ground. This view is a frame from motion picture film exposed by a 16mm Maurer camera held by Astronaut Charles M. Duke Jr.

  7. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated, on average, by distances of 13 kilometers. They were located in the Paraíba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera, version 9.1, set to operate at a frame rate of 2,500 frames per second (with a Nikkor AF-S DX 18-55 mm 1:3.5-5.6 G lens in the stationary sensors and an AF-S ED 24 mm 1:1.4 lens in the mobile sensor). All videos were GPS (Global Positioning System) time-stamped. For this work we used a data set collected on four days of manual RAMMER operation during the 2012 and 2013 campaigns. On Feb. 18th the data set comprises 15 flashes recorded by two cameras and 4 flashes recorded by three cameras; on Feb. 19th, 5 flashes registered by two cameras and 1 flash registered by three cameras; on Feb. 22nd, 4 flashes registered by two cameras; and on March 6th, 2 flashes recorded by two cameras. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can introduce imprecision into the optical analysis, so this work aims to evaluate the effect of distance on this parameter with this preliminary data set.
    For the cases that include the color camera, we analyzed the RGB (red, green, blue) channels, compared them with the data provided by the black-and-white cameras for the same event, and examined the influence of these parameters on the luminosity intensity of the flashes. In two peculiar cases, the data obtained at one site showed a stroke, some continuing current during the interval between strokes, and then a subsequent stroke; the other site, however, showed that the "subsequent stroke" was in fact an M-component, since the continuing current had not vanished after its parent stroke. Such events would receive a dubious classification if based only on visual analysis with high-speed cameras, and they are analyzed in this work.

  8. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble

    PubMed Central

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size—a few microns in diameter—and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera were developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation with much lower costs than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  9. A High-Speed Motion-Picture Study of Normal Combustion, Knock and Preignition in a Spark-Ignition Engine

    NASA Technical Reports Server (NTRS)

    Rothrock, A M; Spencer, R C; Miller, Cearcy D

    1941-01-01

    Combustion in a spark-ignition engine was investigated by means of the NACA high-speed motion-picture camera. This camera operates at a speed of 40,000 photographs a second and therefore makes possible the study of changes that take place in intervals as short as 0.000025 second. When the motion pictures are projected at the normal speed of 16 frames a second, any rate of movement shown is slowed down 2,500 times. Photographs are presented of normal combustion, of combustion following preignition, and of knock both with and without preignition. The photographs of combustion show that knock may be preceded by a period of exothermic reaction in the end zone that persists for a time interval of as much as 0.0006 second. The knock itself takes place in 0.00005 second or less.

  10. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera

    PubMed Central

    Yaghoobi Ershadi, Nastaran

    2017-01-01

    Traffic surveillance systems are of interest to many researchers seeking to improve traffic control and reduce the risk caused by accidents. In this area, many published works are concerned only with vehicle detection in normal conditions. The camera may vibrate due to wind or bridge movement, and detection and tracking of vehicles is very difficult in bad winter weather (snowy, rainy, windy, etc.), in the dusty weather of arid and semi-arid regions, at night, and so on. It is also important to estimate vehicle speed in such complicated weather conditions. In this paper, we improved our method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background-subtraction-based strategy combined with extra processing to segment vehicles; here, the extra processing comprised analysis of headlight size, location, and area. Tracking was done between consecutive frames via a generalized particle filter to detect each vehicle and pair its headlights using connected-component analysis, and vehicle counting was performed based on the pairing result. Using the centroid of each blob, we calculated the distance moved between two frames with a simple formula and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records in different conditions, such as dusty or foggy weather, a vibrating camera, and roads with medium-level traffic volumes. The results showed that the new method performed better than our previously published method and other methods, including the Kalman filter and Gaussian model, in different traffic conditions. PMID:29261719
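    The "simple formula" for speed is just centroid displacement divided by the inter-frame time. A sketch; the pixel-to-metre scale factor is an assumed calibration, not a value from the paper:

```python
import math

def speed_kmh(c1, c2, fps, metres_per_pixel):
    """Speed from blob centroids (x, y) in two consecutive frames."""
    d_px = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    return d_px * metres_per_pixel * fps * 3.6   # m/s converted to km/h

# 8-pixel horizontal move at 30 fps with 5 cm/pixel (assumed calibration)
v = speed_kmh((100.0, 200.0), (108.0, 200.0), 30.0, 0.05)
```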

  12. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured-light 3D camera system depend on its ability to handle object surfaces with large reflectance variation, traded off against the number of patterns that must be projected. In this paper, we propose and implement a flexible embedded framework that can trigger the camera one or more times to capture one or more projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so that the system can project different types of patterns for applications with different scan speeds. The system thereby captures a high-quality 3D point cloud even for surfaces with large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is generated adaptively: the position and number of triggers are determined automatically according to the camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it requires no external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506

  13. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescence lifetime imaging is an optical technique that facilitates imaging molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited in acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition-rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed-sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Application of compressed ultrafast photography thus allows an entire fluorescence lifetime image to be acquired with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we demonstrate the ability of this technique to perform single-shot fluorescence lifetime imaging of cells and microspheres.
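    Once CUP yields an intensity decay per pixel from the (x, y, t) cube, a mono-exponential lifetime can be extracted with a log-linear fit. Synthetic noise-free data below; the 2.5 ns lifetime is an invented test value:

```python
import numpy as np

tau_true = 2.5e-9                        # fluorescence lifetime, test value
t = np.linspace(0.0, 10e-9, 50)          # time axis from the (x, y, t) cube
decay = 1000.0 * np.exp(-t / tau_true)   # I(t) = I0 * exp(-t / tau)

# log I(t) is linear in t with slope -1/tau, so a degree-1 fit recovers tau
slope, _ = np.polyfit(t, np.log(decay), 1)
tau_hat = -1.0 / slope
```

With noisy data a weighted or nonlinear least-squares fit would be preferred, but the log-linear form shows the estimator in its simplest shape.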

  14. Application of high-speed photography to chip refining

    NASA Astrophysics Data System (ADS)

    Stationwala, Mustafa I.; Miller, Charles E.; Atack, Douglas; Karnis, A.

    1991-04-01

    Several high-speed photographic methods have been employed to elucidate the mechanistic aspects of producing mechanical pulp in a disc refiner. Material flow patterns of pulp in a refiner were previously recorded by means of a HYCAM camera and a continuous lighting system, which provided cine pictures at up to 10,000 pps. In the present work, an IMACON camera was used to obtain several series of high-resolution, high-speed photographs, each containing an eight-frame sequence obtained at a framing rate of 100,000 pps. These high-resolution photographs made it possible to identify the nature of the fibrous material trapped on the bars of the stationary disc. Tangential movement of fibre flocs on the stator bars, during the passage of bars on the rotating disc over bars on the stationary disc, was also observed. In addition, using a cinestroboscopic technique, a large number of high-resolution pictures were taken at three different positions of the rotating disc relative to the stationary disc. These pictures were analyzed statistically by computer to determine the fractional coverage of the stationary-disc bars with pulp. Information obtained from these studies provides new insights into the mechanism of the refining process.

  15. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, using digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera, capable of capturing images at hundreds of frames per second, and a notebook computer. To process the captured images, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without requiring any pre-designed target panel to be installed on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on the sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm extracts accurate displacement signals and accomplishes vibration measurement of large-scale structures.
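    The Lucas-Kanade idea in its simplest (1-D, translation-only) form is Gauss-Newton on the sum-of-squares error, with the gradient and "Hessian" computed once from the template, in the spirit of the inverse-compositional variant. A didactic sketch, not the authors' modified algorithm:

```python
import numpy as np

def lk_shift_1d(template, signal, iters=50):
    """Estimate d such that signal(x + d) matches template(x)."""
    x = np.arange(template.size, dtype=float)
    g = np.gradient(template)            # template gradient, computed once
    h = np.sum(g * g)                    # 1x1 Gauss-Newton "Hessian"
    d = 0.0
    for _ in range(iters):
        warped = np.interp(x + d, x, signal)   # resample at shifted positions
        d -= np.sum(g * (warped - template)) / h
    return d

x = np.arange(100, dtype=float)
template = np.exp(-(x - 50.0)**2 / 72.0)   # smooth image feature
signal = np.exp(-(x - 51.5)**2 / 72.0)     # same feature shifted by 1.5 px
d = lk_shift_1d(template, signal)          # sub-pixel estimate, ~1.5
```

Precomputing g and h outside the loop is what makes the inverse-compositional family cheap per iteration; the 2-D case replaces h with a 2×2 matrix.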

  16. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software [2, 3]. Video analysis allows the study of motions that cannot easily be measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be captured easily at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this involves using model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
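    The boost acceleration falls out of frame-by-frame positions via a second difference. Synthetic positions below stand in for tracked video data; the 240 fps rate and 50 m/s² acceleration are invented test values:

```python
import numpy as np

fps = 240.0                      # assumed high-frame-rate consumer camera
dt = 1.0 / fps
a_true = 50.0                    # boost-phase acceleration, m/s^2 (test value)

t = np.arange(20) * dt
y = 0.5 * a_true * t**2          # altitude per frame, as video tracking gives

a_est = np.diff(y, 2) / dt**2    # second difference over each frame triple
```

With real tracking data the second difference amplifies pixel noise, so in practice one would fit y(t) to a quadratic instead; the exact synthetic case shows the idea.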

  17. Network-linked long-time recording high-speed video camera system

    NASA Astrophysics Data System (ADS)

    Kimura, Seiji; Tsuji, Masataka

    2001-04-01

    This paper describes a network-oriented, long-recording-time high-speed digital video camera system that utilizes an HDD (hard disk drive) as the recording medium. Semiconductor memories (DRAM, etc.) are the most common image-data recording media in existing high-speed digital video cameras. They are used extensively because of their high-speed writing and reading of picture data. Their drawback is that the recording time is limited to only several seconds because the data volume is very large. A recording time of several seconds is sufficient for many applications; however, a much longer recording time is required in applications where an exact prediction of trigger timing is hard to make. In recent years, the recording density of HDDs has improved dramatically, drawing more attention to their value as a long-recording-time medium. We reasoned that a compact system capable of long-time recording could be built if an HDD were used as the memory unit for high-speed digital image recording. However, the data rate of such a system, recording 640×480 pixel pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/s, far beyond the writing speed of a commonly used HDD. We therefore developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
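The 153.6 Mbyte/s figure quoted in the abstract follows directly from the stated frame geometry, frame rate, and bit depth:

```python
# Uncompressed data rate of the camera described above:
# 640x480 pixels, 500 fps, 8-bit grayscale (1 byte per pixel).
width, height, fps, bytes_per_px = 640, 480, 500, 1
rate_bytes_per_s = width * height * fps * bytes_per_px
print(rate_bytes_per_s / 1e6)  # 153.6 (Mbyte/s), matching the abstract
```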

  18. Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy

    2014-09-01

    A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has been recently fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in the multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm2) stitched CMOS chip of 1100×1100 pixel count, with a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years, in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~ 120ns-to- 1s, and inter-frame time of 250ns to 2s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns wide proton beam pulses, and record up to ~1000-frame radiographic movies typ. of 3-to-30 minute duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.

  19. An approach to instrument qualified visual range

    NASA Astrophysics Data System (ADS)

    Courtade, Benoît; Bonnet, Jordan; Woodruff, Chris; Larson, Josiah; Giles, Andrew; Sonde, Nikhil; Moore, C. J.; Schimon, David; Harris, David Money; Pond, Duane; Way, Scott

    2008-04-01

    This paper describes a system that calculates aircraft visual range with instrumentation alone. A unique message is encoded using modified binary phase shift keying and continuously flashed at high speed by ALSF-II runway approach lights. The message is sampled at 400 frames per second by an aircraft borne high-speed camera. The encoding is designed to avoid visible flicker and minimize frame rate. Instrument qualified visual range is identified as the largest distance at which the aircraft system can acquire and verify the correct, runway-specific signal. Scaled testing indicates that if the system were implemented on one full ALSF-II fixture, instrument qualified range could be established at 5 miles in clear weather conditions.
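The record does not give the actual ALSF-II message format, so the following is only a generic sketch of the idea it describes: binary phase shift keying on a carrier fast enough that the light appears steadily lit, decoded by correlating camera frames against the reference carrier. The frame rate matches the abstract; the carrier frequency and frames-per-bit are illustrative assumptions.

```python
import numpy as np

FPS = 400                # camera frame rate from the abstract
SPB = 8                  # frames per bit (illustrative choice)
# 100 Hz square carrier sampled at 400 fps (4 samples per cycle); well above
# the flicker-fusion threshold, so the fixture looks continuously lit.
ref = np.tile([1.0, 1.0, -1.0, -1.0], SPB // 4)

def encode(bits):
    """BPSK: bit 1 -> carrier, bit 0 -> phase-inverted carrier.
    Mean brightness is the same either way, which suppresses visible flicker."""
    return np.concatenate([ref if b else -ref for b in bits])

def decode(samples):
    """Correlate each bit-length chunk of frames against the reference carrier."""
    return [int(chunk @ ref > 0) for chunk in samples.reshape(-1, SPB)]
```

In the paper's scheme, visual range is then the greatest distance at which this correlation still verifies the runway-specific message.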

  20. High-Speed Photography of Detonation Propagation in Dynamically Precompressed Liquid Explosives

    NASA Astrophysics Data System (ADS)

    Petel, O. E.; Higgins, A. J.; Yoshinaka, A. C.; Zhang, F.

    2007-12-01

    The propagation of detonation in shock-compressed nitromethane was observed with a high-speed framing camera. The test explosive, nitromethane, was compressed by a reverberating shock wave to pressures as high as 10 GPa prior to being detonated by a secondary detonation event. The pressure and density in the test explosive prior to detonation were determined using two methods: manganin stress gauge measurements and LS-DYNA simulations. The velocity of the detonation front was determined from consecutive frames and correlated to the density of the reverberating shock-compressed explosive prior to detonation. Observing detonation propagation under these non-ambient conditions provides data which can be useful in the validation of equation of state models.
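Reading a front velocity off consecutive framing-camera frames is simple arithmetic; the sketch below shows the generic relation. The framing rate and front positions are illustrative, since the record gives neither.

```python
FPS = 1_000_000  # illustrative framing rate; the record does not state the actual value

def front_velocity(positions_mm, fps):
    """Front velocity (m/s) between consecutive frames, given the front
    position (mm) measured in each frame of a framing-camera record."""
    dt = 1.0 / fps
    return [(b - a) * 1e-3 / dt for a, b in zip(positions_mm, positions_mm[1:])]

# e.g. a front advancing 7 mm per 1-microsecond frame interval -> 7000 m/s
velocities = front_velocity([0.0, 7.0, 14.0], FPS)
```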

  1. Selection of optical model of stereophotography experiment for determination of the cloud base height as a problem of testing of statistical hypotheses

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2017-10-01

    Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological analysis of images. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely, a transformation of a real frame into the result that would be obtained if all the axes were perfectly aligned. For this adjustment, images of stars were used as infinitely distant objects: with perfectly aligned cameras, the star images in the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial approximation of the lens (without lens aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account the lens distortion of the 3rd order, and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would be if measured with an ideally aligned setup, and then the distance to the cloud is calculated. The results were compared with data from a laser range finder.
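The 3rd- and 5th-order distortion models mentioned above can be sketched with a Brown-style radial polynomial. The coefficient names and this particular parameterization are illustrative; the paper's exact deformation models may differ.

```python
def radial_model(x, y, k1, k2=0.0):
    """Map ideal (paraxial) normalized image coordinates to distorted ones.
    k1 gives the 3rd-order radial term; adding k2 gives the 5th-order model."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Distortion grows with radius, which is why the three models in the paper
# differ mainly near the edges of the frame:
center = radial_model(0.2, 0.0, k1=0.05, k2=0.01)
edge = radial_model(1.0, 0.0, k1=0.05, k2=0.01)
```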

  2. High-speed adaptive optics line scan confocal retinal imaging for human eye

    PubMed Central

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Purpose Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  3. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  4. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparison of events between the cameras and the LLS. A RAMMER sensor is basically composed of a computer, a Phantom v9.1 high-speed camera, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
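The core of any two-camera triangulation is intersecting bearing rays from known sensor positions. The toy 2-D version below is only a stand-in for the visual triangulation described above; the RAMMER procedure itself also involves elevation angles and GPS timing, which are omitted here.

```python
import numpy as np

def triangulate(p1, az1, p2, az2):
    """Intersect two bearing rays cast from camera positions p1 and p2
    (azimuths in radians in a common ground frame) and return the 2-D
    intersection point."""
    d1 = np.array([np.cos(az1), np.sin(az1)])
    d2 = np.array([np.cos(az2), np.sin(az2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters (t1, t2)
    t = np.linalg.solve(np.column_stack([d1, -d2]),
                        np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

With real data, the quality of the intersection degrades as the rays become near-parallel, which is why camera baseline (here, ~13 km on average) matters for location accuracy.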

  5. Active hyperspectral imaging using a quantum cascade laser (QCL) array and digital-pixel focal plane array (DFPA) camera.

    PubMed

    Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico

    2014-06-16

    We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.

  6. In-Situ Observation of Horizontal Centrifugal Casting using a High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Esaka, Hisao; Kawai, Kohsuke; Kaneko, Hiroshi; Shinozuka, Kei

    2012-07-01

    In order to understand the solidification process of horizontal centrifugal casting, experimental equipment for in-situ observation using transparent organic substance has been constructed. Succinonitrile-1 mass% water alloy was filled in the round glass cell and the glass cell was completely sealed. To observe the movement of equiaxed grains more clearly and to understand the effect of movement of free surface, a high-speed camera has been installed on the equipment. The most advantageous point of this equipment is that the camera rotates with mold, so that one can observe the same location of the glass cell. Because the recording rate could be increased up to 250 frames per second, the quality of movie was dramatically modified and this made easier and more precise to pursue the certain equiaxed grain. The amplitude of oscillation of equiaxed grain ( = At) decreased as the solidification proceeded.

  7. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme will be studied in this paper. A fast mean-shift-based Target Tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frame/s. PMID:22969397
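The mean-shift update at the heart of such trackers is a simple weighted-average iteration. The 1-D toy below shows only the update rule; the paper's tracker applies the same idea to weighted pixel coordinates inside a target window, and its fast variant and occlusion handling are not reproduced here.

```python
import numpy as np

def mean_shift_1d(points, weights, start, bandwidth=1.0, iters=100):
    """1-D mean-shift mode seeking with a flat kernel: repeatedly move the
    estimate to the weighted mean of the samples inside the current window."""
    x = float(start)
    for _ in range(iters):
        mask = np.abs(points - x) <= bandwidth
        x_new = np.average(points[mask], weights=weights[mask])
        if abs(x_new - x) < 1e-9:
            break
        x = x_new
    return x
```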

  8. High Speed Photographic Analysis Of Railgun Plasmas

    NASA Astrophysics Data System (ADS)

    Macintyre, I. B.

    1985-02-01

    Various experiments are underway at the Materials Research Laboratories, Australian Department of Defence, to develop a theory for the behaviour and propulsion action of plasmas in rail guns. Optical recording and imaging devices, with their low vulnerability to the effects of magnetic and electric fields present in the vicinity of electromagnetic launchers, have proven useful as diagnostic tools. This paper describes photoinstrumentation systems developed to provide visual qualitative assessment of the behaviour of plasma travelling along the bore of railgun launchers. In addition, a quantitative system is incorporated providing continuous data (on a microsecond time scale) of (a) Length of plasma during flight along the launcher bore. (b) Velocity of plasma. (c) Distribution of plasma with respect to time after creation. (d) Plasma intensity profile as it travels along the launcher bore. The evolution of the techniques used is discussed. Two systems were employed. The first utilized a modified high speed streak camera to record the light emitted from the plasma, through specially prepared fibre optic cables. The fibre faces external to the bore were then imaged onto moving film. The technique involved the insertion of fibres through the launcher body to enable the plasma to be viewed at discrete positions as it travelled along the launcher bore. Camera configuration, fibre optic preparation and experimental results are outlined. The second system utilized high speed streak and framing photography in conjunction with accurate sensitometric control procedures on the recording film. The two cameras recorded the plasma travelling along the bore of a specially designed transparent launcher. The streak camera, fitted with a precise slit size, recorded a streak image of the upper brightness range of the plasma as it travelled along the launcher's bore. 
The framing camera recorded an overall view of the launcher and the plasma path, to the maximum possible, governed by the film's ability to reproduce the plasma's brightness range. The instrumentation configuration, calibration, and film measurement using microdensitometer scanning techniques to evaluate inbore plasma behaviour, are also presented.

  9. A portable high-speed camera system for vocal fold examinations.

    PubMed

    Hertegård, Stellan; Larsson, Hans

    2014-11-01

    In this article, we present a new portable low-cost system for high-speed examinations of the vocal folds. Analysis of glottal vibratory parameters from the high-speed recordings is compared with videostroboscopic recordings. The high-speed system is built around a Fastec 1 monochrome camera, which is used with newly developed software, High-Speed Studio (HSS). The HSS has options for video/image recording, contains a database, and offers a set of analysis options. The Fastec/HSS system has been used clinically since 2011 in more than 2000 patient examinations and recordings. The Fastec 1 camera has sufficient time resolution (≥4000 frames/s) and light sensitivity (ISO 3200) to produce images for detailed analyses of parameters pertinent to vocal fold function. The camera can be used with both rigid and flexible endoscopes. The HSS software includes options for analyses of glottal vibrations, such as the kymogram, phase asymmetry, glottal area variation, open and closed phase, and angle of vocal fold abduction. It can also be used for separate analysis of the left and right vocal fold movements, including maximum speed during opening and closing, a parameter possibly related to vocal fold elasticity. A blinded analysis of 32 patients with various voice disorders, examined with both the Fastec/HSS system and videostroboscopy, showed that the high-speed recordings were significantly better for the analysis of glottal parameters (eg, mucosal wave and vibration asymmetry). The monochrome high-speed system can be used in daily clinical work within normal clinical time limits for patient examinations. A detailed analysis can be made of voice disorders and laryngeal pathology at a relatively low cost. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2011-01-01

    In the early 1980s the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames of ISO 100 color negative film contained 12-micron equivalent pixels, for a total of 18 million pixels per frame (6 million pixels per layer), with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased this value to 8 bits per pixel. Higher-ISO-speed films had larger equivalent pixels and fewer pixels per frame but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that once a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper provides a practical look at how many pixels are needed for a good print, based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues is provided within the context of image quality and ISO speed models developed over the last 15 years.
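The "good print" arithmetic in the abstract extends directly to the 4"×6" case in the title. The sketch below scales the quoted 1300×1950-per-layer figure for a 3.5"×5.25" print (about 371 dpi) up to 4"×6"; the helper function is just illustrative arithmetic, not the paper's quality model.

```python
def pixels_for_print(width_in, height_in, dpi):
    """Pixels per layer needed to print a given size at a given density."""
    return round(width_in * dpi) * round(height_in * dpi)

# The paper's 3.5" x 5.25" figure of 1300 x 1950 pixels per layer implies:
dpi = 1300 / 3.5                      # about 371 dpi
# Scaling that density up to a 4" x 6" print gives roughly 3.3 Mpixels
# per layer (about 10 Mpixels over three layers):
pixels = pixels_for_print(4, 6, dpi)  # 1486 x 2229 = 3312294
```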

  11. Strategic options towards an affordable high-performance infrared camera

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    Despite well-documented advantages, the promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel InGaAs uncooled system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption, not only by demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also by illuminating a path towards the price points essential for consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array compatible readout electronics, and dense or ultra-small pixel pitch devices.

  12. C-RED one: ultra-high speed wavefront sensing in the infrared made possible

    NASA Astrophysics Data System (ADS)

    Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian

    2016-07-01

    First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. We will show the performance of the camera and its main features, and compare them to other high-performance wavefront sensing cameras such as OCAM2, in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 673944.

  13. Explosively Driven Particle Fields Imaged Using a High-Speed Framing Camera and Particle Image Velocimetry

    DTIC Science & Technology

    2011-08-01

    inert steel particles and by Frost et al. (2005, 2007) with reactive aluminum and magnesium particles. All used sensitized nitromethane and were...particles in a spherical or cylindrical charge case was used with sensitized nitromethane . Frost et al. (2002), determined that for a given charge

  14. Observations of long delays to detonation in propellant for tests with marginal card gaps

    NASA Technical Reports Server (NTRS)

    Olinger, B.

    1980-01-01

    Using large-scale card gap tests with pin and high-speed framing camera techniques, VRP propellant, and presumably others, was found to transition to detonation at marginal gaps after a long delay. In addition, manganin-constantan gauge measurements were made in the card gap stack.

  15. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

    Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM, and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective in improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used, a super-resolution technique was applied, and the spatial resolution was found to be improved further.
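The center-of-gravity technique mentioned above assigns each neutron-induced light spot a sub-pixel position at its intensity centroid. The sketch below shows only this generic idea, not the facility's actual processing chain.

```python
import numpy as np

def center_of_gravity(spot):
    """Sub-pixel (x, y) position of a small intensity spot as its
    intensity-weighted centroid."""
    ys, xs = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    total = spot.sum()
    return (xs * spot).sum() / total, (ys * spot).sum() / total
```

Because the centroid of a spot spread over several pixels can be located to a fraction of a pixel, this recovers resolution finer than the camera's pixel pitch.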

  16. Particle displacement tracking applied to air flows

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed that can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD-array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates the registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640x480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 sec. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
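Once particle images are matched between exposures, each velocity vector follows from the basic PIV/PDT relation: pixel displacement, times the optical scale factor, divided by the exposure separation. The numbers below are illustrative, not from the paper.

```python
def velocity_m_per_s(dx_px, dy_px, metres_per_pixel, dt_s):
    """Velocity vector from one matched particle-image displacement; the
    scale factor comes from the optical magnification calibration."""
    return (dx_px * metres_per_pixel / dt_s,
            dy_px * metres_per_pixel / dt_s)

# e.g. a 15-pixel displacement at 100 um/pixel over a 10 us exposure
# separation corresponds to 150 m/s, the flow regime quoted above
vx, vy = velocity_m_per_s(15, 0, 100e-6, 10e-6)
```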

  17. High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina

    PubMed Central

    Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng

    2010-01-01

    A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743

  18. High-Speed Photography of Detonation Propagation in Dynamically Precompressed Liquid Explosives

    NASA Astrophysics Data System (ADS)

    Petel, Oren; Higgins, Andrew; Yoshinaka, Akio; Zhang, Fan

    2007-06-01

    The propagation of detonation in shock-compressed nitromethane was observed with a high-speed framing camera. The test explosive, nitromethane, was compressed by a reverberating shock wave to pressures on the order of 10 GPa prior to being detonated by a secondary detonation event. The pressure and density in the test explosive prior to detonation were determined using two methods: manganin strain gauge measurements and LS-DYNA simulations. The velocity of the detonation front was determined from consecutive frames and correlated to the density of the explosive behind the reverberating shock wave, prior to detonation. Observing detonation propagation under these non-ambient conditions provides data which can be useful in the validation of equation of state models.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranson, W.F.; Schaeffel, J.A.; Murphree, E.A.

    The response of prestressed and preheated plates subject to an exponentially decaying blast load was experimentally determined. A grid was reflected from the front surface of the plate and the response was recorded with a high-speed camera. The camera used in this analysis was a rotating drum camera operating at 20,000 frames per second, with a maximum of 224 frames at 39 microseconds separation. In-plane tension loads were applied to the plate by means of air cylinders. The maximum biaxial load applied to the plate was 500 pounds. Plate preheating was obtained with resistance heaters located in the specimen plate holder, with a maximum capability of 500 F. Data analysis was restricted to the maximum conditions at the center of the plate. Strains were determined from the photographic data and the stresses were calculated from the strain data. Results were obtained from zero preload conditions to a maximum of 480 pounds in-plane tension load and a plate temperature of 490 F. The blast load ranged from 6 to 23 psi.

  20. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
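The per-pixel temporal coding described above can be illustrated with a toy simulation: each aperture integrates the fast scene under its own shutter code, and the sub-frames are recovered by inverting the code matrix. The sizes and the triangular code are illustrative choices that keep the toy system exactly invertible; the actual sensor uses random codes (15 apertures for 32 sub-frames) and statistical reconstruction.

```python
import numpy as np

T, P = 8, 16                  # sub-frames and pixels (toy sizes)
rng = np.random.default_rng(0)
x = rng.random((T, P))        # the fast scene: T sub-frames of P pixels

# Shutter code matrix: aperture k integrates sub-frames k..T-1 (a simple
# upper-triangular code chosen so the toy system is exactly invertible).
C = np.triu(np.ones((T, T)))
y = C @ x                     # one coded exposure per aperture

x_hat = np.linalg.solve(C, y) # recover the time-resolved sub-frames
```

Because the coding happens on the focal plane, the effective frame rate is set by the shutter electronics rather than the readout, which is the point made in the abstract.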

  1. Fast-camera imaging on the W7-X stellarator

    NASA Astrophysics Data System (ADS)

    Ballinger, S. B.; Terry, J. L.; Baek, S. G.; Tang, K.; Grulke, O.

    2017-10-01

    Fast cameras recording in the visible range have been used to study filamentary ("blob") edge turbulence in tokamak plasmas, revealing that emissive filaments aligned with the magnetic field can propagate perpendicular to it at speeds on the order of 1 km/s in the scrape-off layer (SOL) or private flux region. The motion of these filaments has been studied in several tokamaks, including MAST, NSTX, and Alcator C-Mod. Filaments were also observed in the W7-X Stellarator using fast cameras during its initial run campaign. For W7-X's upcoming 2017-18 run campaign, we have installed a Phantom V710 fast camera with a view of the machine cross section and part of a divertor module in order to continue studying edge and divertor filaments. The view is coupled to the camera via a coherent fiber bundle. The Phantom camera is able to record at up to 400,000 frames per second and has a spatial resolution of roughly 2 cm in the view. A beam-splitter is used to share the view with a slower machine-protection camera. Stepping-motor actuators tilt the beam-splitter about two orthogonal axes, making it possible to frame user-defined sub-regions anywhere within the view. The diagnostic has been prepared to be remotely controlled via MDSplus. The MIT portion of this work is supported by US DOE award DE-SC0014251.
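    Using the figures quoted above (1 km/s filament speed, 400,000 fps, ~2 cm spatial resolution), a rough estimate of mine shows why this frame rate comfortably resolves filament motion:

```python
# How far does a ~1 km/s filament travel between frames at 400,000 fps?
v = 1000.0          # filament speed, m/s (order of magnitude from the text)
fps = 400_000.0
dx_mm = v / fps * 1000.0
# 2.5 mm per frame, well below the ~2 cm spatial resolution, so the same
# filament is sampled many times as it crosses one resolution element.
print(dx_mm)        # 2.5
```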

  2. TEM Video Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since that publication, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7].
To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions. Figure 1 highlights the results from the Pd nanoparticle experiment. On the left, 10 frames are reconstructed from a single coded frame; the original frames are shown for comparison. On the right, a selection of three frames is shown from reconstructions at compression levels of 10, 20, and 30. The reconstructions, which are not post-processed, are true to the original and degrade in a straightforward manner. The final choice of compression level will obviously depend on both the temporal and spatial resolution required for a specific imaging task, but the results indicate that an increase in speed of better than an order of magnitude should be possible for all experiments. References: [1] P Llull, X Liao, X Yuan et al. Optics Express 21(9), (2013), p. 10526. [2] J Yang, X Yuan, X Liao et al. Image Processing, IEEE Trans 23(11), (2014), p. 4863. [3] X Yuan, J Yang, P Llull et al. In ICIP 2013 (IEEE), p. 14. [4] X Yuan, P Llull, X Liao et al. In CVPR 2014, p. 3318. [5] EJ Candès, J Romberg and T Tao. Information Theory, IEEE Trans 52(2), (2006), p. 489. [6] P Binev, W Dahmen, R DeVore et al. In Modeling Nanoscale Imaging in Electron Microscopy, eds. T Vogt, W Dahmen and P Binev (Springer US), Nanostructure Science and Technology (2012), p. 73. [7] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), p. 41.
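    A minimal numpy sketch of the coded-acquisition idea described above: sub-frames are multiplied by per-pixel temporal codes and integrated into coded camera frames, then recovered by inverting the code matrix. The cited papers use fewer measurements than sub-frames plus a statistical CS inversion; this toy keeps as many coded measurements as sub-frames, with an invertible code, purely so the inversion is exact and self-contained. All sizes and names here are illustrative.

```python
import numpy as np

# Toy model of temporally coded acquisition: T sub-frames are modulated
# by binary codes and integrated into coded camera frames, then inverted.
# (Real coded-aperture CS uses fewer measurements plus a sparsity prior;
# here the code matrix is square and invertible so recovery is exact.)
rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                      # sub-frames, height, width (toy sizes)
video = rng.random((T, H * W))         # ground-truth sub-frames, flattened

C = np.triu(np.ones((T, T)))           # upper-triangular binary codes, det = 1
coded = C @ video                      # each row is one coded camera frame

recovered = np.linalg.solve(C, coded)  # exact inversion of the code matrix
print(np.allclose(recovered, video))   # True
```

Swapping `C` for a wide (K < T) random binary matrix and the solver for a sparsity-regularized one turns this into the compressive setting the papers describe.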

  3. Magneto-optical system for high speed real time imaging.

    PubMed

    Baziljevich, M; Barness, D; Sinvani, M; Perel, E; Shaulov, A; Yeshurun, Y

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat, to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30,000 frames per second have been demonstrated.

  4. Magneto-optical system for high speed real time imaging

    NASA Astrophysics Data System (ADS)

    Baziljevich, M.; Barness, D.; Sinvani, M.; Perel, E.; Shaulov, A.; Yeshurun, Y.

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat, to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30,000 frames per second have been demonstrated.

  5. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    NASA Astrophysics Data System (ADS)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS-Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (10 times or better) over ISS-LIS and GLM, but with lower temporal resolution, so they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse-geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features below the 4-km and 8-km resolutions of ISS-LIS and GLM that may reduce the light reaching the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top.
The rate of change in the geometry of light escaping cloud top, more specifically its radius, was characterized and integrated with the NLDN, ISS-LIS and GLM data to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features such as leaders could also be inferred from the video frames. Testing is being done to determine whether leader speeds can be accurately calculated under certain circumstances.

  6. Ultrahigh-speed X-ray imaging of hypervelocity projectiles

    NASA Astrophysics Data System (ADS)

    Miller, Stuart; Singh, Bipin; Cool, Steven; Entine, Gerald; Campbell, Larry; Bishel, Ron; Rushing, Rick; Nagarkar, Vivek V.

    2011-08-01

    High-speed X-ray imaging is an extremely important modality for healthcare, industrial, military and research applications such as medical computed tomography, non-destructive testing, imaging in-flight projectiles, characterizing exploding ordnance, and analyzing ballistic impacts. We report on the development of a modular, ultrahigh-speed, high-resolution digital X-ray imaging system with large active imaging area and microsecond time resolution, capable of acquiring at a rate of up to 150,000 frames per second. The system is based on a high-resolution, high-efficiency, and fast-decay scintillator screen optically coupled to an ultra-fast image-intensified CCD camera designed for ballistic impact studies and hypervelocity projectile imaging. A specially designed multi-anode, high-fluence X-ray source with 50 ns pulse duration provides a sequence of blur-free images of hypervelocity projectiles traveling at speeds exceeding 8 km/s (18,000 miles/h). This paper will discuss the design, performance, and high frame rate imaging capability of the system.
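    A quick back-of-the-envelope check of my own (not from the paper) of why a 50 ns pulse yields blur-free images of an 8 km/s projectile, compared with how far the projectile travels between frames at 150,000 fps:

```python
# Motion smear during one 50 ns X-ray pulse vs. travel between frames.
v = 8000.0                     # projectile speed, m/s (8 km/s)
pulse_s = 50e-9                # X-ray pulse duration
fps = 150_000.0                # maximum frame rate

blur_mm = v * pulse_s * 1000.0 # ~0.4 mm smear during the pulse
step_mm = v / fps * 1000.0     # ~53 mm travelled between frames
print(blur_mm, step_mm)
```

The sub-millimeter smear during the pulse is what makes each frame effectively blur-free, even though the projectile moves centimeters between frames.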

  7. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
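    The abstract mentions auto-focusing the NVC on a "focus score calculated on the eye image" without specifying the metric. A common stand-in (my assumption, not the paper's stated method) is the variance of a discrete Laplacian response, which rises with image sharpness:

```python
import numpy as np

# One standard image-sharpness metric usable as an NVC "focus score":
# variance of a 3x3 Laplacian response. (The paper does not specify its
# metric; Laplacian variance is a common, illustrative stand-in.)
def focus_score(img: np.ndarray) -> float:
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):              # valid cross-correlation with the kernel
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

sharp = np.tile([0.0, 1.0], (16, 8))     # high-contrast striped pattern
blurred = np.full((16, 16), 0.5)         # featureless (defocused) image
print(focus_score(sharp) > focus_score(blurred))   # True
```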

  8. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  9. A computational approach to real-time image processing for serial time-encoded amplified microscopy

    NASA Astrophysics Data System (ADS)

    Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi

    2016-03-01

    High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable us to capture images with a frame rate 1,000 times faster than using conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 G samples/s and 8-bit resolution to capture the output voltage signal that encodes grayscale images from the STEAM camera; the direct data output from the STEAM camera is therefore a continuous 7.0 GB/s stream. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in real-time identification of small particles (beads), serving as virtual biological cells, flowing through a microfluidic channel.
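    The quoted data rate follows directly from the ADC parameters; a one-line bookkeeping check (mine, matching the figures in the abstract):

```python
# Data-rate bookkeeping for the STEAM front end described above:
# 7.0 Gsamples/s at 8 bits per sample is 7.0 GB/s of continuous output.
sample_rate = 7.0e9        # ADC samples per second
bits_per_sample = 8
bytes_per_s = sample_rate * bits_per_sample / 8
print(bytes_per_s / 1e9)   # 7.0
```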

  10. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contributions are the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can also be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating unnecessary subdivision of frames.

  11. Determination of the Static Friction Coefficient from Circular Motion

    ERIC Educational Resources Information Center

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-01-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames s⁻¹, and the…
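    The standard analysis behind this exercise (not spelled out in the truncated abstract) equates the maximum static friction force with the centripetal force at the instant the coin slips, giving mu_s = omega^2 * r / g. A short sketch with purely illustrative numbers:

```python
import math

# Static friction coefficient from the slip condition in circular motion:
#   m * omega^2 * r = mu_s * m * g   =>   mu_s = omega^2 * r / g
# Illustrative values (NOT from the paper): slipping at 45 rpm, r = 10 cm.
g = 9.81                    # m/s^2
rpm = 45.0                  # hypothetical turntable speed at slipping
r = 0.10                    # hypothetical coin distance from the axis, m

omega = rpm * 2.0 * math.pi / 60.0   # angular speed, rad/s
mu_s = omega**2 * r / g
print(round(mu_s, 3))       # 0.226
```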

  12. In-line particle measurement in a recovery boiler using high-speed infrared imaging

    NASA Astrophysics Data System (ADS)

    Siikanen, Sami; Miikkulainen, Pasi; Kaarre, Marko; Juuti, Mikko

    2012-06-01

    Black liquor is the fuel of Kraft recovery boilers. It is sprayed into the furnace of a recovery boiler through splashplate nozzles. The operation of a recovery boiler is largely influenced by the particle size and particle size distribution of the black liquor. When entrained in the upward-flowing flue gas, small droplets may form carry-over and foul the heat transfer surfaces. Large droplets hit the char bed and the furnace walls without being dried. In this study, particles of black liquor sprays were imaged using a high-speed infrared camera. Measurements were performed in an operating recovery boiler at a pulp mill. The objective was to find a suitable wavelength range and camera settings such as integration time, frame rate and averaging.

  13. Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Tao, Tianyang; Feng, Shijie; Huang, Lei; Asundi, Anand; Chen, Qian

    2018-03-01

    Fringe projection profilometry is a well-established technique for optical 3D shape measurement. However, in many applications it is desirable to make 3D measurements at very high speed, especially with fast-moving or shape-changing objects. In this work, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can realize an acquisition rate of up to 10,000 3D frames per second (fps). The high measurement speed is achieved by reducing the number of projected patterns as well as by high-speed fringe projection hardware. In order to capture 3D information in such a short period of time, we focus on improving the phase recovery, phase unwrapping, and error compensation algorithms, allowing an accurate, unambiguous, and distortion-free 3D point cloud to be reconstructed from every two projected patterns. We also develop high-frame-rate fringe projection hardware by pairing a high-speed camera and a DLP projector, enabling binary pattern switching and precisely synchronized image capture at a frame rate of up to 20,000 fps. Based on this system, we demonstrate high-quality textured 3D imaging of 4 transient scenes: vibrating cantilevers, rotating fan blades, a flying bullet, and a bursting balloon, which were previously difficult or even impossible to capture with conventional approaches.
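    The two headline rates are linked by the two-patterns-per-reconstruction scheme; a trivial check of that bookkeeping:

```python
# Two projected patterns yield one reconstructed 3D frame, so a 20,000 fps
# projector/camera pair supports a 10,000 fps 3D acquisition rate.
projector_fps = 20_000
patterns_per_3d_frame = 2
print(projector_fps // patterns_per_3d_frame)   # 10000
```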

  14. Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Chao; Tao, Tianyang; Feng, Shijie

    We report that fringe projection profilometry is a well-established technique for optical 3D shape measurement. However, in many applications it is desirable to make 3D measurements at very high speed, especially with fast-moving or shape-changing objects. In this work, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can realize an acquisition rate of up to 10,000 3D frames per second (fps). The high measurement speed is achieved by reducing the number of projected patterns as well as by high-speed fringe projection hardware. In order to capture 3D information in such a short period of time, we focus on improving the phase recovery, phase unwrapping, and error compensation algorithms, allowing an accurate, unambiguous, and distortion-free 3D point cloud to be reconstructed from every two projected patterns. We also develop high-frame-rate fringe projection hardware by pairing a high-speed camera and a DLP projector, enabling binary pattern switching and precisely synchronized image capture at a frame rate of up to 20,000 fps. Lastly, based on this system, we demonstrate high-quality textured 3D imaging of 4 transient scenes: vibrating cantilevers, rotating fan blades, a flying bullet, and a bursting balloon, which were previously difficult or even impossible to capture with conventional approaches.

  15. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    NASA Astrophysics Data System (ADS)

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-06-01

    High speed imaging has application in a wide area of industry and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system "UPMC Cam," to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubble under ultrasound are shown to demonstrate the potential application of the system.

  16. Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    DOE PAGES

    Zuo, Chao; Tao, Tianyang; Feng, Shijie; ...

    2017-11-06

    We report that fringe projection profilometry is a well-established technique for optical 3D shape measurement. However, in many applications it is desirable to make 3D measurements at very high speed, especially with fast-moving or shape-changing objects. In this work, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can realize an acquisition rate of up to 10,000 3D frames per second (fps). The high measurement speed is achieved by reducing the number of projected patterns as well as by high-speed fringe projection hardware. In order to capture 3D information in such a short period of time, we focus on improving the phase recovery, phase unwrapping, and error compensation algorithms, allowing an accurate, unambiguous, and distortion-free 3D point cloud to be reconstructed from every two projected patterns. We also develop high-frame-rate fringe projection hardware by pairing a high-speed camera and a DLP projector, enabling binary pattern switching and precisely synchronized image capture at a frame rate of up to 20,000 fps. Lastly, based on this system, we demonstrate high-quality textured 3D imaging of 4 transient scenes: vibrating cantilevers, rotating fan blades, a flying bullet, and a bursting balloon, which were previously difficult or even impossible to capture with conventional approaches.

  17. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    PubMed Central

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-01-01

    High speed imaging has application in a wide area of industry and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system “UPMC Cam,” to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubble under ultrasound are shown to demonstrate the potential application of the system. PMID:23822346

  18. High-Speed Video Observations of a Natural Lightning Stepped Leader

    NASA Astrophysics Data System (ADS)

    Jordan, D. M.; Hill, J. D.; Uman, M. A.; Yoshida, S.; Kawasaki, Z.

    2010-12-01

    High-speed video images of one branch of a natural negative lightning stepped leader were obtained at a frame rate of 300 kfps (3.33 µs exposure) on June 18th, 2010 at the International Center for Lightning Research and Testing (ICLRT) located on the Camp Blanding Army National Guard Base in north-central Florida. The images were acquired using a 20 mm Nikon lens mounted on a Photron SA1.1 high-speed camera. A total of 225 frames (about 0.75 ms) of the downward stepped leader were captured, followed by 45 frames of the leader channel re-illumination by the return stroke and subsequent decay following the ground attachment of the primary leader channel. Luminous characteristics of dart-stepped leader propagation in triggered lightning obtained by Biagi et al. [2009, 2010] and of long laboratory spark formation [e.g., Bazelyan and Raizer, 1998; Gallimberti et al., 2002] are evident in the frames of the natural lightning stepped leader. Space stems/leaders are imaged in twelve different frames at various distances in front of the descending leader tip, which branches into two distinct components 125 frames after the channel enters the field of view. In each case, the space stem/leader appears to connect to the leader tip above in the subsequent frame, forming a new step. Each connection is associated with significant isolated brightening of the channel at the connection point followed by typically three or four frames of upward propagating re-illumination of the existing leader channel. In total, at least 80 individual steps were imaged.

  19. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps). Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial conducted in August 2015.

  20. Schlieren imaging of loud sounds and weak shock waves in air near the limit of visibility

    NASA Astrophysics Data System (ADS)

    Hargather, Michael John; Settles, Gary S.; Madalis, Matthew J.

    2010-02-01

    A large schlieren system with exceptional sensitivity and a high-speed digital camera are used to visualize loud sounds and a variety of common phenomena that produce weak shock waves in the atmosphere. Frame rates varied from 10,000 to 30,000 frames/s with microsecond frame exposures. Sound waves become visible to this instrumentation at frequencies above 10 kHz and sound pressure levels in the 110 dB (6.3 Pa) range and above. The density gradient produced by a weak shock wave is examined and found to depend upon the profile and thickness of the shock as well as the density difference across it. Schlieren visualizations of weak shock waves from common phenomena include loud trumpet notes, various impact phenomena that compress a bubble of air, bursting a toy balloon, popping a champagne cork, snapping a wooden stick, and snapping a wet towel. The balloon burst, snapping a ruler on a table, and snapping the towel and a leather belt all produced readily visible shock-wave phenomena. In contrast, clapping the hands, snapping the stick, and the champagne cork all produced wave trains that were near the weak limit of visibility. Overall, with sensitive optics and a modern high-speed camera, many nonlinear acoustic phenomena in the air can be observed and studied.

  1. Note: Sound recovery from video using SVD-based information extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2 kHz-10 kHz is used to film the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and then assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and the image projections onto a specific OIB can be recovered as understandable acoustical signals. The standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online, within 1 min, from a piece of paper stimulated by sound waves.
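    A minimal numpy sketch of the pipeline described above (frames to column vectors, SVD, projection onto an orthonormal image basis yielding a 1-D acoustic signal). The synthetic "video" simply modulates a fixed pattern with a 256 Hz tone; all sizes and rates are illustrative, not the note's parameters.

```python
import numpy as np

# Sketch of SVD-based sound recovery: frames -> column vectors -> SVD ->
# project frames onto the leading image basis to obtain a 1-D signal.
rng = np.random.default_rng(1)
n_frames, h, w = 400, 8, 8
fps, f_tone = 2000.0, 256.0            # camera rate and a 256 Hz tone

t = np.arange(n_frames) / fps
pattern = rng.random((h, w))           # stand-in for the filmed sub-image
frames = pattern[None, :, :] * (1.0 + 0.1 * np.sin(2 * np.pi * f_tone * t))[:, None, None]

M = frames.reshape(n_frames, -1).T     # each column is one flattened frame
U, s, Vt = np.linalg.svd(M, full_matrices=False)
signal = U[:, 0] @ M                   # projection onto the leading image basis

# The dominant frequency of the recovered signal lands within one
# 5 Hz FFT bin of the 256 Hz tone.
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n_frames, 1.0 / fps)
print(freqs[spec.argmax()])
```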

  2. Spectroscopy and optical imaging of coalescing droplets

    NASA Astrophysics Data System (ADS)

    Ivanov, Maksym; Viderström, Michel; Chang, Kelken; Ramírez Contreras, Claudia; Mehlig, Bernhard; Hanstorp, Dag

    2016-09-01

    We report on experimental investigations of the dynamics of colliding liquid droplets, combining optical trapping, spectroscopy and high-speed color imaging. Two droplets with diameters between 5 and 50 microns are suspended in quiescent air by optical traps. The traps allow us to control the initial positions, and hence the impact parameter and the relative velocity, of the colliding droplets. Movies of the droplet dynamics are recorded using high-speed digital movie cameras at frame rates of up to 63,000 frames per second. A fluorescent dye is added to one of the colliding droplets. We investigate the temporal evolution of the scattered and fluorescence light from the colliding droplets with concurrent spectroscopy and color imaging. This technique can be used to detect the exchange of molecules between a pair of neutral or charged droplets.

  3. The threshold of vapor channel formation in water induced by pulsed CO2 laser

    NASA Astrophysics Data System (ADS)

    Guo, Wenqing; Zhang, Xianzeng; Zhan, Zhenlin; Xie, Shusen

    2012-12-01

    Water plays an important role in laser ablation. There are two main interpretations of laser-water interaction: the hydrokinetic effect and the vapor phenomenon. Each explanation is reasonable in some respects, but neither fully explains the mechanism of laser-water interaction. In this study, the dynamic process of vapor channel formation induced by a pulsed CO2 laser in a static water layer was monitored by a high-speed camera. The wavelength of the pulsed CO2 laser is 10.64 μm and the pulse repetition rate is 60 Hz. The laser power ranged from 1 to 7 W in steps of 0.5 W. The frame rate of the high-speed camera used in the experiment was 80,025 fps. Based on the high-speed camera pictures, the dynamic process of vapor channel formation was examined, and the threshold of vapor channel formation, the pulsation period, the volume, and the maximum depth and corresponding width of the vapor channel were determined. The results showed that the threshold of vapor channel formation was about 2.5 W. Moreover, the pulsation period and the maximum depth and corresponding width of the vapor channel increased with increasing laser power.

  4. High speed Infrared imaging method for observation of the fast varying temperature phenomena

    NASA Astrophysics Data System (ADS)

    Moghadam, Reza; Alavi, Kambiz; Yuan, Baohong

    With recent improvements in high-end commercial R&D camera technologies, many challenges in high-speed IR imaging have been overcome. The core benefits of this technology are the ability to capture fast-varying phenomena without image blur, to acquire enough data to properly characterize dynamic energy, and to increase the dynamic range without compromising the number of frames per second. This study presents a noninvasive method for determining the intensity field of a high-intensity focused ultrasound (HIFU) beam using infrared imaging. A high-speed infrared camera was placed above a tissue-mimicking material heated by HIFU, with no other sensors present in the HIFU axial beam. A MATLAB simulation code was used to compute a finite-element solution of the pressure-wave propagation and heat equations within the phantom, from which the temperature rise in the phantom was obtained. Three different power levels of HIFU transducers were tested, and the predicted temperature increases were within about 25% of the IR measurements. The fundamental theory and methods developed in this research can be used to detect fast-varying temperature phenomena in combination with infrared filters.
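
    The thermal-modelling step can be illustrated with a much simpler stand-in than the paper's MATLAB finite-element model: a 1D explicit finite-difference solution of the heat equation with a source term. All parameter values and names below are illustrative assumptions, not taken from the study.

    ```python
    import numpy as np

    def heat_rise_1d(q, alpha, dx, dt, steps):
        """Temperature rise from T_t = alpha * T_xx + q (explicit finite differences).

        q: heating rate (K/s) deposited at each grid node, e.g. by an ultrasound beam.
        """
        T = np.zeros_like(q, dtype=float)
        r = alpha * dt / dx ** 2
        assert r <= 0.5, "explicit scheme stability limit"
        for _ in range(steps):
            # interior update; end nodes stay at zero rise (ambient boundary)
            T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2]) + dt * q[1:-1]
        return T
    ```

    With a localized source, the computed rise peaks at the heated node and spreads symmetrically, which is the qualitative behaviour the IR camera observes on the phantom surface.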

  5. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.

  6. A new approach to the form and position error measurement of the auto frame surface based on laser

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Li, Wei

    2013-03-01

    An auto frame is a very large workpiece, up to 12 meters long and 2 meters wide, so measuring it by independent manual operation is inconvenient and hard to automate. In this paper we propose a new approach to reconstructing the 3D model of such a large workpiece, especially the auto truck frame, based on multiple pulsed lasers, for the purpose of measuring form and position errors. For each area of interest, it needs just one high-speed camera and two lasers. It is a fast, high-precision and economical approach.

  7. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot, or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes, in real time, the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock, or negative edge, location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways: by using a microcontroller and an onboard or external digital interface for serial data such as RS-232/485, USB, Ethernet, or CAN bus; as parallel digital data; or as an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
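
    The negative-peak search itself is simple. A hedged software analogue of what the hardware computes per line scan (the smoothing width and function name are arbitrary choices, not taken from the NASA design):

    ```python
    import numpy as np

    def shock_location(profile: np.ndarray, smooth: int = 5) -> int:
        """Return the pixel index of the darkest dip in a line-scan intensity profile."""
        kernel = np.ones(smooth) / smooth
        smoothed = np.convolve(profile.astype(float), kernel, mode="same")
        return int(np.argmin(smoothed))  # the shock shadowgraph is the minimum
    ```

    Running this per frame at over 1,000 lines per second is trivial in hardware because each output needs only one pass over the pixel array.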

  8. Dynamic frequency-domain interferometer for absolute distance measurements with high resolution

    NASA Astrophysics Data System (ADS)

    Weng, Jidong; Liu, Shenggang; Ma, Heli; Tao, Tianjiong; Wang, Xiang; Liu, Cangli; Tan, Hua

    2014-11-01

    A unique dynamic frequency-domain interferometer for absolute distance measurement has been developed recently. This paper presents the working principle of the new interferometric system, which uses a photonic crystal fiber to transmit wide-spectrum light beams and a high-speed streak camera or framing camera to record the interference stripes. Preliminary measurements of the harmonic vibrations of a speaker driven by a radio, and of the changes in the tip clearance of a rotating gear wheel, show that this new type of interferometer can perform absolute distance measurements with both high time resolution and high distance resolution.

  9. Development of a drive system for a sequential space camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Solheim, C. D.; Stoap, L. J.

    1976-01-01

    An electronically commutated dc motor is reported for driving the camera claw and magazine, and a stepper motor is described for driving the shutter, with the two motors synchronized electrically. Subsequent tests on the breadboard confirmed the concept, although further development beyond this study is needed. The breadboard testing also established that the electronically commutated motor can control speed over a wide dynamic range and has a high torque capability for accelerating loads. This performance suggested the possibility of eliminating the clutch from the system while retaining all of the other mechanical features of the DAC, if the requirement for independent shutter speeds and frame rates can be removed. Therefore, as a final step in the study, the breadboard shutter and shutter drive were returned to the original DAC configuration, while retaining the brushless dc motor drive.

  10. Real-time color measurement using active illuminant

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji; Horiuchi, Takahiko; Yoshimura, Akihiko

    2010-01-01

    This paper proposes a method for real-time color measurement using an active illuminant. A synchronous measurement system is constructed by combining a high-speed active spectral light source and a high-speed monochrome camera. The light source is a programmable spectral source capable of emitting an arbitrary spectrum at high speed. The essential advantage of this system is that it captures spectral images at high frame rates without using filters. The new method of real-time colorimetry differs from traditional methods based on colorimeters or spectrometers: we project the color-matching functions onto an object surface as spectral illuminants, and can then obtain the CIE XYZ tristimulus values directly from the camera outputs at every point on the surface. We describe the principle of our colorimetric technique based on projection of the color-matching functions and the procedure for realizing a real-time measurement system for a moving object. In an experiment, we examine the performance of real-time color measurement for a static object and a moving object.
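
    The principle reduces to a simple integral: if the projected illuminant spectra are shaped like the colour-matching functions, each monochrome camera reading is already a tristimulus value. A sketch with crude Gaussian stand-ins for the true CIE functions (the stand-in shapes and names are assumptions for illustration only):

    ```python
    import numpy as np

    lam = np.arange(400.0, 701.0, 5.0)  # wavelength grid, nm
    dl = 5.0                            # grid spacing, nm

    def gauss(mu, sigma, a=1.0):
        return a * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

    # Crude Gaussian stand-ins for the CIE x-bar, y-bar, z-bar functions.
    xbar = gauss(600, 35) + 0.35 * gauss(445, 20)
    ybar = gauss(555, 45)
    zbar = 1.7 * gauss(450, 25)

    def tristimulus(reflectance: np.ndarray) -> np.ndarray:
        """Each frame integrates S(lambda) * CMF(lambda): the readings are X, Y, Z."""
        return np.array([(reflectance * cmf).sum() * dl for cmf in (xbar, ybar, zbar)])
    ```

    Because the measurement is linear in reflectance, halving the surface reflectance halves X, Y and Z, with no per-pixel spectral reconstruction needed.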

  11. TH-CD-201-10: Highly Efficient Synchronized High-Speed Scintillation Camera System for Measuring Proton Range, SOBP and Dose Distributions in a 2D-Plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goddu, S; Sun, B; Grantham, K

    2016-06-15

    Purpose: Proton therapy (PT) delivery is complex and extremely dynamic. Therefore, quality assurance testing is vital, but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and a dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton-beam-pulses (PBPs) at ∼504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure time. Light emissions within a 30×30×5 cm3 plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing dose deposition in a 2D plane, with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response as well as the signal-to-noise ratio (SNR) were characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ∼5000 images to extract different beam parameters. Quenching correction factors were established by comparing scintillation Bragg peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg peaks were integrated to ascertain the proton-beam range (PBR), the width of the spread-out Bragg peak (MOD) and distal.

  12. A device for synchronizing biomechanical data with cine film.

    PubMed

    Rome, L C

    1995-03-01

    Biomechanists are faced with two problems in synchronizing continuous physiological data to discrete, frame-based kinematic data from films. First, the accuracy of most synchronization techniques is good only to one frame and hence depends on framing rate. Second, even if perfectly correlated at the beginning of a 'take', the film and physiological data may become progressively desynchronized as the 'take' proceeds. A system is described, which provides synchronization between cine film and continuous physiological data with an accuracy of +/- 0.2 ms, independent of framing rate and the duration of the film 'take'. Shutter pulses from the camera were output to a computer recording system where they were recorded and counted, and to a digital device which counted the pulses and illuminated the count on the bank of LEDs which was filmed with the subject. Synchronization was performed by using the rising edge of the shutter pulse and by comparing the frame number imprinted on the film to the frame number recorded by the computer system. In addition to providing highly accurate synchronization over long film 'takes', this system provides several other advantages. First, having frame numbers imprinted both on the film and computer record greatly facilitates analysis. Second, the LEDs were designed to show the 'take number' while the camera is coming up to speed, thereby avoiding the use of cue cards which disturb the animal. Finally, use of this device results in considerable savings in film.

  13. The application of high-speed cinematography for the quantitative analysis of equine locomotion.

    PubMed

    Fredricson, I; Drevemo, S; Dalin, G; Hjertën, G; Björne, K

    1980-04-01

    Locomotive disorders constitute a serious problem in horse racing which will only be rectified by a better understanding of the causative factors associated with disturbances of gait. This study describes a system for the quantitative analysis of the locomotion of horses at speed. The method is based on high-speed cinematography with a semi-automatic system of analysis of the films. The recordings are made with a 16 mm high-speed camera run at 500 frames per second (fps) and the films are analysed by special film-reading equipment and a mini-computer. The time and linear gait variables are presented in tabular form and the angles and trajectories of the joints and body segments are presented graphically.

  14. Solar Extreme Ultraviolet Rocket Telescope Spectrograph ** SERTS ** Detector and Electronics subsystems

    NASA Astrophysics Data System (ADS)

    Payne, L.; Haas, J. P.; Linard, D.; White, L.

    1997-12-01

    The Laboratory for Astronomy and Solar Physics at Goddard Space Flight Center uses a variety of imaging sensors for its instrumentation programs. This paper describes the detector system for SERTS. The SERTS rocket telescope uses an open-faceplate, single-plate MCP tube as the primary detector for EUV spectra from the Sun. The optical output of this detector is fiber-optically coupled to a cooled, large-format CCD. This CCD is operated using a software-controlled camera controller based upon a design used for the SOHO/CDS mission. The camera is a general-purpose design, with a topology that supports multiple types of imaging devices. Multiport devices (up to 4 ports) and multiphase clocks are supported, as well as variable-speed operation. Clock speeds from 100 kHz to 1 MHz have been used, and the topology is currently being extended to support 10 MHz operation. The form factor for the camera system is based on the popular VME bus. Because the tube is an open-faceplate design, the detector system has an assortment of vacuum doors and plumbing to allow operation in vacuum but provide for safe storage at normal atmosphere. Three vac-ion pumps are used to maintain working vacuum at all times. Marshall Space Flight Center provided the SERTS program with HVPS units for both the vac-ion pumps and the MCP tube. The MCP tube HVPS is a direct derivative of the design used for the SXI mission for NOAA. Auxiliary equipment includes a frame buffer that works either as a multi-frame storage unit or as a photon-counting accumulation unit. This unit also performs interface buffering so that the camera may appear as a piece of GPIB instrumentation.

  15. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.

  16. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically at up to about 1 million frames s-1 at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve these performances; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and contains no movable devices such as occur, for instance, in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the best operating conditions to minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.

  17. CMOS Image Sensors for High Speed Applications.

    PubMed

    El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions due to being fabricated in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4∼5 μm) due to limitations in the optics, CMOS technology scaling can allow for an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications, allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh-acquisition-rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

  18. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
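
    The per-frame 3D reconstruction step is standard linear (DLT) triangulation from synchronized views. A hedged sketch, assuming known 3-by-4 projection matrices for two of the cameras (not the authors' modified algorithm, which additionally models high-velocity and tie-point errors):

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Least-squares 3D point from two pixel observations (DLT triangulation)."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)  # null vector of A = homogeneous solution
        X = Vt[-1]
        return X[:3] / X[3]          # back from homogeneous coordinates
    ```

    Repeating this for the bomb's pixel track in every synchronized frame pair yields the full 3D trajectory, from which ejection velocity and launch angle follow.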

  19. Slow speed—fast motion: time-lapse recordings in physics education

    NASA Astrophysics Data System (ADS)

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-05-01

    Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed cameras has extended the capabilities of the tool to much smaller time scales, down to the 1 ms range, using frame rates of typically up to 1000 frames s-1, allowing us to study transient physics phenomena happening too fast for the naked eye. Here we want to extend the range of phenomena that may be studied by video analysis in the opposite direction, by focusing on much longer time scales ranging from minutes and hours to many days or even months. We discuss this time-lapse method and the needed equipment, and give a few hints on how to produce such recordings for two specific experiments.

  20. Fast Fiber-Coupled Imaging Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas

    HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full-scale 1024-pixel 100 Megaframes/s fiber-coupled camera with 12- or 14-bit depth and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber-optically-coupled imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100-pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority than increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera's performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first-generation prototype system. We experimentally observed backlit high-speed fan blades in initial camera testing and then followed that with full movies and streak images of free-flowing high-speed plasma jets (at 30-50 km/s).
Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques is inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next-generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024-channel camera at its own facility, and a plasma community beta test track, in which selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full-scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test-pixel system deployment sites.

  1. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images in software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames of standard-rate video, leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow more complex algorithms to be performed, both in hardware and software.
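
    The software step that was moved into hardware is essentially a thresholded difference of consecutive frames. A minimal NumPy analogue (the threshold value and names are illustrative assumptions, not the VGS implementation):

    ```python
    import numpy as np

    def frame_difference(prev: np.ndarray, curr: np.ndarray, thresh: int = 12) -> np.ndarray:
        """Binary change mask from two consecutive 8-bit frames."""
        # Widen to int16 so the subtraction cannot wrap around at 0 or 255.
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        return (diff > thresh).astype(np.uint8)
    ```

    In hardware this per-pixel subtract-and-compare runs in a single pass as pixels stream in, which is what frees up the frame interval for higher-level target-tracking logic.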

  2. High-Speed Observer: Automated Streak Detection in SSME Plumes

    NASA Technical Reports Server (NTRS)

    Rieckoff, T. J.; Covan, M.; OFarrell, J. M.

    2001-01-01

    A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.

  3. SarcOptiM for ImageJ: high-frequency online sarcomere length computing on stimulated cardiomyocytes.

    PubMed

    Pasqualin, Côme; Gannier, François; Yu, Angèle; Malécot, Claire O; Bredeloux, Pierre; Maupoil, Véronique

    2016-08-01

    Accurate measurement of cardiomyocyte contraction is a critical issue for scientists working on cardiac physiology and the pathophysiology of diseases involving contraction impairment. Cardiomyocyte contraction can be quantified by measuring sarcomere length, but few tools are available for this, and none is freely distributed. We developed a plug-in (SarcOptiM) for the ImageJ/Fiji image analysis platform developed by the National Institutes of Health. SarcOptiM computes sarcomere length via fast Fourier transform analysis of video frames captured or displayed in ImageJ and thus is not tied to a dedicated video camera. It can work in real time or offline, the latter overcoming rotating-motion or displacement-related artifacts. SarcOptiM includes a simulator and video generator of cardiomyocyte contraction. Acquisition parameters, such as pixel size and camera frame rate, were tested with both experimental recordings of rat ventricular cardiomyocytes and synthetic videos. It is freely distributed, and its source code is available. It works under Windows, Mac, or Linux operating systems. The camera speed is the limiting factor, since the algorithm can compute online sarcomere shortening at frame rates >10 kHz. In conclusion, SarcOptiM is a free and validated user-friendly tool for studying cardiomyocyte contraction in all species, including human. Copyright © 2016 the American Physiological Society.
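
    The underlying computation can be sketched as follows: take the FFT of an intensity profile across the striation pattern and read the sarcomere length off the dominant spatial frequency. This is a hedged illustration of the principle, not SarcOptiM's actual code; the function name and interface are hypothetical.

    ```python
    import numpy as np

    def sarcomere_length_um(profile: np.ndarray, pixel_um: float) -> float:
        """Dominant spatial period of the striation pattern, in micrometres."""
        x = profile - np.mean(profile)           # suppress the DC component
        spec = np.abs(np.fft.rfft(x))
        k = int(np.argmax(spec))                 # dominant spatial-frequency bin
        freq = k / (len(profile) * pixel_um)     # cycles per micrometre
        return 1.0 / freq
    ```

    Because only a 1D FFT per frame is needed, this easily sustains the multi-kilohertz frame rates quoted above.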

  4. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera, for example, has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from a physical limitation on the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  5. Colliding droplets: A short film presentation

    NASA Astrophysics Data System (ADS)

    Hendricks, C. D.

    1981-12-01

    A series of experiments was performed in which liquid droplets were caused to collide. Impact velocities up to several meters per second and droplet diameters up to 600 micrometers were used. The impact parameters in the collisions varied from zero to greater than the sum of the droplet radii. Photographs of the collisions were taken with a high-speed framing camera in order to study the impacts and subsequent behavior of the droplets.

  6. Video-rate or high-precision: a flexible range imaging camera

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512 × 512 pixels) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat-signal frequency selection.

  7. Masked-backlighter technique used to simultaneously image x-ray absorption and x-ray emission from an inertial confinement fusion plasma.

    PubMed

    Marshall, F J; Radha, P B

    2014-11-01

    A method to simultaneously image both the absorption and the self-emission of an imploding inertial confinement fusion plasma has been demonstrated on the OMEGA Laser System. The technique involves the use of a high-Z backlighter, half of which is covered with a low-Z material, and a high-speed x-ray framing camera aligned to capture images backlit by this masked backlighter. Two strips of the four-strip framing camera record images backlit by the high-Z portion of the backlighter, while the other two strips record images aligned with the low-Z portion of the backlighter. The emission from the low-Z material is effectively eliminated by a high-Z filter positioned in front of the framing camera, limiting the detected backlighter emission to that of the principal emission line of the high-Z material. As a result, half of the images are of self-emission from the plasma and the other half are of self-emission plus the backlighter. The advantage of this technique is that the self-emission simultaneous with backlighter absorption is independently measured from a nearby direction. The absorption occurs only in the high-Z backlit frames and is either spatially separated from the emission or the self-emission is suppressed by filtering, or by using a backlighter much brighter than the self-emission, or by subtraction. The masked-backlighter technique has been used on the OMEGA Laser System to simultaneously measure the emission profiles and the absorption profiles of polar-driven implosions.

  8. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  9. High speed video shooting with continuous-wave laser illumination in laboratory modeling of wind - wave interaction

    NASA Astrophysics Data System (ADS)

    Kandaurov, Alexander; Troitskaya, Yuliya; Caulliez, Guillemette; Sergeev, Daniil; Vdovin, Maxim

    2014-05-01

    Three examples of the use of high-speed video filming in the investigation of wind-wave interaction under laboratory conditions are described. Experiments were carried out at the wind-wave stratified flume of IAP RAS (length 10 m, air-channel cross section 0.4 x 0.4 m, wind velocity up to 24 m/s) and at the Large Air-Sea Interaction Facility (LASIF) - MIO/Luminy (length 40 m, air-channel cross section 3.2 x 1.6 m, wind velocity up to 10 m/s). A combination of PIV measurements, optical measurements of the water-surface form, and wave gauges was used for detailed investigation of the characteristics of the wind flow over the water surface. The modified PIV method is based on continuous-wave (CW) laser illumination of the airflow seeded by particles, recorded on high-speed video. During the experiments at the IAP RAS flume, a green (532 nm) CW laser with 1.5 W output power was used as the light-sheet source, and a Videosprint (VS-Fast) high-speed digital camera recorded the visualized airflow at a frame rate of 2000 Hz. The air velocity field was retrieved by processing the PIV images with an adaptive cross-correlation method on a curvilinear grid following the surface-wave profile. The mean wind velocity profiles were retrieved using conditional in-phase averaging as in [1]. In the experiments at LASIF, a more powerful argon laser (4 W, CW) was used, together with a high-speed camera of higher sensitivity and resolution: an Optronics Camrecord CR3000x2, frame rate 3571 Hz, frame size 259×1696 px. In both series of experiments, spherical 0.02 mm polyamide particles with an inertial time of 7 ms were used to seed the airflow. A new particle-seeding system based on air pressure is capable of injecting 2 g of particles per second for 1.3 - 2.4 s without disturbing the flow. Used at LASIF, this system provided high particle density in the PIV images.
In combination with the high-resolution camera, it allowed us to obtain momentum fluxes directly from the measured air-velocity fluctuations; these data were then compared with values retrieved from wind-speed profiles [2]. Visualization of the water-surface structure and of droplets under strong wind conditions was carried out at the IAP RAS flume with a NAC Memrecam HX-3 high-speed camera, which had record-breaking performance at the time. Shooting was performed at frame rates over 4500 Hz at 1080p resolution (1920 x 1080 px). The experimental study of droplets under strong winds revealed a "bag breakup" droplet-production mechanism (observed previously in technical devices for liquid disintegration [3]). Laboratory investigation of this mechanism can improve the parameterization of heat fluxes in models of hurricanes and intense sea storms. This work was supported by RFBR grants (project codes 13-05-00865, 13-05-12093, 12-05-01064, 14-08-31740, 14-05-31415), President Grant for young scientists MK-3550.2014.5, and a grant of the Government of the Russian Federation supporting scientific research projects implemented under the supervision of leading scientists at Russian institutions of higher learning (project code 11.G34.31.0048). References: 1. Troitskaya Yu., D. Sergeev, O. Ermakova, G. Balandina (2011), Statistical Parameters of the Air Turbulent Boundary Layer over Steep Water Waves Measured by the PIV Technique, J. Phys. Oceanogr., 41, 1421-1454. 2. Troitskaya, Y. I., D. A. Sergeev, A. A. Kandaurov, G. A. Baidakov, M. A. Vdovin, and V. I. Kazakov (2012), Laboratory and theoretical modeling of air-sea momentum transfer under severe wind conditions, J. Geophys. Res., 117, C00J21. 3. Villermaux, E. (2007), Fragmentation, Annu. Rev. Fluid Mech., 39, 419-446, doi:10.1146/annurev.fluid.39.050905.110214.
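    The cross-correlation step at the core of PIV processing can be sketched minimally (not the authors' code; the adaptive windowing and curvilinear grid are omitted): the displacement between two interrogation windows is taken from the peak of their FFT-based circular cross-correlation.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows via FFT-based cross-correlation (the core of PIV processing)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above N/2 around to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(int(s) for s in shift)   # (dy, dx) in pixels

# Synthetic particle image circularly shifted by (3, -2) pixels
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -2), axis=(0, 1))
print(piv_displacement(frame, shifted))   # → (3, -2)
```

    Real PIV software refines the correlation peak to sub-pixel accuracy and repeats this per window to build the full velocity field.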

  10. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
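    The stereo-triangulation principle the authors extend (treating the laser projector as a second "camera") can be sketched under simplifying assumptions: given two calibrated rays, one from the camera and one from the projector, the 3-D point is recovered as the midpoint of their closest-approach segment. The geometry below is illustrative, not the paper's calibration.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Triangulate a 3-D point from two rays (origin o, direction d) as the
    midpoint of the closest-approach segment between them."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t1 t2]^T ≈ (o2 - o1) in the least-squares sense
    A = np.stack([d1, -d2], axis=1)
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Two rays that intersect exactly at the point (1, 2, 5)
p = np.array([1.0, 2.0, 5.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
print(triangulate_midpoint(o1, p - o1, o2, p - o2))   # ≈ [1. 2. 5.]
```

    With noisy rays the two lines are skew, and the midpoint formulation gives a sensible least-squares compromise; repeating it per laser point yields the reconstructed surface samples.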

  11. The effectiveness of detection of splashed particles using a system of three integrated high-speed cameras

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Beczek, Michał; Mazur, Rafał; Sochan, Agata; Bieganowski, Andrzej

    2017-04-01

    The phenomenon of splash, which is one of the factors causing erosion of the soil surface, is the subject of research by various scientific teams. One efficient method of observing and analyzing this phenomenon is the use of high-speed cameras that record particles at 2000 frames per second or more. Analysis of the splash phenomenon with high-speed cameras and specialized software can reveal, among other things, the number of ejected particles, their speeds, their trajectories, and the distances over which they are transferred. The paper presents an attempt to evaluate the efficiency of detection of splashed particles using a set of three cameras (Vision Research MIRO 310) and Dantec Dynamics Studio software with the 3D module (Volumetric PTV). To assess the effectiveness of estimating the number of particles, the experiment was performed on glass beads with a diameter of 0.5 mm (corresponding to the sand fraction). Water droplets with a diameter of 4.2 mm fell onto the sample from a height of 1.5 m. Two types of splashed particles were observed: low-range particles (up to 18 mm) splashed at larger angles, and high-range particles (up to 118 mm) splashed at smaller angles. The detection efficiency for the number of splashed particles estimated by the software was 45 - 65% for particles with a large range. The effectiveness of particle detection by the software was calculated by comparison with the number of beads that fell on an adhesive surface around the sample. This work was partly financed by the National Science Centre, Poland; project no. 2014/14/E/ST10/00851.

  12. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes the camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. Besides, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled after the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
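    The pre-computed pixel-mapping idea can be sketched as follows (a toy radial-distortion model with an illustrative k1 coefficient, not the paper's calibrated mapping): the expensive distortion math runs once at startup, and each incoming frame is then corrected by pure array indexing, which is what makes the per-frame cost compatible with real-time rates.

```python
import numpy as np

def build_distortion_lut(h, w, k1, cx, cy):
    """Precompute, once, the distorted source pixel for every corrected pixel
    under a simple radial model x_d = x_u * (1 + k1 * r^2)."""
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    xn, yn = xx - cx, yy - cy
    r2 = (xn**2 + yn**2) / (cx**2 + cy**2)    # normalized squared radius
    src_x = np.clip(np.round(cx + xn * (1 + k1 * r2)), 0, w - 1).astype(int)
    src_y = np.clip(np.round(cy + yn * (1 + k1 * r2)), 0, h - 1).astype(int)
    return src_y, src_x

def correct(frame, lut):
    """Real-time step: a single gather, no per-frame arithmetic."""
    sy, sx = lut
    return frame[sy, sx]

lut = build_distortion_lut(480, 640, k1=0.05, cx=320, cy=240)
frame = np.random.default_rng(1).random((480, 640))
undistorted = correct(frame, lut)
```

    A production system would interpolate between source pixels rather than round to the nearest one, but the nearest-neighbor LUT already shows why the correction cost becomes negligible per frame.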

  13. Full-scale high-speed ``Edgerton'' retroreflective shadowgraphy of gunshots

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2005-11-01

    Almost 1/2 century ago, H. E. ``Doc'' Edgerton demonstrated a simple and elegant direct-shadowgraph technique for imaging large-scale events like explosions and gunshots. Only a retroreflective screen, flashlamp illumination, and an ordinary view camera were required. Retroreflective shadowgraphy has seen occasional use since then, but its unique combination of large scale, simplicity and portability has barely been tapped. It functions well in environments hostile to most optical diagnostics, such as full-scale outdoor daylight ballistics and explosives testing. Here, shadowgrams cast upon a 2.4 m square retroreflective screen are imaged by a Photron Fastcam APX-RS digital camera that is capable of megapixel image resolution at 3000 frames/sec up to 250,000 frames/sec at lower resolution. Microsecond frame exposures are used to examine the external ballistics of several firearms, including a high-powered rifle, an AK-47 submachine gun, and several pistols and revolvers. Muzzle blast phenomena and the mechanism of gunpowder residue deposition on the shooter's hands are clearly visualized. In particular, observing the firing of a pistol with and without a silencer (suppressor) suggests that some of the muzzle blast energy is converted by the silencer into supersonic jet noise.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahim, Farah; Deptuch, Grzegorz; Shenai, Alpana

    The Vertically Integrated Photon Imaging Chip - Large (VIPIC-L) is a large-area, small-pixel (65 μm), 3D-integrated, photon-counting ASIC with zero-suppressed or full-frame dead-time-less data readout. It features a data throughput of 14.4 Gbps per chip, with a full-frame readout speed of 56 kframes/s in the imaging mode. VIPIC-L contains a 192 x 192 pixel array; the total size of the chip is 1.248 cm x 1.248 cm with only a 5 μm periphery, and it contains about 120M transistors. A 1.3M-pixel camera module will be developed by arranging a 6 x 6 array of 3D VIPIC-L's bonded to a large-area silicon sensor on the analog side and to a readout board on the digital side. The readout board hosts a bank of FPGAs, one per VIPIC-L, to allow processing of up to 0.7 Tbps of raw data produced by the camera.
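    The quoted figures are mutually consistent, as a quick back-of-the-envelope check shows (the per-pixel bit depth is inferred from the other numbers, not stated in the record):

```python
# Sanity-check the quoted VIPIC-L numbers
pixels = 192 * 192                  # 36,864 pixels per chip
frames_per_s = 56_000               # full-frame imaging-mode readout
pixel_rate = pixels * frames_per_s  # ≈ 2.06e9 pixels/s per chip

# 14.4 Gbps per chip divided by the pixel rate gives the implied bit depth
bits_per_pixel = 14.4e9 / pixel_rate
print(round(bits_per_pixel, 1))     # ≈ 7.0 bits/pixel (inferred)

# A 6 x 6 chip array gives the quoted ~1.3M-pixel camera module
print(36 * pixels)                  # → 1327104
```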

  15. Comparison of implosion core metrics: A 10 ps dilation X-ray imager vs a 100 ps gated microchannel plate [Comparison of implosion core shape observations, 10 ps dilation X-ray imager vs 100 ps gated microchannel plate

    DOE PAGES

    Nagel, S. R.; Benedetti, L. R.; Bradley, D. K.; ...

    2016-08-05

    The dilation x-ray imager (DIXI) is a high-speed x-ray framing camera that uses the pulse-dilation technique to achieve a temporal resolution of less than 10 ps. This is a 10× improvement over conventional framing cameras currently employed on the National Ignition Facility (NIF) (100 ps resolution), and otherwise only achievable with 1D streaked imaging. A side effect of the dramatically reduced gate width is the comparatively lower detected signal level. We therefore implement a Poisson noise reduction with a non-local principal component analysis method to improve the robustness of the DIXI data analysis. Furthermore, we present results on ignition-relevant experiments at the NIF using DIXI. In particular, we focus on establishing that/when DIXI gives reliable shape metrics (P0, P2, and P4 Legendre modes, and their temporal evolution/swings).

  17. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    NASA Astrophysics Data System (ADS)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After developing the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive-optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal-plane array, a truly disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on board thanks to an FPGA. We show its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera, called C-RED 2, with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. The C-RED 2 characteristics and performance are also described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.

  18. Spallation and fracture resulting from reflected and intersecting stress waves.

    NASA Technical Reports Server (NTRS)

    Kinslow, R.

    1973-01-01

    Discussion of the effects of stress waves produced in solids by explosions or high-velocity impacts. These waves rebound from free surfaces in the form of tensile waves that are capable of causing internal fractures or spallation of the material. The high-speed framing camera is shown to be an important tool for observing stress waves and fracture in transparent targets, and its photographs provide valuable information on the mechanics of fracture.

  19. High Speed Video Measurements of a Magneto-optical Trap

    NASA Astrophysics Data System (ADS)

    Horstman, Luke; Graber, Curtis; Erickson, Seth; Slattery, Anna; Hoyt, Chad

    2016-05-01

    We present a video method for observing the mechanical properties of a lithium magneto-optical trap. A sinusoidally amplitude-modulated laser beam perturbed a collection of trapped ⁷Li atoms, and the oscillatory response was recorded with a NAC Memrecam GX-8 high-speed camera at 10,000 frames per second. We characterized the trap by modeling the oscillating cold atoms as a damped, driven harmonic oscillator. Matlab scripts tracked the atomic-cloud movement and relative phase directly from the captured high-speed video frames. The trap spring constant, with magnetic field gradient bz = 36 G/cm, was measured to be (4.5 ± 0.5) × 10⁻¹⁹ N/m, which implies a trap resonant frequency of 988 ± 55 Hz. Additionally, at bz = 27 G/cm the spring constant was measured to be (2.3 ± 0.2) × 10⁻¹⁹ N/m, corresponding to a resonant frequency of 707 ± 30 Hz. At bz = 18 G/cm these values were (8.8 ± 0.5) × 10⁻²⁰ N/m and 438 ± 13 Hz. NSF #1245573.
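    The quoted spring constants and resonant frequencies are consistent with the harmonic-oscillator relation f = (1/2π)√(k/m) for a lithium-7 atom, as a quick check shows (the ⁷Li mass value below is a standard constant, not from the record):

```python
import math

M_LI7 = 7.016 * 1.6605e-27   # mass of a lithium-7 atom in kg

def resonant_freq(k):
    """Resonant frequency of a harmonically trapped atom: f = (1/2π)·sqrt(k/m)."""
    return math.sqrt(k / M_LI7) / (2 * math.pi)

# The quoted spring constants reproduce the quoted frequencies:
print(round(resonant_freq(4.5e-19)))   # ≈ 989 Hz (quoted: 988 ± 55 Hz)
print(round(resonant_freq(2.3e-19)))   # ≈ 707 Hz (quoted: 707 ± 30 Hz)
print(round(resonant_freq(8.8e-20)))   # ≈ 437 Hz (quoted: 438 ± 13 Hz)
```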

  20. Investigation of television transmission using adaptive delta modulation principles

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1976-01-01

    The results are presented of a study on the use of the delta modulator as a digital encoder of television signals. Computer simulations of different delta modulators were studied in order to find a satisfactory one. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and then implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were tested, along with several error-correction algorithms, via computer simulation. A very high-speed delta modulator was built out of ECL logic, incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. Delta modulators that could achieve significant bandwidth reduction without regard to complexity or speed were also investigated. The first scheme investigated was a real-time frame-to-frame encoding scheme, which required the assembly of fourteen 131,000-bit shift registers as well as a high-speed delta modulator. The other schemes involved the computer simulation of two-dimensional delta modulator algorithms.
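    The basic (non-adaptive) delta modulator underlying these schemes can be sketched in a few lines: the encoder transmits one bit per sample, the sign of the difference between the input and a running estimate, and the decoder integrates those bits with a fixed step. The adaptive variants the study examined vary the step size, which this sketch omits.

```python
import numpy as np

def delta_modulate(signal, step):
    """1-bit delta modulation: transmit only the sign of the difference
    between each input sample and the decoder's running estimate."""
    bits, estimate = [], 0.0
    for x in signal:
        bit = 1 if x >= estimate else 0
        estimate += step if bit else -step   # encoder mirrors the decoder
        bits.append(bit)
    return bits

def delta_demodulate(bits, step):
    """Decoder: integrate the bit stream with a fixed step size."""
    out, estimate = [], 0.0
    for bit in bits:
        estimate += step if bit else -step
        out.append(estimate)
    return out

# Encode and reconstruct a slow sine; the step is chosen large enough to
# avoid slope overload, so the residual is just granular noise.
t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 2 * t)
bits = delta_modulate(x, step=0.1)
recon = np.array(delta_demodulate(bits, step=0.1))
print(np.abs(recon - x).max() < 0.25)   # → True
```

    Too small a step causes slope overload on fast transitions; too large a step raises granular noise, which is exactly the trade-off that motivates the adaptive step-size algorithms studied in the report.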

  1. Full-Frame Reference for Test Photo of Moon

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images.

    Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

    The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.

  2. Multi-mode Observations of Cloud-to-Ground Lightning Strokes

    NASA Astrophysics Data System (ADS)

    Smith, M. W.; Smith, B. J.; Clemenson, M. D.; Zollweg, J. D.

    2015-12-01

    We present hyper-temporal and hyper-spectral data collected using a suite of three Phantom high-speed cameras configured to observe cloud-to-ground lightning strokes. The first camera functioned as a contextual imager to show the location and structure of the strokes. The other two cameras were operated as slit-less spectrometers, with resolutions of 0.2 to 1.0 nm. The imaging camera was operated at a readout rate of 48,000 frames per second and provided an image-based trigger mechanism for the spectrometers. Each spectrometer operated at a readout rate of 400,000 frames per second. The sensors were deployed on the southern edge of Albuquerque, New Mexico and collected data over a 4 week period during the thunderstorm season in the summer of 2015. Strikes observed by the sensor suite were correlated to specific strikes recorded by the National Lightning Data Network (NLDN) and thereby geo-located. Sensor calibration factors, distance to each strike, and calculated values of atmospheric transmission were used to estimate absolute radiometric intensities for the spectral-temporal data. The data that we present show the intensity and time evolution of broadband and line emission features for both leader and return strokes. We highlight several key features and overall statistics of the observations. A companion poster describes a lightning model that is being developed at Sandia National Laboratories.

  3. SUSI 62 A Robust and Safe Parachute Uav with Long Flight Time and Good Payload

    NASA Astrophysics Data System (ADS)

    Thamm, H. P.

    2011-09-01

    In many research areas in the geo-sciences (erosion, land use, land cover change, etc.) and applications (e.g. forest management, mining, land management, etc.) there is a demand for remote sensing images of very high spatial and temporal resolution. Due to the high costs of classic aerial photo campaigns, the use of a UAV is a promising option for obtaining the desired remotely sensed information at the time it is needed. However, the UAV must be easy to operate, safe and robust, and should have a high payload and long flight time. For that purpose, the parachute UAV SUSI 62 was developed. It consists of a steel frame with a powerful 62 cm³ two-stroke engine and a parachute wing. The frame can be easily disassembled for transportation or to replace parts. On the frame there is a gimbal-mounted sensor carrier on which different sensors, standard SLR cameras and/or multi-spectral and thermal sensors, can be mounted. Due to the design of the parachute, the SUSI 62 is very easy to control. Two different parachute sizes are available for different wind-speed conditions. The SUSI 62 has a payload of up to 8 kg, providing options to use different sensors at the same time or to extend flight duration. The SUSI 62 needs a runway of between 10 m and 50 m, depending on the wind conditions. The maximum flight speed is approximately 50 km/h. It can be operated in wind speeds of up to 6 m/s. The design of the system as a parachute UAV makes it comparatively safe, as a failure of the electronics or the remote control only results in the UAV coming to the ground at a slow speed. The video signal from the camera, the GPS coordinates and other flight parameters are transmitted to the ground station in real time. An autopilot is available, which guarantees that the area of investigation is covered at the desired resolution and overlap.
The robustly designed SUSI 62 has been used successfully in Europe, Africa and Australia for scientific projects and also for agricultural, forestry and industrial applications.

  4. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In a traditional 3-D measurement system where processing time is not a key factor, camera lens distortion correction is performed directly. For time-critical high-speed applications, however, the time-consuming correction algorithm is inappropriate to perform directly during the real-time process. To cope with this issue, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes the camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) method is introduced as well for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, 3-D reconstruction can be achieved at 92.34 frames per second.

  5. Extracting information of fixational eye movements through pupil tracking

    NASA Astrophysics Data System (ADS)

    Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng

    2018-01-01

    Human eyes are never completely static, even when fixating on a stationary point. These small, irregular movements, which consist of micro-tremors, micro-saccades and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in the visual process have not been explained clearly, because until now these signals could hardly be extracted completely. In this paper, we describe a new eye-movement detection device based on a high-speed camera. The device includes a beam-splitter mirror, an infrared light source and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, pupil-tracking experiments were conducted. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors and drifts are clearly revealed. The experimental results show that the device is feasible and effective, and it can be applied in further characteristic analysis.
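    Pupil-center localization of the kind described (the paper does not give its exact algorithm) is commonly done by thresholding the dark pupil in the infrared image and taking the centroid of the resulting region; a minimal sketch on a synthetic frame:

```python
import numpy as np

def pupil_center(frame, threshold=50):
    """Locate the pupil as the centroid of pixels darker than a threshold
    (under IR illumination the pupil is the darkest region of the image)."""
    ys, xs = np.nonzero(frame < threshold)
    if len(xs) == 0:
        return None                       # no pupil found in this frame
    return float(xs.mean()), float(ys.mean())   # (x, y) in pixels

# Synthetic frame: bright background with a dark pupil disc at (120, 80)
yy, xx = np.mgrid[0:160, 0:200]
frame = np.full((160, 200), 200.0)
frame[(xx - 120) ** 2 + (yy - 80) ** 2 < 20 ** 2] = 10.0
print(pupil_center(frame))   # → (120.0, 80.0)
```

    Running this per frame at 200 Hz yields the pupil trajectory whose spectrum separates tremor, drift, and micro-saccade components.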

  6. Slow Progress in Dune (Left Rear Wheel)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The left rear wheel of NASA's Mars Exploration Rover Opportunity makes slow but steady progress through soft dune material in this movie clip of frames taken by the rover's rear hazard identification camera over a period of several days. The sequence starts on Opportunity's 460th martian day, or sol (May 10, 2005) and ends 11 days later. In eight drives during that period, Opportunity advanced a total of 26 centimeters (10 inches) while spinning its wheels enough to have driven 46 meters (151 feet) if there were no slippage. The motion appears to speed up near the end of the clip, but that is an artifact of individual frames being taken less frequently.

  7. Slow Progress in Dune (Left Front Wheel)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The left front wheel of NASA's Mars Exploration Rover Opportunity makes slow but steady progress through soft dune material in this movie clip of frames taken by the rover's front hazard identification camera over a period of several days. The sequence starts on Opportunity's 460th martian day, or sol (May 10, 2005) and ends 11 days later. In eight drives during that period, Opportunity advanced a total of 26 centimeters (10 inches) while spinning its wheels enough to have driven 46 meters (151 feet) if there were no slippage. The motion appears to speed up near the end of the clip, but that is an artifact of individual frames being taken less frequently.

  8. Detonation Velocity Measurements from a Digital High-speed Rotating-mirror Framing Camera

    DTIC Science & Technology

    2012-09-01

    The intense light emission produced throughout the energetic material detonation process, however, suggests the alternative use of optical measurement...edge, figure 1). As previously stated, the detonation wave position was typically measured as a result of its light emission. Here, however, it was

  9. High speed, real-time, camera bandwidth converter

    DOEpatents

    Bower, Dan E; Bloom, David A; Curry, James R

    2014-10-21

    Image data from a CMOS sensor with 10-bit resolution is reformatted in real time to allow the data to stream through communications equipment designed to transport data with 8-bit resolution. By reformatting the image data, the full 10-bit-resolution stream is transmitted in real time, without a frame delay, through the 8-bit communication equipment.
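
    The abstract does not disclose the exact reformatting scheme, but one common way to carry 10-bit samples over an 8-bit link is to pack four samples into five bytes; the sketch below is an assumption-laden illustration of that idea, not the patented method:

```python
def pack10(samples):
    """Pack a sequence of 10-bit samples (length divisible by 4) into
    bytes: 4 samples -> 40 bits -> 5 bytes, MSB first."""
    out = bytearray()
    for i in range(0, len(samples), 4):
        bits = 0
        for s in samples[i:i + 4]:
            bits = (bits << 10) | (s & 0x3FF)
        out.extend(bits.to_bytes(5, "big"))
    return bytes(out)

def unpack10(data):
    """Inverse transform: recover the 10-bit samples from the 8-bit stream."""
    samples = []
    for i in range(0, len(data), 5):
        bits = int.from_bytes(data[i:i + 5], "big")
        for shift in (30, 20, 10, 0):
            samples.append((bits >> shift) & 0x3FF)
    return samples
```

    Because packing operates on a fixed, small group of samples, it can run as a streaming transform with no need to buffer a whole frame, consistent with the "without a frame delay" claim.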

  10. Investigating high speed phenomena in laser plasma interactions using dilation x-ray imager (invited).

    PubMed

    Nagel, S R; Hilsabeck, T J; Bell, P M; Bradley, D K; Ayers, M J; Piston, K; Felker, B; Kilkenny, J D; Chung, T; Sammuli, B; Hares, J D; Dymoke-Bradshaw, A K L

    2014-11-01

    The DIlation X-ray Imager (DIXI) is a new, high-speed x-ray framing camera at the National Ignition Facility (NIF) sensitive to x-rays in the range of ≈2-17 keV. DIXI uses the pulse-dilation technique to achieve a temporal resolution of less than 10 ps, a ≈10× improvement over conventional framing cameras currently employed on the NIF (≈100 ps resolution), and otherwise only attainable with 1D streaked imaging. The pulse-dilation technique utilizes a voltage ramp to impart a velocity gradient on the signal-bearing electrons. The temporal response, spatial resolution, and x-ray sensitivity of DIXI were characterized with a short x-ray impulse generated at the COMET laser facility at Lawrence Livermore National Laboratory. At the NIF, a pinhole array 10 cm from target chamber center (tcc) projects images onto the photocathode, situated outside the NIF chamber wall, with a magnification of ≈64×. DIXI will provide important capabilities for warm-dense-matter physics, high-energy-density science, and inertial confinement fusion, adding the ability to temporally resolve hot-spot formation, x-ray emission, fuel motion, and mix levels in the hot-spot at neutron yields of up to 10¹⁷. We present characterization data as well as first results on electron-transport phenomena in buried-layer foil experiments.

  11. Combined two-dimensional velocity and temperature measurements of natural convection using a high-speed camera and temperature-sensitive particles

    NASA Astrophysics Data System (ADS)

    Someya, Satoshi; Li, Yanrong; Ishii, Keiko; Okamoto, Koji

    2011-01-01

    This paper proposes a combined method for two-dimensional temperature and velocity measurements in liquid and gas flows using temperature-sensitive particles (TSPs), a pulsed ultraviolet laser, and a high-speed camera. TSPs respond to temperature changes in the flow and can also serve as tracers for the velocity field. The luminescence from the TSPs was recorded at 15,000 frames per second as sequential images for a lifetime-based temperature analysis. These images were also used for the particle image velocimetry calculations. The temperature field was estimated using several images, based on the lifetime method. The decay curves for various temperature conditions fit well to exponential functions, and from these the decay constants at each temperature were obtained. The proposed technique was applied to measure the temperature and velocity fields in natural convection driven by a Marangoni force and buoyancy in a rectangular tank. The accuracy of the temperature measurement of the proposed technique was ±0.35-0.40°C.
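
    The lifetime-based temperature analysis described above reduces to fitting an exponential decay to the luminescence trace and mapping the fitted decay constant to temperature via calibration. A minimal sketch of the fitting step, assuming noiseless data and a log-linear least-squares fit (names are illustrative):

```python
import numpy as np

def decay_constant(t, intensity):
    """Log-linear least-squares fit of I(t) = I0 * exp(-t / tau);
    returns the fitted luminescence lifetime tau."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope

# The lifetime-to-temperature mapping would then come from the calibrated
# decay constants measured at known temperatures, as in the paper.
```

    The log-linear form makes the fit a single linear regression per pixel or region, which is what keeps the analysis tractable for 15,000 fps image sequences.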

  12. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    NASA Astrophysics Data System (ADS)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  13. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.

  14. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor to within a cubic inch, high sensitivity needing less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications, enabling deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g. focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system, and their forecast cost structure, is presented.

  15. Measurement of instantaneous rotational speed using double-sine-varying-density fringe pattern

    NASA Astrophysics Data System (ADS)

    Zhong, Jianfeng; Zhong, Shuncong; Zhang, Qiukun; Peng, Zhike

    2018-03-01

    Fast and accurate rotational speed measurement is required for both condition monitoring and fault diagnosis of rotating machinery. A vision- and fringe-pattern-based rotational speed measurement system is proposed to measure the instantaneous rotational speed (IRS) with high accuracy and reliability. A special double-sine-varying-density fringe pattern (DSVD-FP) was designed, pasted completely around the shaft surface, and used as the primary angular sensor. The rotational angle can be obtained correctly from the left and right fringe period densities (FPDs) of the DSVD-FP image sequence recorded by a high-speed camera. The instantaneous angular speed (IAS) between two adjacent frames can be calculated from the real-time rotational angle curve, so the IRS can also be obtained accurately and efficiently. Both the measurement principle and the system design of the novel method are presented. The factors influencing the sensing characteristics and measurement accuracy of the system, including the spectral centrobaric correction method (SCCM) for the FPD calculation, the noise sources introduced by the image sensor, the exposure time, and the vibration of the shaft, were investigated through simulations and experiments. The sampling rate of the high-speed camera can be up to 5000 Hz, so the measurement is very fast and a change in rotational speed is sensed within 0.2 ms. The experimental results for different IRS measurements and for characterizing the response of a servo motor demonstrate the high accuracy and speed of the proposed technique, making it attractive for condition monitoring and fault diagnosis of rotating machinery.
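
    The step from a per-frame rotational-angle curve to instantaneous rotational speed can be illustrated in a few lines of numpy. This is a simplified sketch that takes the angle sequence as given (it ignores the FPD extraction itself); names are illustrative:

```python
import numpy as np

def instantaneous_speed_rpm(angle_deg, fs):
    """Instantaneous rotational speed between adjacent frames, in rpm,
    from a rotational-angle curve (degrees) sampled at fs frames/s."""
    dtheta = np.diff(np.unwrap(np.deg2rad(angle_deg)))  # rad per frame
    return dtheta * fs * 60.0 / (2.0 * np.pi)           # rev/min
```

    At a 5000 Hz frame rate, each adjacent-frame difference spans 0.2 ms, which is the speed-change response time quoted in the abstract.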

  16. Simultaneous tracking and regulation visual servoing of wheeled mobile robots with uncalibrated extrinsic parameters

    NASA Astrophysics Data System (ADS)

    Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo

    2018-01-01

    This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.

  17. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483

  18. Engine flow visualization using a copper vapor laser

    NASA Technical Reports Server (NTRS)

    Regan, Carolyn A.; Chun, Kue S.; Schock, Harold J., Jr.

    1987-01-01

    A flow visualization system has been developed to determine the air flow within the combustion chamber of a motored, axisymmetric engine. The engine has been equipped with a transparent quartz cylinder, allowing complete optical access to the chamber. A 40-Watt copper vapor laser is used as the light source. Its beam is focused down to a sheet approximately 1 mm thick. The light plane is passed through the combustion chamber, and illuminates oil particles which were entrained in the intake air. The light scattered off of the particles is recorded by a high speed rotating prism movie camera. A movie is then made showing the air flow within the combustion chamber for an entire four-stroke engine cycle. The system is synchronized so that a pulse generated by the camera triggers the laser's thyratron. The camera is run at 5,000 frames per second; the trigger drives one laser pulse per frame. This paper describes the optics used in the flow visualization system, the synchronization circuit, and presents results obtained from the movie. This is believed to be the first published study showing a planar observation of airflow in a four-stroke piston-cylinder assembly. These flow visualization results have been used to interpret flow velocity measurements previously obtained with a laser Doppler velocimetry system.

  19. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption of the designed system. The FPGA-based machine vision system that we propose has a high frame rate, low latency and a power consumption much lower than commercially available smart camera solutions.

  20. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of the object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-IIPro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.

  1. HiPERCAM: a high-speed quintuple-beam CCD camera for the study of rapid variability in the universe

    NASA Astrophysics Data System (ADS)

    Dhillon, Vikram S.; Marsh, Thomas R.; Bezawada, Naidu; Black, Martin; Dixon, Simon; Gamble, Trevor; Henry, David; Kerry, Paul; Littlefair, Stuart; Lunney, David W.; Morris, Timothy; Osborn, James; Wilson, Richard W.

    2016-08-01

    HiPERCAM is a high-speed camera for the study of rapid variability in the Universe. The project is funded by a €3.5M European Research Council Advanced Grant. HiPERCAM builds on the success of our previous instrument, ULTRACAM, with very significant improvements in performance thanks to the use of the latest technologies. HiPERCAM will use 4 dichroic beamsplitters to image simultaneously in 5 optical channels covering the u'g'r'i'z' bands. Frame rates of over 1000 per second will be achievable using an ESO CCD controller (NGC), with every frame GPS timestamped. The detectors are custom-made, frame-transfer CCDs from e2v, with 4 low-noise (2.5e-) outputs, mounted in small thermoelectrically-cooled heads operated at 180 K, resulting in virtually no dark current. The two reddest CCDs will be deep-depletion devices with anti-etaloning, providing high quantum efficiencies across the red part of the spectrum with no fringing. The instrument will also incorporate scintillation noise correction via the conjugate-plane photometry technique. The opto-mechanical chassis will make use of additive manufacturing techniques in metal to make a light-weight, rigid and temperature-invariant structure. First light is expected on the 4.2m William Herschel Telescope on La Palma in 2017 (on which the field of view will be 10' with a 0.3"/pixel scale), with subsequent use planned on the 10.4m Gran Telescopio Canarias on La Palma (on which the field of view will be 4' with a 0.11"/pixel scale) and the 3.5m New Technology Telescope in Chile.

  2. Time-resolved nanoseconds dynamics of ultrasound contrast agent microbubbles manipulated and controlled by optical tweezers

    NASA Astrophysics Data System (ADS)

    Garbin, Valeria; Cojoc, Dan; Ferrari, Enrico; Di Fabrizio, Enzo; Overvelde, Marlies L. J.; Versluis, Michel; van der Meer, Sander M.; de Jong, Nico; Lohse, Detlef

    2006-08-01

    Optical tweezers enable non-destructive, contact-free manipulation of ultrasound contrast agent (UCA) microbubbles, which are used in medical imaging for enhancing the echogenicity of the blood pool and to quantify organ perfusion. The understanding of the fundamental dynamics of ultrasound-driven contrast agent microbubbles is a first step for exploiting their acoustical properties and to develop new diagnostic and therapeutic applications. In this respect, optical tweezers can be used to study UCA microbubbles under controlled and repeatable conditions, by positioning them away from interfaces and from neighboring bubbles. In addition, a high-speed imaging system is required to record the dynamics of UCA microbubbles in ultrasound, as their oscillations occur on the nanoseconds timescale. In this work, we demonstrate the use of an optical tweezers system combined with a high-speed camera capable of 128-frame recordings at up to 25 million frames per second (Mfps), for the study of individual UCA microbubble dynamics as a function of the distance from solid interfaces.

  3. The application of high-speed TV-holography to time-resolved vibration measurements

    NASA Astrophysics Data System (ADS)

    Buckberry, C.; Reeves, M.; Moore, A. J.; Hand, D. P.; Barton, J. S.; Jones, J. D. C.

    1999-10-01

    We describe an electronic speckle pattern interferometer (ESPI) system that has enabled non-harmonic vibrations to be measured with μs temporal resolution. The short exposure period and high framing rate of a high-speed camera at up to 40,500 frames per second allow low-power CW laser illumination and fibre-optic beam delivery to be used, rather than the high peak power pulsed lasers normally used in ESPI for transient measurement. The technique has been demonstrated in the laboratory and tested in preliminary industrial trials. The ability to measure vibration with high spatial and temporal resolution, which is not provided by techniques such as scanning laser vibrometry, has many applications in manufacturing design, and in an illustrative application described here revealed previously unmeasured “rocking” vibrations of a car door. It has been possible to make the measurement on the door as part of a complete vehicle standing on its own tyres, wheels and suspension, and where the excitation was generated by the running of the vehicle's own engine.

  4. High-speed particle tracking in microscopy using SPAD image sensors

    NASA Astrophysics Data System (ADS)

    Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.

    2018-02-01

    Single photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rates offered by the sensors (>100kFPS), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life science cellular imaging. In this work we consider the use of novel binary SPAD arrays in high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and in intra-cellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live cell imaging in such fields as intercellular communication, cell trafficking and cell signaling.
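
    The external frame-summation scheme for binary SPAD pixels, together with centroid-based particle localisation, might look like this in outline — a toy numpy sketch of the principle, not the sensor's actual read-out path; all names are illustrative:

```python
import numpy as np

def sum_binary_frames(frames):
    """Each binary SPAD frame is a 0/1 photon map; summing n such frames
    externally yields an n-level intensity image (multi-bit exposure)."""
    return frames.sum(axis=0, dtype=np.uint32)

def particle_position(image):
    """Sub-pixel particle localisation by intensity-weighted centroid."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (xs * image).sum() / total, (ys * image).sum() / total
```

    The number of binary frames summed per output image sets the trade-off the abstract alludes to: fewer frames per sum gives a higher effective tracking rate, more frames gives better photon statistics per localisation.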

  5. Overt vs. covert speed cameras in combination with delayed vs. immediate feedback to the offender.

    PubMed

    Marciano, Hadas; Setter, Pe'erly; Norman, Joel

    2015-06-01

    Speeding is a major problem in road safety because it increases both the probability of accidents and the severity of injuries if an accident occurs. Speed cameras are one of the most common speed enforcement tools. Most of the speed cameras around the world are overt, but there is evidence that this can cause a "kangaroo effect" in driving patterns. One suggested alternative to prevent this kangaroo effect is the use of covert cameras. Another issue relevant to the effect of enforcement countermeasures on speeding is the timing of the fine. There is general agreement on the importance of the immediacy of the punishment, however, in the context of speed limit enforcement, implementing such immediate punishment is difficult. An immediate feedback that mediates the delay between the speed violation and getting a ticket is one possible solution. This study examines combinations of concealment and the timing of the fine in operating speed cameras in order to evaluate the most effective one in terms of enforcing speed limits. Using a driving simulator, the driving performance of the following four experimental groups was tested: (1) overt cameras with delayed feedback, (2) overt cameras with immediate feedback, (3) covert cameras with delayed feedback, and (4) covert cameras with immediate feedback. Each of the 58 participants drove in the same scenario on three different days. The results showed that both median speed and speed variance were higher with overt than with covert cameras. Moreover, implementing a covert camera system along with immediate feedback was more conducive to drivers maintaining steady speeds at the permitted levels from the very beginning. Finally, both 'overt cameras' groups exhibit a kangaroo effect throughout the entire experiment. 
It can be concluded that an implementation strategy consisting of covert speed cameras combined with immediate feedback to the offender is potentially an optimal way to motivate drivers to maintain speeds at the speed limit. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
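
    The alignment recovery can be illustrated under a strong simplification: if per-pose orientations of both the camera and the INS were available in a common frame, the fixed camera-to-INS rotation could be averaged out of the pose pairs as below. This is a hypothetical sketch of the underlying rotation-averaging idea only; the actual software solves for the alignment from image data while the aircraft fixates an area on orbit:

```python
import numpy as np

def rot_z(a):
    """Rotation about z by angle a (radians); a minimal helper."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def recover_alignment(R_ins_list, R_cam_list):
    """If R_cam_i = R_ins_i @ R_align for every pose i, each pose pair
    gives an estimate R_ins_i.T @ R_cam_i; sum the estimates and project
    back onto the rotation group with an SVD."""
    M = sum(Ri.T @ Rc for Ri, Rc in zip(R_ins_list, R_cam_list))
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:       # guard against an improper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return R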

  7. Dynamic Uniaxial Tensile Loading of Vector Polymers

    DTIC Science & Technology

    2011-11-01

    to apply the loading velocity to the strip at x = 0 after impact by a steel slug projectile. The flange has two sets of grooves. One set, denoted as...travels down the barrel. The strip is clamped to the outside of the barrel at x = L. A Photron SA1 high-speed video camera with a framing rate of...nominal stress. Equation 1 is expressed in terms of particle displacement to obtain the wave equation

  8. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  9. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    NASA Astrophysics Data System (ADS)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the Moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, and perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
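
    The registration step in the paper uses SIFT features and a homography. As a much simpler stand-in that is valid for pure translation only, phase correlation illustrates frame-to-frame registration with nothing but FFTs — a numpy sketch with illustrative names, not the paper's pipeline:

```python
import numpy as np

def phase_correlation(frame_a, frame_b):
    """Estimate the integer-pixel translation of frame_b relative to
    frame_a from the peak of the inverse FFT of the normalised
    cross-power spectrum. (A simplified stand-in for the SIFT +
    homography step; it only handles pure translation.)"""
    Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12       # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    if dy > h // 2: dy -= h              # map wrap-around to signed shifts
    if dx > w // 2: dx -= w
    return int(dy), int(dx)
```

    Because the work per frame pair is a handful of dense FFTs, this kind of registration (like the SIFT matching it stands in for) maps naturally onto GPU hardware, which is the paper's motivation for GPU acceleration.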

  10. A novel simultaneous streak and framing camera without principle errors

    NASA Astrophysics Data System (ADS)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access, whose complete information is important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena, has been developed. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%~-0.277% for streak records. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously gains frames and streaks with parallax-free imaging and an identical time base, is characterized by a plane optical system at oblique incidence (as opposed to a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.

  11. A trillion frames per second: the techniques and applications of light-in-flight photography.

    PubMed

    Faccio, Daniele; Velten, Andreas

    2018-06-14

    Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability when related to our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on `light in flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow its motion to be frozen, thereby extracting information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.

  12. Evaluation of Particle Image Velocimetry Measurement Using Multi-wavelength Illumination

    NASA Astrophysics Data System (ADS)

    Lai, HC; Chew, TF; Razak, NA

    2018-05-01

    In past decades, particle image velocimetry (PIV) has been widely used in measuring fluid flow, and much research has been done to improve the PIV technique. Much of it concerns high power light emitting diodes (HPLEDs) as replacements for the traditional laser illumination system in PIV. As an extension of that work, two HPLEDs with different wavelengths are introduced here as a PIV illumination system. The objective of this research is to use dual-colour LEDs to directly replace the laser illumination system, so that a single frame can be captured by a normal camera instead of a high speed camera. Dual-colour HPLED PIV supports a single-frame, double-pulse mode that is able to plot the velocity vectors of the particles after correlation. An illumination system is designed, fabricated, and evaluated by measuring water flow in a small tank. The results indicate that HPLEDs promise several advantages in terms of cost, safety and performance, and have high potential to be developed into an alternative PIV illumination source in the near future.

  13. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    PubMed

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
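
    The Kalman filtering stage can be illustrated with a minimal constant-velocity filter over a scalar camera-translation track; the noise parameters q and r below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kalman_smooth(zs, q=1e-4, r=0.25, dt=1.0):
    """Constant-velocity Kalman filter over scalar position measurements zs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we only observe position
    Q = q * np.eye(2); R = np.array([[r]])
    x = np.array([[zs[0]], [0.0]]); P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x; P = F @ P @ F.T + Q                 # predict
        y = np.array([[z]]) - H @ x                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ y; P = (np.eye(2) - K @ H) @ P     # update
        out.append(float(x[0, 0]))
    return out
```

    In a stabilizer, the difference between the raw and the filtered trajectory is the jitter to compensate per frame.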

  14. The effect of interference on delta modulation encoded video signals

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1979-01-01

    The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Different delta modulators were studied via computer simulation in order to find a satisfactory design. After a suitable delta modulator algorithm was found via simulation, the results were analyzed and the algorithm was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error correction algorithms were tested via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. The final area of investigation concerned finding delta modulators which could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme to be investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit shift registers as well as a high speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.
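
    A linear delta modulator of the kind studied here is simple to sketch; this toy encoder/decoder pair (step size chosen arbitrarily) shows why the decoder can reconstruct the encoder's running estimate exactly from the 1-bit stream alone:

```python
def dm_encode(samples, step=0.2):
    """1-bit delta modulation: emit 1 if the input is above the running
    estimate, else 0, and ramp the estimate by +/-step accordingly."""
    bits, est, track = [], 0.0, []
    for s in samples:
        bit = 1 if s > est else 0
        est += step if bit else -step
        bits.append(bit)
        track.append(est)
    return bits, track

def dm_decode(bits, step=0.2):
    """The decoder mirrors the encoder's integrator, so it reproduces the
    encoder's internal estimate exactly from the bitstream."""
    est, out = 0.0, []
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return out
```

    As long as the input slope stays below one step per sample, the reconstruction tracks the signal to within roughly twice the step size; steeper inputs cause slope overload, which is what the more elaborate adaptive schemes in the study address.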

  15. High-speed multi-exposure laser speckle contrast imaging with a single-photon counting camera

    PubMed Central

    Dragojević, Tanja; Bronzi, Danilo; Varma, Hari M.; Valdes, Claudia P.; Castellvi, Clara; Villa, Federica; Tosi, Alberto; Justicia, Carles; Zappa, Franco; Durduran, Turgut

    2015-01-01

    Laser speckle contrast imaging (LSCI) has emerged as a valuable tool for cerebral blood flow (CBF) imaging. We present a multi-exposure laser speckle imaging (MESI) method which uses high frame-rate acquisition with negligible inter-frame dead time to mimic multiple exposures in a single-shot acquisition series. Our approach takes advantage of the noise-free readout and high sensitivity of a complementary metal-oxide-semiconductor (CMOS) single-photon avalanche diode (SPAD) array to provide real-time speckle contrast measurement with high temporal resolution and accuracy. To demonstrate its feasibility, we provide comparisons between in vivo measurements with both the standard and the new approach performed on a mouse brain, in identical conditions. PMID:26309751
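
    The speckle contrast statistic underlying LSCI is simply the ratio of local standard deviation to local mean of the intensity; a minimal windowed version (the 7 x 7 window size is an arbitrary choice, not the paper's) might look like:

```python
import numpy as np

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma/mean over non-overlapping
    win x win windows; lower K indicates faster flow (more blurring)."""
    h, w = (d // win * win for d in img.shape)          # crop to whole windows
    blocks = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    mean = blocks.mean(axis=(2, 3))
    std = blocks.std(axis=(2, 3))
    return std / mean
```

    In the multi-exposure scheme, K is evaluated over frames summed to different effective exposure times, and the K-vs-exposure curve is fitted to extract flow.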

  16. Effects of automated speed enforcement in Montgomery County, Maryland, on vehicle speeds, public opinion, and crashes.

    PubMed

    Hu, Wen; McCartt, Anne T

    2016-09-01

    In May 2007, Montgomery County, Maryland, implemented an automated speed enforcement program, with cameras allowed on residential streets with speed limits of 35 mph or lower and in school zones. In 2009, the state speed camera law increased the enforcement threshold from 11 to 12 mph over the speed limit and restricted school zone enforcement hours. In 2012, the county began using a corridor approach, in which cameras were periodically moved along the length of a roadway segment. The long-term effects of the speed camera program on travel speeds, public attitudes, and crashes were evaluated. Changes in travel speeds at camera sites from 6 months before the program began to 7½ years after were compared with changes in speeds at control sites in the nearby Virginia counties of Fairfax and Arlington. A telephone survey of Montgomery County drivers was conducted in Fall 2014 to examine attitudes and experiences related to automated speed enforcement. Using data on crashes during 2004-2013, logistic regression models examined the program's effects on the likelihood that a crash involved an incapacitating or fatal injury on camera-eligible roads and on potential spillover roads in Montgomery County, using crashes in Fairfax County on similar roads as controls. About 7½ years after the program began, speed cameras were associated with a 10% reduction in mean speeds and a 62% reduction in the likelihood that a vehicle was traveling more than 10 mph above the speed limit at camera sites. When interviewed in Fall 2014, 95% of drivers were aware of the camera program, 62% favored it, and most had received a camera ticket or knew someone else who had. The overall effect of the camera program in its modified form, including both the law change and the corridor approach, was a 39% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury. 
Speed cameras alone were associated with a 19% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury, the law change was associated with a nonsignificant 8% increase, and the corridor approach provided an additional 30% reduction over and above the cameras. This study adds to the evidence that speed cameras can reduce speeding, which can lead to reductions in speeding-related crashes and crashes involving serious injuries or fatalities.

  17. Management of a patient's gait abnormality using smartphone technology in-clinic for improved qualitative analysis: A case report.

    PubMed

    VanWye, William R; Hoover, Donald L

    2018-05-01

    Qualitative analysis has its limitations as the speed of human movement often occurs more quickly than can be comprehended. Digital video allows for frame-by-frame analysis, and therefore likely more effective interventions for gait dysfunction. Although the use of digital video outside laboratory settings, just a decade ago, was challenging due to cost and time constraints, rapid adoption of smartphones and software applications has made this technology much more practical for clinical usage. A 35-year-old man presented for evaluation with the chief complaint of knee pain 24 months status-post triple arthrodesis following a work-related crush injury. In-clinic qualitative gait analysis revealed gait dysfunction, which was augmented by using a standard iPhone® 3GS camera. After video capture, an iPhone® application (Speed Up TV®, https://itunes.apple.com/us/app/speeduptv/id386986953?mt=8 ) allowed for frame-by-frame analysis. Corrective techniques were employed using in-clinic equipment to develop and apply a temporary heel-to-toe rocker sole (HTRS) to the patient's shoe. Post-intervention video revealed significantly improved gait efficiency with a decrease in pain. The patient was promptly fitted with a permanent HTRS orthosis. This intervention enabled the patient to successfully complete a work conditioning program and progress to job retraining. Video allows for multiple views, which can be further enhanced by using applications for frame-by-frame analysis and zoom capabilities. This is especially useful for less experienced observers of human motion, as well as for establishing comparative signs prior to implementation of training and/or permanent devices.

  18. Application of automatic threshold in dynamic target recognition with low contrast

    NASA Astrophysics Data System (ADS)

    Miao, Hua; Guo, Xiaoming; Chen, Yu

    2014-11-01

    A hybrid photoelectric joint transform correlator can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing low-contrast targets with a photoelectric joint transform correlator, only four to five frames of a dynamic target can be recognized without any processing, because of differences in attitude, brightness, and grayscale between target and template. A CCD camera capturing 25 frames per second is used to record the dynamic target images. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so it plays a very important role in target recognition by optical correlation. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because outline information is broken to some extent; in most cases the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic-threshold processing program multiplies the Otsu threshold of target and template by a scale coefficient of the processed image and combines the result with mathematical morphology. With this improved processing, the optimal threshold for dynamic low-contrast target images can be obtained automatically. The recognition rate of dynamic targets is improved through decreased background-noise effects and increased correlation information. A series of dynamic tank images at a speed of about 70 km/h is adopted as target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With the Otsu threshold, the 80th frame can be recognized, and with the improved automatic threshold processing of the joint images, this number increases to 89 frames. Experimental results show that the improved automatic threshold processing has special application value for the recognition of dynamic targets with low contrast.
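
    The Otsu threshold referred to above is a standard histogram-based criterion (maximize the between-class variance). A compact implementation is sketched below; the scale coefficient the paper multiplies it by is specific to their processed images, so only the classic threshold is shown:

```python
def otsu_threshold(pixels, bins=256):
    """Classic Otsu: pick the threshold maximizing between-class variance."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w_b += hist[t]                      # background weight so far
        if w_b == 0:
            continue
        w_f = total - w_b                   # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean
        m_f = (sum_all - sum_b) / w_f       # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    The paper's modification would then scale this value (e.g., `coeff * otsu_threshold(pixels)`) before binarizing the joint image.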

  19. Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Edstrom Jr., D.; Ruan, J.

    2016-10-09

    We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.

  20. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging

    DOE PAGES

    Benedetti, L. R.; Holder, J. P.; Perkins, M.; ...

    2016-02-26

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. Furthermore, we have developed a device that can be added to the framing camera head to prevent these artifacts.

  1. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging.

    PubMed

    Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P

    2016-02-01

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing camera head to prevent these artifacts.

  2. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, stereo vision or 3D reconstruction with multiple cameras, as well as applications requiring pulsed illumination, require multiple cameras to be synchronized. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
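
    The voltage-based frequency regulation can be illustrated with a toy proportional control loop. The inverse-proportional sensor model below is an assumption for illustration only, not measured NanEye behaviour:

```python
def sync_camera(target_period_us, v=2.0, gain=0.01, steps=200):
    """Proportional control of a self-timed camera's supply voltage.
    Toy sensor model (an assumption): line period = 50.0 / v microseconds,
    i.e., a higher supply voltage makes the sensor's oscillator run faster."""
    for _ in range(steps):
        measured = 50.0 / v                        # measure the line period
        v += gain * (measured - target_period_us)  # too slow -> raise voltage
    return v, 50.0 / v
```

    In the actual system each Slave runs such a loop against the Master's measured line period, which yields both frequency and, with a phase term added, frame-phase lock.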

  3. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
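
    The benefit of the 70 Hz flashing necklace can be sketched as a correlation (lock-in style) detection: constant ambient light correlates to roughly zero against a zero-mean reference wave, while the flashing LED does not. All signal values below are synthetic, and this is only one plausible way to exploit the flashing; the paper does not publish its exact filter:

```python
def locate_led(frames, fs=4000.0, f_led=70.0):
    """Correlate each pixel's time series with a +/-1 reference square wave
    at the LED flash frequency; DC light (sunlight, room lights) integrates
    to ~zero while the flashing LED correlates strongly."""
    n = len(frames)
    ref = [1.0 if (k * f_led / fs) % 1.0 < 0.5 else -1.0 for k in range(n)]
    npix = len(frames[0])
    scores = [sum(frames[k][p] * ref[k] for k in range(n)) for p in range(npix)]
    return max(range(npix), key=lambda p: abs(scores[p]))
```

    The 4 kHz photodiode sampling rate is what makes a clean 70 Hz reference possible; a 60 Hz video camera could not separate the flash from mains-frequency lighting.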

  4. Magnetic field effect on spoke behaviour

    NASA Astrophysics Data System (ADS)

    Hnilica, Jaroslav; Slapanska, Marta; Klein, Peter; Vasina, Petr

    2016-09-01

    Investigations of the non-reactive high power impulse magnetron sputtering (HiPIMS) discharge using high-speed camera imaging, optical emission spectroscopy and electrical probes showed that the plasma is not homogeneously distributed over the target surface, but is concentrated in regions of higher local plasma density, called spokes, rotating above the erosion racetrack. The effect of the magnetic field on spoke behaviour was studied by high-speed camera imaging in a HiPIMS discharge using a 3 inch titanium target. The camera employed enabled us to record two successive images in the same pulse with a time delay of 3 μs between them, which allowed us to determine the number of spokes, the spoke rotation velocity and the spoke rotation frequency. The experimental conditions covered a pressure range from 0.15 to 5 Pa, discharge currents up to 350 A and magnetic fields of 37, 72 and 91 mT. Increasing the magnetic field influenced the number of spokes observed at the same pressure and the same discharge current. Moreover, the investigation revealed different characteristic spoke shapes depending on the magnetic field strength: both diffusive and triangular shapes were observed for the same target material. The spoke rotation velocity was independent of the magnetic field strength. This research has been financially supported by the Czech Science Foundation within the framework of project 15-00863S.
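
    Deriving the spoke rotation frequency from two images separated by a known delay is straightforward; the angles, delay and racetrack radius below are made-up example values, not measurements from this work:

```python
import math

def spoke_motion(theta1_deg, theta2_deg, dt_s, radius_m):
    """Spoke rotation frequency and rim velocity from the spoke's angular
    position in two successive frames taken dt_s apart (assumes the spoke
    advances by less than one full turn between the two frames)."""
    dtheta = (theta2_deg - theta1_deg) % 360.0      # wrapped angular advance
    freq_hz = (dtheta / 360.0) / dt_s               # rotations per second
    velocity = 2.0 * math.pi * radius_m * freq_hz   # tangential speed at radius
    return freq_hz, velocity
```

    With multiple spokes in the pattern, the same arithmetic applies per spoke, and the mode number times this frequency gives the frequency seen by a stationary probe.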

  5. A dual-band adaptor for infrared imaging.

    PubMed

    McLean, A G; Ahn, J-W; Maingi, R; Gray, T K; Roquemore, A L

    2012-05-01

    A novel imaging adaptor providing the capability to extend a standard single-band infrared (IR) camera into a two-color or dual-band device has been developed for application to high-speed IR thermography on the National Spherical Tokamak Experiment (NSTX). Temperature measurement with two-band infrared imaging has the advantage of being mostly independent of surface emissivity, which may vary significantly in the liquid lithium divertor installed on NSTX as compared to that of an all-carbon first wall. In order to take advantage of the high-speed capability of the existing IR camera at NSTX (1.6-6.2 kHz frame rate), a commercial visible-range optical splitter was extensively modified to operate in the medium wavelength and long wavelength IR. This two-band IR adapter utilizes a dichroic beamsplitter, which reflects 4-6 μm wavelengths and transmits 7-10 μm wavelength radiation, each with >95% efficiency and projects each IR channel image side-by-side on the camera's detector. Cutoff filters are used in each IR channel, and ZnSe imaging optics and mirrors optimized for broadband IR use are incorporated into the design. In-situ and ex-situ temperature calibration and preliminary data of the NSTX divertor during plasma discharges are presented, with contrasting results for dual-band vs. single-band IR operation.

  6. Calculus migration characterization during Ho:YAG laser lithotripsy by high-speed camera using suspended pendulum method.

    PubMed

    Zhang, Jian James; Rajabhandharaks, Danop; Xuan, Jason Rongwei; Chia, Ray W J; Hasenberg, Thomas

    2017-07-01

    Calculus migration is a common problem during ureteroscopic laser lithotripsy procedures to treat urolithiasis. Conventional experimental methods to characterize calculus migration utilized a hosting container (e.g., a "V" groove or a test tube). These methods, however, demonstrated large variation and poor detectability, possibly attributable to friction between the calculus and the container on which it was situated. In this study, calculus migration was investigated using a pendulum model suspended underwater to eliminate the aforementioned friction. A high-speed camera was used to study the movement of the calculus, covering zero order (displacement), first order (speed), and second order (acceleration). A commercial pulsed Ho:YAG laser at 2.1 μm, a 365-μm core diameter fiber, and a calculus phantom (Plaster of Paris, 10 × 10 × 10 mm³) were utilized to mimic the laser lithotripsy procedure. The phantom was hung on a stainless steel bar and irradiated by the laser at 0.5, 1.0, and 1.5 J energy per pulse at 10 Hz for 1 s (i.e., 5, 10, and 15 W). Movement of the phantom was recorded by a high-speed camera with a frame rate of 10,000 FPS. The video files were analyzed by a MATLAB program that processes each image frame and obtains the position of the calculus. With a sample size of 10, the maximum displacement was 1.25 ± 0.10, 3.01 ± 0.52, and 4.37 ± 0.58 mm for 0.5, 1, and 1.5 J energy per pulse, respectively. Using the same laser power, the conventional method showed <0.5 mm total displacement. When the phantom size was reduced to 5 × 5 × 5 mm³ (one eighth the volume), the displacement was very inconsistent. The results suggest that using the pendulum model to eliminate friction improved the sensitivity and repeatability of the experiment. A detailed investigation of calculus movement and other causes of experimental variation will be conducted as a future study.
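
    Extracting speed and acceleration from the tracked positions amounts to finite differencing at the camera's frame interval; a minimal central-difference sketch (exercised on an artificial constant-acceleration trace, not the study's data) is:

```python
def derivatives(x, dt):
    """Central-difference velocity and acceleration from a sampled
    displacement trace, e.g., positions tracked frame by frame at 1/dt FPS.
    Returns lists aligned with samples 1..len(x)-2."""
    v = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, len(x) - 1)]
    a = [(x[i + 1] - 2 * x[i] + x[i - 1]) / dt ** 2 for i in range(1, len(x) - 1)]
    return v, a
```

    At 10,000 FPS (dt = 100 μs), pixel-level tracking noise is strongly amplified by the second difference, which is why such analyses usually smooth the position trace before differentiating.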

  7. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have more advantages than traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is the XC6SL75T, and we take advantage of a CameraLink interface and the AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, 500 frames per second high-speed CMOS image sensor with a global shutter and a 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which can convert 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is photographed by the CMOS detectors, which convert the light to electronic signals and send them to the FPGA. The FPGA processes the data it receives and transmits it through the CameraLink interface, configured in full mode, to an upper computer equipped with acquisition cards, where the images are then stored, visualized, and processed. This paper explains the structure and principle of the system and introduces its hardware and software design. The FPGA provides the drive clock for the CMOS sensor; the data from the CMOS sensor are converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.

  8. SEOS frame camera applications study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A research and development satellite is discussed which will provide opportunities for observation of transient phenomena that fall within the fixed viewing circle of the spacecraft. Possible applications of frame cameras for SEOS are evaluated. The computed lens characteristics for each camera are listed.

  9. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs-based image sensor and associated camera have been developed. The sensor and camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
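
    The on-board memory bounds the continuous recording time at full frame and full rate. A back-of-the-envelope calculation, assuming 2 bytes stored per pixel (an assumption; the actual packing is not stated in the record), gives roughly 15 s:

```python
def record_time_s(width, height, fps, mem_bytes, bytes_per_px=2):
    """On-board recording time: frames that fit in memory / frame rate.
    bytes_per_px=2 is an assumption (such sensors typically digitize to
    12-14 bits, commonly stored as 2 bytes per pixel)."""
    frame_bytes = width * height * bytes_per_px
    return mem_bytes / frame_bytes / fps
```

    Reading out smaller sub-windows raises the frame rate but shrinks the per-frame size, so the recording time scales accordingly.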

  10. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec; the system consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  11. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensors. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  12. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high-frame-rate visible CCD camera capable of operating at up to 200 frames per second is described. The camera produces a 256 X 256 pixel image using one quadrant of a 512 X 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correctly ordered 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
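    The digital reformatting step can be illustrated with a minimal sketch. The port geometry below (four 64-column strips, odd-numbered ports read out mirrored) is an assumption for illustration; the actual readout order of the 16-port imager is not specified in the abstract:

```python
import numpy as np

def reformat_quadrant(port_data):
    """Reassemble four 256x64 port readouts into one 256x256 image.

    Assumes each port reads a contiguous 64-column strip and that
    alternate ports are read out mirrored (a common multiport layout);
    the real port geometry is an assumption here.
    """
    strips = []
    for i, strip in enumerate(port_data):
        if i % 2 == 1:
            strip = strip[:, ::-1]  # un-mirror odd-numbered ports
        strips.append(strip)
    return np.hstack(strips)     # contiguous strips side by side
```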

  13. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster-than-walking speed outdoors, but it has no suspension. Its wheels, with inflated rubber tires, are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame, or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot as it drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways: first, a standalone head stabilizer has been implemented, and second, the estimates have been used to influence the search window of the stereo tracking algorithm. Studies of the image motion of a tracked object indicate that the image motion of objects is suppressed while the robot crosses rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm-gesture commands from the geologist.
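    As an illustration of the sensor-fusion idea, the sketch below fuses gyro rates with an accelerometer-derived gravity reference to estimate pitch. It uses a simple complementary filter as a stand-in for the Kalman filter described; the function name, gain, and sign conventions are assumptions:

```python
import math

def complementary_pitch(gyro_rates, accels, dt, alpha=0.98):
    """Fuse gyro pitch rate (rad/s) with accelerometer pairs (ax, az in m/s^2)
    into a pitch estimate -- a minimal complementary-filter stand-in for the
    Kalman filter described in the paper."""
    pitch = 0.0
    estimates = []
    for q, (ax, az) in zip(gyro_rates, accels):
        accel_pitch = math.atan2(-ax, az)                    # gravity-referenced pitch
        # integrate the gyro, then pull slowly toward the accelerometer reference
        pitch = alpha * (pitch + q * dt) + (1 - alpha) * accel_pitch
        estimates.append(pitch)
    return estimates
```

    The high-pass/low-pass split (the `alpha` blend) is what suppresses gyro drift while rejecting short accelerometer transients from terrain bumps.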

  14. Simultaneous visualization of transonic buffet on a rocket fairing model using unsteady PSP measurement and Schlieren method

    NASA Astrophysics Data System (ADS)

    Nakakita, K.

    2017-02-01

    A technique for simultaneous visualization combining unsteady pressure-sensitive paint (PSP) and Schlieren measurements is introduced. It was applied to a wind tunnel test of a rocket fairing model at the JAXA 2 m x 2 m transonic wind tunnel. The quantitative unsteady pressure field was acquired by the unsteady PSP measurement, which consisted of a high-speed camera, a high-power laser diode, and so on. The qualitative flow structure was acquired by the Schlieren measurement using a high-speed camera and a xenon lamp with a blue optical filter. Simultaneous visualization was achieved at a 1.6 kfps frame rate, and it revealed the detailed structure of the unsteady flow fields caused by shock-wave oscillation due to shock-wave/boundary-layer interaction around the juncture between the cone and cylinder on the model. The simultaneous measurement results were merged into a movie showing both the surface pressure distribution on the rocket fairing and the spatial structure of the shock-wave system associated with transonic buffet. The constructed movie gives time-resolved, global information on the transonic buffet flow field on the rocket fairing model.

  15. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages.

    PubMed

    Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan

    2015-04-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.

  16. Hardware accelerator design for change detection in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions, selecting frames with significant changes in order to minimize communication and processing overhead. Among the many algorithms for change detection, one based on a clustering scheme was proposed for smart camera systems. However, such an algorithm achieves a low frame rate, far from real-time requirements, on the general-purpose processors (such as the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change detection scheme. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
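    A software sketch of a clustering-based change detector of this general kind is given below; the cluster count, threshold, and update rate are assumptions for illustration, not the accelerated algorithm's actual parameters:

```python
import numpy as np

def detect_change(frame, clusters, threshold=20.0):
    """Per-pixel clustering change detection (simplified sketch).

    `clusters` is an (H, W, K) array of per-pixel gray-level cluster
    centres. A pixel is flagged as changed when no centre lies within
    `threshold` of its current value; otherwise the nearest matching
    centre is updated with a running average.
    """
    dist = np.abs(clusters - frame[..., None])          # (H, W, K) distances
    nearest = dist.argmin(axis=-1)
    matched = dist.min(axis=-1) <= threshold
    ii, jj = np.indices(frame.shape)
    centres = clusters[ii, jj, nearest]
    # running-average update only where a centre matched
    clusters[ii, jj, nearest] = np.where(matched, 0.9 * centres + 0.1 * frame, centres)
    return ~matched                                     # True where the scene changed
```

    In the FPGA realization each pixel's centres would live in on-chip memory and the distance test and update become a short fixed-point pipeline, which is what makes the 30 fps QVGA rate achievable.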

  17. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor

    PubMed Central

    Rueckauer, Bodo; Delbruck, Tobi

    2016-01-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
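    The rotation-only ground truth rests on the fact that rotational image motion is independent of scene depth. For a pinhole camera and a normalized image point, the flow induced by angular rates (wx, wy, wz) can be sketched as below; sign conventions vary between references, and this is one common form, not necessarily the exact convention used with the DAVIS gyro data:

```python
def rotational_flow(x, y, wx, wy, wz, f=1.0):
    """Optical flow (u, v) at normalized image point (x, y) for a camera
    rotating at (wx, wy, wz) rad/s about its own axes. Pure rotation is
    depth-independent, which is why gyro data alone suffices as ground truth."""
    u = x * y * wx - (f + x * x / f) * wy + y * wz
    v = (f + y * y / f) * wx - x * y * wy - x * wz
    return u, v
```

    At the image centre, for instance, a pure yaw rate produces horizontal flow only, and a pure roll rate produces tangential flow that vanishes at the principal point.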

  18. Edge Turbulence Imaging in Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Zweben, Stewart J.

    2001-10-01

    This talk will describe measurements and modeling of the 2-D structure of edge turbulence in Alcator C-Mod. The radial vs. poloidal structure was measured using Gas Puff Imaging (GPI) (R. Maqueda et al, RSI 72, 931 (2001), J. Terry et al, J. Nucl. Materials 290-293, 757 (2001)), in which the visible light emitted by an edge neutral gas puff (generally D or He) is viewed along the local magnetic field by a fast-gated video camera. Strong fluctuations are observed in the gas cloud light emission when the camera is gated at ~2 microsec exposure time per frame. The structure of these fluctuations is highly turbulent with a typical radial and poloidal scale of ≈1 cm, and often with local maxima in the scrape-off layer (i.e. ``blobs"). Video clips and analyses of these images will be presented along with their variation in different plasma regimes. The local time dependence of edge turbulence is measured using high-speed photodiodes viewing the gas puff emission, a scanning Langmuir probe, and also with a Princeton Scientific Instruments ultra-fast framing camera, which can make 2-D images of the gas puff at up to 200,000 frames/sec. Probe measurements show that the strong turbulence region moves to the separatrix as the density limit is approached, which may be connected to the density limit (B. LaBombard et al., Phys. Plasmas 8 2107 (2001)). Comparisons of this C-Mod turbulence data will be made with results of simulations from the Drift-Ballooning Mode (DBM) (B.N. Rogers et al, Phys. Rev. Lett. 20 4396 (1998)) and Non-local Edge Turbulence (NLET) codes.

  19. Observation of a spark channel generated in water with shock wave assistance in plate-to-plate electrode configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stelmashuk, V., E-mail: vitalij@ipp.cas.cz

    2014-01-15

    When a high-voltage pulse with an amplitude of 30 kV is applied to a pair of disk electrodes at the moment a shock wave is passing between them, an electrical spark is generated. The dynamic changes in the spark morphology are studied here using a high-speed framing camera. The primary result of this work is experimental evidence of a plasma instability observed in the channel of the electric spark.

  20. High-speed image processing system and its micro-optics application

    NASA Astrophysics Data System (ADS)

    Ohba, Kohtaro; Ortega, Jesus C. P.; Tanikawa, Tamio; Tanie, Kazuo; Tajima, Kenji; Nagai, Hiroshi; Tsuji, Masataka; Yamada, Shigeru

    2003-07-01

    In this paper, a new application of high-speed photography, an observational system for tele-micro-operation, is proposed, combining a dynamic focusing system with a high-speed image processing system based on the "depth from focus (DFF)" criterion. In micro-operation, such as microsurgery or DNA manipulation, the small depth of focus of the microscope hampers observation. For example, if the focus is on the object, the actuator cannot be seen with the microscope; if the focus is on the actuator, the object cannot be observed. In this sense, an "all-in-focus image," which keeps the texture in focus over the whole image, is useful for observing microenvironments under the microscope. It is also important to obtain a "depth map," which can show the 3D micro virtual environment in real time so that micro objects can be actuated intuitively. To realize real-time micro-operation with the DFF criterion, which must integrate several images to obtain the all-in-focus image and the depth map, an image capture and processing system running at no less than 240 frames per second is required. This paper first briefly reviews the depth-from-focus criterion used to achieve the all-in-focus image and the 3D microenvironment reconstruction simultaneously. After discussing the problems in our previous system, a new frame-rate system is constructed from a high-speed video camera and FPGA hardware operating at 240 frames per second. To apply this system to a real microscope, a new "ghost filtering" technique for reconstructing the all-in-focus image is proposed. Finally, a micro observation demonstrates the validity of the system.
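    The DFF criterion itself can be sketched compactly: compute a focus measure per pixel across the focal stack, take the index of the sharpest frame as the depth map, and gather pixels from those frames to form the all-in-focus image. The gradient-energy focus measure below is a common choice and an assumption here; the paper's ghost-filtering refinement is not reproduced:

```python
import numpy as np

def depth_from_focus(stack):
    """All-in-focus image and depth map from a focal stack of shape (N, H, W).

    Uses local gradient energy as the focus measure; this measure and the
    simple argmax depth rule are illustrative assumptions.
    """
    gy, gx = np.gradient(stack.astype(float), axis=(1, 2))
    focus = gx ** 2 + gy ** 2                  # per-pixel focus measure
    depth = focus.argmax(axis=0)               # index of the sharpest frame
    all_in_focus = np.take_along_axis(stack, depth[None], axis=0)[0]
    return all_in_focus, depth
```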

  1. Vocal fold vibrations: high-speed imaging, kymography, and acoustic analysis: a preliminary report.

    PubMed

    Larsson, H; Hertegård, S; Lindestad, P A; Hammarberg, B

    2000-12-01

    To evaluate a new analysis system, High-Speed Tool Box (H. Larsson, custom-made program for image analysis, version 1.1, Department of Logopedics and Phoniatrics, Huddinge University Hospital, Huddinge, Sweden, 1998) for studying vocal fold vibrations using a high-speed camera and to relate findings from these analyses to sound characteristics. A Weinberger Speedcam + 500 system (Weinberger AG, Dietikon, Switzerland) was used with a frame rate of 1,904 frames per second. Images were stored and analyzed digitally. Analysis included automatic glottal edge detection and calculation of glottal area variations, as well as kymography. These signals were compared with acoustic waveforms using the Soundswell program (Hitech Development AB, Stockholm, Sweden). The High-Speed Tool Box was applied on two types of high-speed recordings: a diplophonic phonation and a tremor voice. Relations between glottal vibratory patterns and the sound waveform were analyzed. In the diplophonic phonation, the glottal area waveform, as well as the kymogram, showed a specific pattern of repetitive glottal closures, which was also seen in the acoustic waveform. In the tremor voice, fundamental frequency (F0) fluctuations in the acoustic waveform were reflected in slow variations in amplitude in the glottal area waveform. For studying details of mucosal movements during these kinds of abnormal vibrations, the glottal area waveform was particularly useful. Our results suggest that this combined high-speed acoustic-kymographic analysis package is a promising aid for separating and specifying different voice qualities such as diplophonia and voice tremor. Apart from clinical use, this finding should be of help for specification of the terminology of different voice qualities.

  2. Multi-MHz laser-scanning single-cell fluorescence microscopy by spatiotemporally encoded virtual source array

    PubMed Central

    Wu, Jianglai; Tang, Anson H. L.; Mok, Aaron T. Y.; Yan, Wenwei; Chan, Godfrey C. F.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2017-01-01

    Apart from spatial resolution enhancement, scaling the temporal resolution, and equivalently the imaging throughput, of fluorescence microscopy is of equal importance in advancing cell biology and clinical diagnostics. Yet this attribute has mostly been overlooked because of the inherent speed limitation of existing imaging strategies. To address the challenge, we employ an all-optical laser-scanning mechanism, enabled by an array of reconfigurable spatiotemporally encoded virtual sources, to demonstrate ultrafast fluorescence microscopy at a line-scan rate as high as 8 MHz. We show that this technique enables high-throughput single-cell microfluidic fluorescence imaging at 75,000 cells/second and high-speed cellular 2D dynamical imaging at 3,000 frames per second, outperforming the state-of-the-art high-speed cameras and the gold-standard laser scanning strategies. Together with its wide compatibility with existing imaging modalities, this technology could empower new forms of high-throughput and high-speed biological fluorescence microscopy that were once challenging. PMID:28966855

  3. Real-Time Detection of Sporadic Meteors in the Intensified TV Imaging Systems.

    PubMed

    Vítek, Stanislav; Nasyrova, Maria

    2017-12-29

    The automatic observation of the night sky through wide-angle video systems, with the aim of detecting meteors and fireballs, is now among routine astronomical observations. Observation is usually done in multi-station or network mode, so that the direction and speed of the body's flight can be estimated. The high velocity of a meteor flying through the atmosphere dictates an important feature of these camera systems, namely a high frame rate. Because of their high frame rates, such imaging systems produce a large amount of data, of which only a small fraction has scientific potential. This paper focuses on methods for the real-time detection of fast-moving objects in video sequences recorded by intensified TV systems with frame rates of about 60 frames per second. The goal of our effort is to discard all unnecessary data during the daytime and free up hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper.
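    A minimal sketch of the detection idea — flag pixels whose frame-to-frame difference rises well above the background noise — is given below; the threshold rule is an assumption, and the real MAIA pipeline is considerably more elaborate:

```python
import numpy as np

def moving_object_mask(prev_frame, frame, k=5.0):
    """Flag fast-moving bright objects (e.g. meteor streaks) by thresholding
    the frame difference at k standard deviations above its mean.
    A simplified illustration, not the MAIA detection algorithm itself."""
    diff = frame.astype(float) - prev_frame.astype(float)
    thresh = diff.mean() + k * diff.std()
    return diff > thresh   # boolean mask of candidate moving pixels
```

    Candidate masks from several consecutive frames would then be linked into tracks, so that sensor noise and stationary stars are rejected while a fast streak survives.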

  5. Monitoring and analysis of thermal deformation waves with a high-speed phase measurement system.

    PubMed

    Taylor, Lucas; Talghader, Joseph

    2015-10-20

    Thermal effects in optical substrates are vitally important in determining laser damage resistance in long-pulse and continuous-wave laser systems. Thermal deformation waves in a soda-lime-silica glass substrate have been measured using high-speed interferometry during a series of laser pulses incident on the surface. Two-dimensional images of the thermal waves were captured at a rate of up to six frames per thermal event using a quantitative phase measurement method. The system comprised a Mach-Zehnder interferometer, along with a high-speed camera capable of up to 20,000 frames per second. The sample was placed in the interferometer and irradiated with 100 ns, 2 kHz Q-switched pulses from a high-power Nd:YAG laser operating at 1064 nm. Phase measurements were converted to temperature using known values of thermal expansion and temperature-dependent refractive index for glass. The thermal decay at the center of the thermal wave was fit to a function derived from first principles with excellent agreement. Additionally, the spread of the thermal distribution over time was fit to the same function. Both the temporal decay fit and the spatial fit produced a thermal diffusivity of 5 × 10⁻⁷ m²/s.
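    The phase-to-temperature conversion can be sketched for a single-pass transmission geometry: the measured phase shift is proportional to the temperature-induced change in optical path, which combines the thermo-optic coefficient dn/dT and thermal expansion. The material constants below are nominal soda-lime-glass values assumed for illustration, not the paper's calibration:

```python
import math

def phase_to_temperature(dphi, wavelength_m, thickness_m,
                         dn_dT=9.6e-6, alpha=9e-6, n=1.5):
    """Convert a measured phase shift (rad) to a temperature rise (K),
    assuming a single-pass transmission geometry and a uniformly heated
    path. All material constants are nominal assumed values."""
    # optical path length change per kelvin: thermo-optic + expansion terms
    opl_per_K = thickness_m * (dn_dT + (n - 1.0) * alpha)
    return dphi * wavelength_m / (2.0 * math.pi * opl_per_K)
```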

  6. High-speed multi-frame laser Schlieren for visualization of explosive events

    NASA Astrophysics Data System (ADS)

    Clarke, S. A.; Murphy, M. J.; Landon, C. D.; Mason, T. A.; Adrian, R. J.; Akinci, A. A.; Martinez, M. E.; Thomas, K. A.

    2007-09-01

    High-Speed Multi-Frame Laser Schlieren is used for visualization of a range of explosive and non-explosive events. Schlieren is a well-known technique for visualizing shock phenomena in transparent media. Laser backlighting and a framing camera allow for Schlieren images with very short (down to 5 ns) exposure times, band pass filtering to block out explosive self-light, and 14 frames of a single explosive event. This diagnostic has been applied to several explosive initiation events, such as exploding bridgewires (EBW), Exploding Foil Initiators (EFI) (or slappers), Direct Optical Initiation (DOI), and ElectroStatic Discharge (ESD). Additionally, a series of tests have been performed on "cut-back" detonators with varying initial pressing (IP) heights. We have also used this Diagnostic to visualize a range of EBW, EFI, and DOI full-up detonators. The setup has also been used to visualize a range of other explosive events, such as explosively driven metal shock experiments and explosively driven microjets. Future applications to other explosive events such as boosters and IHE booster evaluation will be discussed. Finite element codes (EPIC, CTH) have been used to analyze the schlieren images to determine likely boundary or initial conditions to determine the temporal-spatial pressure profile across the output face of the detonator. These experiments are part of a phased plan to understand the evolution of detonation in a detonator from initiation shock through run to detonation to full detonation to transition to booster and booster detonation.

  7. Imaging of optically diffusive media by use of opto-elastography

    NASA Astrophysics Data System (ADS)

    Bossy, Emmanuel; Funke, Arik R.; Daoudi, Khalid; Tanter, Mickael; Fink, Mathias; Boccara, Claude

    2007-02-01

    We present a camera-based optical detection scheme designed to detect the transient motion created by the acoustic radiation force in elastic media. An optically diffusive tissue-mimicking phantom was illuminated with coherent laser light, and a high-speed camera (2 kHz frame rate) was used to acquire and cross-correlate consecutive speckle patterns. Time-resolved transient decorrelations of the optical speckle were measured as the result of localised motion induced in the medium by the radiation force and subsequent propagating shear waves. As opposed to classical acousto-optic techniques, which are sensitive to vibrations induced by compressional waves at ultrasonic frequencies, the proposed technique is sensitive only to the low-frequency transient motion induced in the medium by the radiation force. It therefore provides a way to assess both optical and shear mechanical properties.
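    The decorrelation measurement reduces to a normalized correlation between consecutive speckle frames, which can be sketched as follows (a drop of the coefficient below 1 marks transient motion in the medium; the function name is illustrative):

```python
import numpy as np

def speckle_correlation(frames):
    """Zero-mean normalized correlation between each pair of consecutive
    speckle frames. Identical frames give 1.0; decorrelated speckle from
    motion in the medium drives the value toward 0."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        a = a - a.mean()
        b = b - b.mean()
        out.append(float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())))
    return out
```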

  8. Design and Construction of an X-ray Lightning Camera

    NASA Astrophysics Data System (ADS)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 7.62-cm diameter NaI(Tl) scintillation detectors, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays that are less than 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. All together, the camera weighs about 550-kg and is approximately 1.2-m x 0.6-m x 0.6-m. The image plane, which is adjustable, was placed 32-cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44-m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.

  9. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters used to evaluate their performance, and describes key features of the different camera formats. The chapter also presents a basic understanding of how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although many types of cameras are available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, slow-scan CCD cameras need not be considered. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging of very dim specimens is required, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are attractive options when one needs to acquire images at video rate as well as with longer integration times for dimmer samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.

  10. Video-based measurements for wireless capsule endoscope tracking

    NASA Astrophysics Data System (ADS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded-up robust features (SURF) from video frames, registration of consecutive frames based on the random sample consensus (RANSAC) algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by applying this method to wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions.
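    The consensus step of such a registration can be sketched for the simplest motion model, pure translation. Feature extraction (e.g., SURF) is assumed to have already produced the matched keypoint pairs; the function name, iteration count, and inlier tolerance below are arbitrary assumptions:

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iter=200, tol=2.0, seed=0):
    """Estimate the dominant 2-D displacement between matched keypoint
    arrays (N, 2) with RANSAC. A single correspondence fully determines a
    pure-translation model, so each hypothesis uses one random match."""
    rng = np.random.default_rng(seed)
    best_shift, best_inliers = np.zeros(2), 0
    for _ in range(n_iter):
        i = rng.integers(len(pts_a))
        shift = pts_b[i] - pts_a[i]                       # 1-point hypothesis
        err = np.linalg.norm(pts_b - (pts_a + shift), axis=1)
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_shift = inliers, shift
    # refine by averaging over the final consensus set
    err = np.linalg.norm(pts_b - (pts_a + best_shift), axis=1)
    keep = err < tol
    return (pts_b[keep] - pts_a[keep]).mean(axis=0)
```

    A full frame-to-frame registration would use a similarity or homography model (4 points per hypothesis) to recover rotation as well, but the outlier-rejection logic is the same.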

  11. A Spatio-Spectral Camera for High-Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  12. Combined High-Speed 3D Scalar and Velocity Reconstruction of Hairpin Vortex

    NASA Astrophysics Data System (ADS)

    Sabatino, Daniel; Rossmann, Tobias; Zhu, Xuanyu; Thorsen, Mary

    2017-11-01

    The combination of 3D scanning stereoscopic particle image velocimetry (PIV) and 3D Planar Laser Induced Fluorescence (PLIF) is used to create high-speed three-dimensional reconstructions of the scalar and velocity fields of a developing hairpin vortex. The complete description of the regenerating hairpin vortex is needed as transitional boundary layers and turbulent spots are both comprised of and influenced by these vortices. A new high-speed, high power, laser-based imaging system is used which enables both high-speed 3D scanning stereo PIV and PLIF measurements. The experimental system uses a 250 Hz scanning mirror, two high-speed cameras with a 10 kHz frame rate, and a 40 kHz pulsed laser. Individual stereoscopic PIV images and scalar PLIF images are then reconstructed into time-resolved volumetric velocity and scalar data. The results from the volumetric velocity and scalar fields are compared to previous low-speed tomographic PIV data and scalar visualizations to determine the accuracy and fidelity of the high-speed diagnostics. Comparisons between the velocity and scalar field during hairpin development and regeneration are also discussed. Supported by the National Science Foundation under Grant CBET-1531475, Lafayette College,and the McCutcheon Foundation.

  13. High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform

    PubMed Central

    Chan, Kenny K. H.; Tang, Shuo

    2010-01-01

    The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by greater than 5dB concurrently with a 30 fold decrease in processing time compared to the fast Fourier transform with cubic spline interpolation method. NFFT can also improve local signal to noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
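    The baseline that NFFT improves upon — resampling the spectrum from a uniform wavelength grid onto a uniform wavenumber grid and then applying the FFT — can be sketched as below. Linear interpolation stands in for the cubic spline of the baseline method, and the grids are illustrative; NFFT itself avoids the resampling step entirely:

```python
import numpy as np

def resample_and_fft(spectrum, wavelengths):
    """Depth profile from an SD-OCT spectrum sampled uniformly in wavelength.

    Resamples onto a uniform wavenumber (k) grid, then FFTs to depth space.
    Linear interpolation is used here in place of the cubic spline; the
    interpolation error it introduces is what causes the depth-dependent
    sensitivity fall-off that NFFT mitigates."""
    k = 2 * np.pi / wavelengths                          # nonuniform wavenumbers
    k_uniform = np.linspace(k.min(), k.max(), len(k))
    # k decreases as wavelength increases, so reverse for np.interp
    resampled = np.interp(k_uniform, k[::-1], spectrum[::-1])
    return np.abs(np.fft.fft(resampled))
```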

  14. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the most dominant modes of a vibration signal, but may also introduce non-physical modes caused by an insufficient frame rate. Using a simple model, the frequencies of these modes are predicted and excluded. Two experimental designs, one using an LED light source and one using a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (256 levels, for instance) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but none eliminates them completely. Two examples, whose prominent vibration modes lie below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that these non-contact image data acquisition systems are promising tools for collecting low-frequency vibration signals.

  15. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the most dominant modes of a vibration signal, but may also introduce non-physical modes caused by an insufficient frame rate. Using a simple model, the frequencies of these modes are predicted and excluded. Two experimental designs, one using an LED light source and one using a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (256 levels, for instance) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but none eliminates them completely. Two examples, whose prominent vibration modes lie below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that these non-contact image data acquisition systems are promising tools for collecting low-frequency vibration signals. PMID:23202026
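    The "simple model" for predicting non-physical modes is, in essence, the standard aliasing relation: a vibration component above half the frame rate folds back into the measurable band. A minimal sketch (the function name and the 70 Hz / 60 fps example are illustrative, not values from the paper):

```python
def aliased_frequency(f_true, frame_rate):
    """Apparent frequency of a tone sampled at `frame_rate`, folded into
    the measurable band [0, frame_rate / 2]."""
    return abs(f_true - round(f_true / frame_rate) * frame_rate)

# A 70 Hz vibration filmed at 60 fps masquerades as a 10 Hz mode:
print(aliased_frequency(70.0, 60.0))   # 10.0
```

    Evaluating this for each suspected true frequency tells the analyst which spectral peaks in the camera data are physical and which should be excluded.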

  16. Visible camera imaging of plasmas in Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine will study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full-frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. under contract DE-AC05-00OR22725.

  17. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably compared with other contemporary approaches. This paper describes several real-time contrast enhancing systems developed at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to quantify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm x 6.4 cm x 3.2 cm, operates off 9 VAC, and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics, and real-time medical imaging.
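    The abstract does not specify which histogram-based algorithm the LUT encodes, so the sketch below stands in plain histogram equalization, computed on a spatially subsampled frame and applied to the full frame through a 256-entry LUT, mirroring the subsample-then-LUT structure described above:

```python
import numpy as np

def build_equalization_lut(frame, step=4):
    """Histogram-equalization LUT computed from a spatially subsampled
    frame (subsampling cuts the per-frame histogram cost, mirroring the
    enhancer's design)."""
    sub = frame[::step, ::step]                      # spatial subsampling
    hist = np.bincount(sub.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                   # normalize CDF to [0, 1]
    return (cdf * 255.0).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(100, 156, size=(480, 640), dtype=np.uint8)  # low contrast
lut = build_equalization_lut(frame)
enhanced = lut[frame]                                # single LUT pass per pixel
```

    In hardware, the LUT from frame N would be written during blanking and applied to frame N+1, so the per-pixel path stays a single table lookup.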

  18. Wide-field Fourier ptychographic microscopy using laser illumination source

    PubMed Central

    Chung, Jaebum; Lu, Hangwen; Ou, Xiaoze; Zhou, Haojiang; Yang, Changhuei

    2016-01-01

    Fourier ptychographic (FP) microscopy is a coherent imaging method that can synthesize an image with a higher bandwidth using multiple low-bandwidth images captured at different spatial frequency regions. The method’s demand for multiple images drives the need for a brighter illumination scheme and a high-frame-rate camera for a faster acquisition. We report the use of a guided laser beam as an illumination source for an FP microscope. It uses a mirror array and a 2-dimensional scanning Galvo mirror system to provide a sample with plane-wave illuminations at diverse incidence angles. The use of a laser presents speckles in the image capturing process due to reflections between glass surfaces in the system. They appear as slowly varying background fluctuations in the final reconstructed image. We are able to mitigate these artifacts by including a phase image obtained by differential phase contrast (DPC) deconvolution in the FP algorithm. We use a 1-Watt laser configured to provide a collimated beam with 150 mW of power and beam diameter of 1 cm to allow for the total capturing time of 0.96 seconds for 96 raw FPM input images in our system, with the camera sensor’s frame rate being the bottleneck for speed. We demonstrate a factor of 4 resolution improvement using a 0.1 NA objective lens over the full camera field-of-view of 2.7 mm by 1.5 mm. PMID:27896016

  19. Real time heart rate variability assessment from Android smartphone camera photoplethysmography: Postural and device influences.

    PubMed

    Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A

    2015-01-01

    The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU has been used through the Renderscript API for high-performance frame-by-frame image acquisition and computation in order to obtain the PPG signal and the PP interval time series. The relative error of mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices has been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found, the SDE being lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time processing of Android camera frames.
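    The abstract does not list the exact HRV indices computed; the sketch below derives common time-domain indices (mean heart rate, SDNN, RMSSD) from a PP interval series, with illustrative interval values rather than data from the study:

```python
import numpy as np

def hrv_indices(pp_ms):
    """Common time-domain HRV indices from a pulse-to-pulse interval
    series in milliseconds (illustrative selection of indices)."""
    pp = np.asarray(pp_ms, dtype=np.float64)
    mean_hr = 60000.0 / pp.mean()                  # beats per minute
    sdnn = pp.std(ddof=1)                          # overall variability (ms)
    rmssd = np.sqrt(np.mean(np.diff(pp) ** 2))     # beat-to-beat variability (ms)
    return mean_hr, sdnn, rmssd

# Illustrative PP intervals (ms), not data from the study:
mean_hr, sdnn, rmssd = hrv_indices([820, 810, 835, 790, 805, 825])
```

    Because RMSSD is built from successive differences, a beat-to-beat timing error (SDE) of several milliseconds propagates into it directly, which is why the paper reports SDE per posture and device.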

  20. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident, and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after the image capture operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom of wiring on the front side 3). The BSI structure has the further advantage that additional layers, such as scintillators, can be attached to the backside with less difficulty. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in the integration.

  1. High speed imaging, lightning mapping arrays and thermal imaging: a synergy for the monitoring of electrical discharges at the onset of volcanic explosions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Cimarelli, Corrado; Behnke, Sonja; Cigala, Valeria; Edens, Harald; McNutt, Stefen; Smith, Cassandra; Thomas, Ronald; Van Eaton, Alexa

    2017-04-01

    Volcanic lightning is being increasingly studied due to its great potential for the detection and monitoring of ash plumes. Indeed, it is observed in a large number of ash-rich volcanic eruptions, and it produces electromagnetic waves that can be detected remotely in all weather conditions. Electrical discharges in a volcanic plume can also significantly change the structural, chemical, and reactivity properties of the erupted material. Although electrical discharges are detected in various regions of the plume, those happening at the onset of an explosion are of particular relevance for early warning and for the study of volcanic jet dynamics. In order to better constrain the electrical activity of young volcanic plumes, we deployed at Sakurajima (Japan) in 2015 a multiparametric set-up including: i) a lightning mapping array (LMA) of 10 VHF antennas recording the electromagnetic waves produced by lightning at a sample rate of 25 Msps; ii) a visible-light high-speed camera (5000 frames per second, 0.5 m pixel size, 300 m field of view) shooting short movies (approx. duration 1 s) at different stages of the plume evolution, showing the location of discharges in relation to the plume; and iii) a thermal camera (25 fps, 1.5 m pixel size, 800 m field of view) continuously recording the plume and allowing the estimation of its main source parameters (volume, rise velocity, mass eruption rate). The complementarity of these three setups is demonstrated by comparing and aggregating the data at various stages of the plume development. In the earliest stages, the high-speed camera spots discrete small discharges that appear in the LMA data as peaks superimposed on the continuous radio frequency (CRF) signal. At later stages, flashes happen less frequently and increase in length. The correspondence between high-speed camera and LMA data makes it possible to establish a direct correlation between the length of a flash and the intensity of its electromagnetic signal.
This correlation is used to estimate the evolution of the total number of discharges within a volcanic plume, while superimposing the thermal and high-speed videos makes it possible to place the flash locations in the context of plume features and dynamics.

  2. Ambient-Light-Canceling Camera Using Subtraction of Frames

    NASA Technical Reports Server (NTRS)

    Morookian, John Michael

    2004-01-01

    The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period.
Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and would be illuminated with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
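    The frame-subtraction step can be sketched directly. A static scene is assumed here, which is exactly the assumption the high frame rate is meant to protect; the array sizes and intensity values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative frames: static ambient background plus an LED glint.
ambient = rng.integers(0, 100, size=(64, 64), dtype=np.uint16)
led_signal = np.zeros((64, 64), dtype=np.uint16)
led_signal[30:34, 30:34] = 500           # hypothetical corneal reflection

frame_on = ambient + led_signal          # frame with LED pulsed on
frame_off = ambient                      # next frame, LED off

# Subtracting consecutive frames cancels the static ambient component,
# leaving only the LED-illuminated reflection.
signal_only = frame_on.astype(np.int32) - frame_off.astype(np.int32)
```

    The cast to a signed type before subtracting matters in practice: with real (noisy) frames, unsigned subtraction would wrap around wherever the background frame happens to be brighter.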

  3. High-speed Particle Image Velocimetry Near Surfaces

    PubMed Central

    Lu, Louise; Sick, Volker

    2013-01-01

    Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included. PMID:23851899
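    The analysis algorithms mentioned above typically estimate displacement by cross-correlating interrogation windows from consecutive frames. A minimal FFT-based sketch, without the subpixel refinement a real PIV code would add (the synthetic "particle image" is just random texture):

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Integer-pixel displacement between two interrogation windows via
    FFT-based cross-correlation (no subpixel refinement)."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map circular FFT indices to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
frame1 = rng.random((32, 32))                          # synthetic particle image
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))   # 3 px down, 2 px left
shift = piv_displacement(frame1, frame2)               # (3, -2)
```

    A full PIV processor repeats this for a grid of interrogation windows and divides each displacement by the inter-frame time to obtain the velocity field.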

  4. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.

    1994-01-01

    The primary goal of this research is to develop a solid-state high-definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels/frame. This imager will offer an order-of-magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their subsequent integration. The camera chip can be divided into three distinct functions: (1) image capture via an array of avalanche photodiodes (APD's); (2) charge collection, storage, and overflow control via a charge transfer transistor device (CTD); and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front-end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently, work is progressing towards the optimization of each of these component devices. In addition to the development of each of the three distinct components, work towards their integration and manufacturability is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system-level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrain the design as well. The progress made during this period is described in detail.

  5. Inner hair cell stereocilia movements captured in-situ by a high-speed camera with subpixel image processing

    NASA Astrophysics Data System (ADS)

    Wang, Yanli; Puria, Sunil; Steele, Charles R.; Ricci, Anthony J.

    2018-05-01

    Mechanical stimulation of the stereocilia hair bundles of the inner and outer hair cells (IHCs and OHCs, respectively) drives IHC synaptic release and OHC electromotility. The modes of hair-bundle motion can have a dramatic influence on the electrophysiological responses of the hair cells. The in vivo modes of motion are, however, unknown for both IHC and OHC bundles. In this work, we are developing technology to investigate the in situ hair-bundle motion in excised mouse cochleae, for which the hair bundles of the OHCs are embedded in the tectorial membrane but those of the IHCs are not. Motion is generated by pushing onto the stapes at 1 kHz with a glass probe coupled to a piezo stack, and recorded using a high-speed camera at 10,000 frames per second. The motions of individual IHC stereocilia and the cell boundary are analyzed using 2D and 1D Gaussian fitting algorithms, respectively. Preliminary results show that the IHC bundle moves mainly in the radial direction and exhibits a small degree of splay, and that the stereocilia in the second row move less than those in the first row, even in the same focal plane.
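    The full 2D Gaussian fit used for the stereocilia is not reproduced here; the sketch below shows the simpler three-point log-parabola interpolation per axis, which is exact for an ideal noise-free Gaussian spot and illustrates how subpixel positions are recovered from pixelated high-speed frames:

```python
import numpy as np

def subpixel_peak(img):
    """Subpixel peak localization: fit a parabola to the log of the three
    samples around the maximum, independently per axis (exact for an
    ideal Gaussian spot)."""
    r, c = np.unravel_index(np.argmax(img), img.shape)

    def offset(m1, m0, p1):
        l_m1, l_0, l_p1 = np.log(m1), np.log(m0), np.log(p1)
        return 0.5 * (l_m1 - l_p1) / (l_m1 - 2.0 * l_0 + l_p1)

    return (r + offset(img[r - 1, c], img[r, c], img[r + 1, c]),
            c + offset(img[r, c - 1], img[r, c], img[r, c + 1]))

# Synthetic noise-free spot centred at (8.3, 11.7), sigma = 2 px.
y, x = np.mgrid[0:20, 0:24]
spot = np.exp(-((y - 8.3) ** 2 + (x - 11.7) ** 2) / (2.0 * 2.0 ** 2))
peak_y, peak_x = subpixel_peak(spot)
```

    With noise and an extended source, a least-squares 2D Gaussian fit (as in the paper) is more robust, but the principle of pushing localization well below one pixel is the same.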

  6. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming because the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as the basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing each new vector flow estimate using that from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex object motion observed by multiple sensors mounted to the same base, ranging from a high-speed visible camera to a coarser-resolution LWIR camera.
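    The temporal-upsampling step can be sketched as backward warping of the slow-rate frame along a scaled flow field. The constant synthetic flow and nearest-neighbour sampling below are simplifications; a real pipeline would use estimated dense flow and bilinear interpolation:

```python
import numpy as np

def warp_frame(frame, flow, alpha):
    """Backward-warp `frame` along a dense flow field scaled by
    `alpha` in [0, 1], using nearest-neighbour sampling."""
    h, w = frame.shape
    y, x = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(y - alpha * flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(x - alpha * flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

frame = np.zeros((16, 16))
frame[4:8, 4:8] = 1.0                    # object in the slow-rate frame
flow = np.zeros((16, 16, 2))
flow[..., 0] = 4.0                       # object moves 4 px right per slow frame

halfway = warp_frame(frame, flow, 0.5)   # synthetic mid-point frame
```

    Choosing alpha to match the fast sensor's timestamps yields slow-sensor frames interpolated onto the common time base.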

  7. The Interaction between Speed Camera Enforcement and Speed-Related Mass Media Publicity in Victoria, Australia

    PubMed Central

    Cameron, M. H.; Newstead, S. V.; Diamantopoulou, K.; Oxley, P.

    2003-01-01

    The objective was to measure the presence of any interaction between the effect of mobile covert speed camera enforcement and the effect of intensive mass media road safety publicity with speed-related themes. During 1999, the Victoria Police varied the levels of speed camera activity substantially in four Melbourne police districts according to a systematic plan. Camera hours were increased or reduced by 50% or 100% in respective districts for a month at a time, during months when speed-related publicity was present and during months when it was absent. Monthly frequencies of casualty crashes, and their severe injury outcome, in each district during 1996–2000 were analysed to test the effects of the enforcement, publicity and their interaction. Reductions in crash frequency were associated monotonically with increasing levels of speed camera ticketing, and there was a statistically significant 41% reduction in fatal crash outcome associated with very high camera activity. High publicity awareness was associated with 12% reduction in crash frequency. The interaction between the enforcement and publicity was not statistically significant. PMID:12941230

  8. Microenergetic Shock Initiation Studies on Deposited Films of Petn

    NASA Astrophysics Data System (ADS)

    Tappan, Alexander S.; Wixom, Ryan R.; Trott, Wayne M.; Long, Gregory T.; Knepper, Robert; Brundage, Aaron L.; Jones, David A.

    2009-12-01

    Films of the high explosive PETN (pentaerythritol tetranitrate) up to 500-μm thick have been deposited through physical vapor deposition, with the intent of creating well-defined samples for shock-initiation studies. PETN films were characterized with microscopy, x-ray diffraction, and focused ion beam nanotomography. These high-density films were subjected to strong shocks in both the out-of-plane and in-plane orientations. Initiation behavior was monitored with high-speed framing and streak camera photography. Direct initiation with a donor explosive (either RDX with binder, or CL-20 with binder) was possible in both orientations, but with the addition of a thin aluminum buffer plate (in-plane configuration only), initiation proved to be difficult. Initiation was possible with an explosively-driven 0.13-mm thick Kapton flyer and direct observation of initiation behavior was examined using streak camera photography at different flyer velocities. Models of this configuration were created using the shock physics code CTH.

  9. Characterization of calculus migration during Ho:YAG laser lithotripsy by high speed camera using suspended pendulum method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian James; Rajabhandharaks, Danop; Xuan, Jason Rongwei; Chia, Ray W. J.; Hasenberg, Tom

    2014-03-01

    Calculus migration is a common problem during ureteroscopic laser lithotripsy procedures to treat urolithiasis. Conventional experimental methods to characterize calculus migration utilize a hosting container (e.g., a "V" groove or a test tube). These methods, however, demonstrate large variation and poor detectability, possibly attributable to friction between the calculus and the container on which the calculus is situated. In this study, calculus migration was investigated using a pendulum model suspended under water to eliminate the aforementioned friction. A high-speed camera was used to study the movement of the calculus, covering the zeroth-order (displacement), first-order (speed), and second-order (acceleration) quantities. A commercial pulsed Ho:YAG laser at 2.1 μm, a 365-μm core fiber, and a calculus phantom (plaster of Paris, 10×10×10 mm cube) were utilized to mimic the laser lithotripsy procedure. The phantom was hung on a stainless steel bar and irradiated by the laser at 0.5, 1.0, and 1.5 J energy per pulse at 10 Hz for 1 second (i.e., 5, 10, and 15 W). Movement of the phantom was recorded by a high-speed camera with a frame rate of 10,000 FPS. Maximum displacement was 1.25 ± 0.10, 3.01 ± 0.52, and 4.37 ± 0.58 mm for 0.5, 1, and 1.5 J energy per pulse, respectively. Using the same laser power, the conventional method showed <0.5 mm total displacement. When the phantom size was reduced to 5×5×5 mm (1/8 the volume), the displacement was very inconsistent. The results suggested that using the pendulum model to eliminate friction improved the sensitivity and repeatability of the experiment. Detailed investigation of calculus movement and other causes of experimental variation will be conducted in a future study.
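    The zeroth-, first-, and second-order quantities mentioned above follow from the tracked positions by finite differences. A sketch with a synthetic constant-acceleration track (all values illustrative, not measurements from the study):

```python
import numpy as np

def kinematics(positions_mm, fps):
    """Displacement, velocity, and acceleration from tracked positions
    by finite differences."""
    p = np.asarray(positions_mm, dtype=np.float64)
    dt = 1.0 / fps
    displacement = p - p[0]                   # mm
    velocity = np.gradient(p, dt)             # mm/s
    acceleration = np.gradient(velocity, dt)  # mm/s^2
    return displacement, velocity, acceleration

# Synthetic track: constant 2000 mm/s^2 acceleration from rest, 10,000 FPS.
t = np.arange(100) / 10000.0
pos = 0.5 * 2000.0 * t ** 2
disp, vel, acc = kinematics(pos, 10000.0)
```

    At 10,000 FPS the differentiation noise is substantial in practice, so real tracks are usually smoothed before the second derivative is taken.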

  10. Robust human detection, tracking, and recognition in crowded urban areas

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, color features are obtained by taking differences of the R, G, and B channels and by converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) color- and intensity-feature-matched track candidate selection; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing the false tracking probability; and 4) forward position prediction based on previous moving speed and direction, allowing tracking to continue even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and to continue tracking the same person from the 2nd camera even though the person moved out of the Field of View (FOV) of the 1st camera with `Tracking Relay'. Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans.
Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state-of-the-art.

  11. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  12. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the distorted image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
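    The device's actual transform is not given in the abstract; as an idealized stand-in, the sketch below maps a desired viewing direction to fisheye pixel coordinates under the equidistant model r = f·θ, which is the kind of per-output-pixel mapping such a dewarping processor evaluates:

```python
import numpy as np

def fisheye_lookup(theta, phi, f_pix):
    """Map a viewing direction (polar angle `theta` from the optical axis,
    azimuth `phi`) to pixel offsets from the fisheye image centre, assuming
    an ideal equidistant fisheye projection r = f * theta."""
    r = f_pix * theta
    return r * np.cos(phi), r * np.sin(phi)

# A ray 30 degrees off-axis along the +x image direction, focal length 300 px
# (both values illustrative):
dx, dy = fisheye_lookup(np.deg2rad(30.0), 0.0, 300.0)
```

    Electronic pan/tilt amounts to recomputing (theta, phi) for every pixel of the desired perspective view and sampling the stored fisheye image at the returned coordinates.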

  13. Bio-inspired motion detection in an FPGA-based smart camera module.

    PubMed

    Köhler, T; Röchter, F; Lindemann, J P; Möller, R

    2009-03-01

    Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
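    The correlation-type EMD at the heart of such circuits is simple enough to sketch in a few lines. Below is a minimal numpy version of a Hassenstein-Reichardt-style detector pair, with a first-order low-pass as the delay filter; the function name and parameters are illustrative, not the FPGA implementation.

```python
import numpy as np

def emd_response(left, right, dt=1.0, tau=1.0):
    """Balanced correlation detector for one receptor pair: each
    input is delayed by a first-order low-pass filter and multiplied
    with the undelayed signal of its neighbour; the two mirror-
    symmetric half-detectors are subtracted, so motion from `left`
    to `right` gives a positive response."""
    alpha = dt / (tau + dt)          # low-pass smoothing coefficient
    lp_l = lp_r = 0.0
    out = []
    for l, r in zip(left, right):
        lp_l += alpha * (l - lp_l)   # delayed left receptor signal
        lp_r += alpha * (r - lp_r)   # delayed right receptor signal
        out.append(lp_l * r - lp_r * l)
    return np.array(out)
```

    Summing the output over time yields a signed motion estimate; an array of such detectors with different orientations and spacings gives the global flow fields mentioned above.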

  14. Apollo 12 photography 70 mm, 16 mm, and 35 mm frame index

    NASA Technical Reports Server (NTRS)

    1970-01-01

    For each 70-mm frame, the index presents information on: (1) the focal length of the camera, (2) the photo scale at the principal point of the frame, (3) the selenographic coordinates at the principal point of the frame, (4) the percentage of forward overlap of the frame, (5) the sun angle (medium, low, high), (6) the quality of the photography, (7) the approximate tilt (minimum and maximum) of the camera, and (8) the direction of tilt. A brief description of each frame is also included. The index to the 16-mm sequence photography includes information concerning the approximate surface coverage of the photographic sequence and a brief description of the principal features shown. A column of remarks is included to indicate: (1) if the sequence is plotted on the photographic index map and (2) the quality of the photography. The pictures taken using the lunar surface closeup stereoscopic camera (35 mm) are also described in this same index format.

  15. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
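    The core geometric step, estimating a 3D point from several camera rays, can be written as a small least-squares problem. The sketch below is a generic stand-in for the "optimal ray projection method" named above, whose exact formulation the abstract does not give.

```python
import numpy as np

def triangulate_rays(origins, dirs):
    """Least-squares 3D point closest to a bundle of camera rays:
    minimise the summed squared distance to lines through `origins`
    (camera centres) along `dirs` (view directions)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

    In a full pipeline this triangulation alternates with pose refinement, exactly as the abstract describes: fix poses, solve for landmarks, then fix landmarks and re-estimate poses.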

  16. Operation Greenhouse. Scientific Director's report of atomic weapon tests at Eniwetok, 1951. Annex 1. 6, blast measurements. Part 3. Pressure near ground level. Section 4. Blast asymmetry from aerial photographs. Section 5. Ball-crusher-gauge measurements of peak pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-04-01

    Aerial motion pictures from manned aircraft were taken of the Dog, Easy, and George Shots and from a drone aircraft on Dog Shot to determine whether asymmetries in the blast waves could be detected and measured. Only one film, that taken of Dog Shot from a drone, was considered good enough to warrant detailed analysis, but this failed to yield any positive information on asymmetries. The analysis showed that the failure to obtain good arrival-time data arose from a number of causes, but primarily from uncertainties in magnification and timing. Results could only be matched with reliable data from blast-velocity switches by use of large corrections. Asymmetries, if present, were judged to have been too small or to have occurred too early to be detected with the slow frame speed used. Recommendations for better results include locating the aircraft directly overhead at the time of burst and using a camera having greater frame speed and provided with timing marks.

  17. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed; details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- a photoflash bulb, firecrackers, a camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  18. Holocinematographic velocimeter for measuring time-dependent, three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Beeler, George B.; Weinstein, Leonard M.

    1987-01-01

    Two simultaneous, orthogonal-axis holographic movies are made of tracer particles in a low-speed water tunnel to determine the time-dependent, three-dimensional velocity field. This instrument is called a Holocinematographic Velocimeter (HCV). The holographic movies are reduced to the velocity field with an automatic data reduction system. This permits the reduction of large numbers of holograms (time steps) in a reasonable amount of time. The current version of the HCV, built for proof-of-concept tests, uses low-frame rate holographic cameras and a prototype of a new type of water tunnel. This water tunnel is a unique low-disturbance facility which has minimal wall effects on the flow. This paper presents the first flow field examined by the HCV, the two-dimensional von Karman vortex street downstream of an unswept circular cylinder. Key factors in the HCV are flow speed, spatial and temporal resolution required, measurement volume, film transport speed, and laser pulse length. The interactions between these factors are discussed.

  19. Fast optically sectioned fluorescence HiLo endomicroscopy.

    PubMed

    Ford, Tim N; Lim, Daryl; Mertz, Jerome

    2012-02-01

    We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.
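    The HiLo fusion itself reduces to a few array operations: take the high spatial frequencies from the uniform image, take the low frequencies from the uniform image weighted by the local contrast of the structured image (which is high only in focus), and sum the two bands. A minimal numpy sketch follows, with an FFT Gaussian standing in for the paper's filters; the cutoff `sigma`, the demodulation step, and the balance factor `eta` are illustrative assumptions.

```python
import numpy as np

def lowpass(img, sigma):
    """Gaussian low-pass via FFT (periodic boundaries)."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    h = np.exp(-2 * (np.pi * sigma) ** 2 * (kx ** 2 + ky ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """HiLo fusion (sketch): high frequencies come from the uniform
    image; low frequencies are taken from the uniform image weighted
    by the local contrast of the structured image, which is high only
    for in-focus structure. `eta` balances the two bands."""
    hi = uniform - lowpass(uniform, sigma)
    ratio = structured / (uniform + 1e-6)        # demodulate illumination
    contrast = np.abs(ratio - lowpass(ratio, sigma))
    lo = lowpass(contrast * uniform, sigma)
    return hi + eta * lo
```

    Because only two widefield exposures and a handful of filters are needed per sectioned frame, the processing maps naturally onto a GPU, which is how the 9.5 Hz net frame rate above is reached.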

  20. Fast optically sectioned fluorescence HiLo endomicroscopy

    NASA Astrophysics Data System (ADS)

    Ford, Tim N.; Lim, Daryl; Mertz, Jerome

    2012-02-01

    We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.

  1. A Pixel Correlation Technique for Smaller Telescopes to Measure Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2013-04-01

    Pixel correlation uses the same reduction techniques as speckle imaging but relies on autocorrelation among captured pixel hits rather than true speckles. A video camera operating at exposure times (8-66 milliseconds) similar to those used in lucky imaging captures 400-1,000 video frames. The AVI files are converted to bitmap images and analyzed, using all frames, with the interferometric algorithms in REDUC. This results in a series of correlograms from which theta and rho can be measured. Results using a 20 cm (8") Dall-Kirkham working at f/22.5 are presented for doubles with separations between 1" and 5.7" under average seeing conditions. I conclude that this form of visualizing and analyzing visual double stars is a viable alternative to lucky imaging that can be employed by telescopes that are too small in aperture to capture a sufficient number of speckles for true speckle interferometry.
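    The reduction this method relies on is classic speckle-style processing: average the per-frame power spectra over the whole stack and inverse-transform to obtain an autocorrelogram whose secondary peaks give rho and theta. A minimal numpy sketch of that step (not the REDUC implementation):

```python
import numpy as np

def mean_autocorrelogram(frames):
    """Average autocorrelation over all frames via the power
    spectrum (Wiener-Khinchin): secondary peaks appear at plus and
    minus the companion's offset from the primary."""
    acc = np.zeros(frames[0].shape)
    for f in frames:
        F = np.fft.fft2(f - f.mean())   # remove DC so peaks stand out
        acc += np.abs(F) ** 2           # power spectrum of this frame
    ac = np.real(np.fft.ifft2(acc))     # back-transform -> autocorrelation
    return np.fft.fftshift(ac / len(frames))
```

    The 180-degree ambiguity of the two symmetric companion peaks is inherent to autocorrelation and is usually resolved from prior knowledge of the pair.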

  2. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals via street cameras connected to the Internet, and maintaining immigration control. Many technical subjects are still under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e. matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we acquired low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved under various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal image sequence of moving images. Applied to a natural posture, this algorithm scored a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registration of babies at hospitals, or handling of very large image databases.

  3. High-speed digital imaging of cytosolic Ca2+ and contraction in single cardiomyocytes.

    PubMed

    O'Rourke, B; Reibel, D K; Thomas, A P

    1990-07-01

    A charge-coupled device (CCD) camera, with the capacity for simultaneous spatially resolved photon counting and rapid frame transfer, was utilized for high-speed digital image collection from an inverted epifluorescence microscope. The unique properties of the CCD detector were applied to an analysis of cell shortening and the Ca2+ transient from fluorescence images of fura-2-loaded [corrected] cardiomyocytes. On electrical stimulation of the cell, a series of sequential subimages was collected and used to create images of Ca2+ within the cell during contraction. The high photosensitivity of the camera, combined with a detector-based frame storage technique, permitted collection of fluorescence images 10 ms apart. This rate of image collection was sufficient to resolve the rapid events of contraction, e.g., the upstroke of the Ca2+ transient (less than 40 ms) and the time to peak shortening (less than 80 ms). The technique was used to examine the effects of beta-adrenoceptor activation, fura-2 load, and stimulus frequency on cytosolic Ca2+ transients and contractions of single cardiomyocytes. beta-Adrenoceptor stimulation resulted in pronounced increases in peak Ca2+, maximal rates of rise and decay of Ca2+, extent of shortening, and maximal velocities of shortening and relaxation. Raising the intracellular load of fura-2 had little effect on the rising phase of Ca2+ or the extent of shortening but extended the duration of the Ca2+ transient and contraction. In related experiments utilizing differential-interference contrast microscopy, the same technique was applied to visualize sarcomere dynamics in contracting cells. This newly developed technique is a versatile tool for analyzing the Ca2+ transient and mechanical events in studies of excitation-contraction coupling in cardiomyocytes.

  4. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out only the ROI that contains the cornea and pupil. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye.
The algorithm determines which horizontal image slices contain the pupil and cornea and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen (see figure). The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate.
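    The row-slice idea can be sketched directly: threshold each horizontal slice, keep the slices that contain dark pupil pixels, and combine the chord endpoints into a centroid. A hypothetical numpy illustration follows; the threshold and chord-length weighting are assumptions, not the system's actual algorithm.

```python
import numpy as np

def pupil_centroid(roi, thresh):
    """Estimate the pupil centre from horizontal row slices.  For
    each row containing pixels darker than `thresh`, record the
    left/right end coordinates of the dark chord; the centroid
    follows from the chord midpoints, weighted by chord length."""
    centres, weights, rows = [], [], []
    for y, row in enumerate(roi):
        xs = np.flatnonzero(row < thresh)        # dark pupil pixels in this slice
        if xs.size:                              # valid slice
            centres.append(0.5 * (xs[0] + xs[-1]))   # midpoint of the chord
            weights.append(xs.size)
            rows.append(y)
    if not centres:
        return None
    w = np.asarray(weights, dtype=float)
    return np.average(centres, weights=w), np.average(rows, weights=w)
```

    Because only the rows inside the ROI are touched, the cost scales with the subwindow size rather than the full frame, which is what makes the high-frame-rate readout above pay off.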

  5. Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker.

    PubMed

    Narita, Gaku; Watanabe, Yoshihiro; Ishikawa, Masatoshi

    2017-03-01

    Dynamic projection mapping for moving objects has attracted much attention in recent years. However, conventional approaches have faced some issues, such as the target objects being limited to rigid objects, and the limited moving speed of the targets. In this paper, we focus on dynamic projection mapping onto rapidly deforming non-rigid surfaces with a speed sufficiently high that a human does not perceive any misalignment between the target object and the projected images. In order to achieve such projection mapping, we need a high-speed technique for tracking non-rigid surfaces, which is still a challenging problem in the field of computer vision. We propose the Deformable Dot Cluster Marker (DDCM), a novel fiducial marker for high-speed tracking of non-rigid surfaces using a high-frame-rate camera. The DDCM has three performance advantages. First, it can be detected even when it is strongly deformed. Second, it realizes robust tracking even in the presence of external and self occlusions. Third, it allows millisecond-order computational speed. Using DDCM and a high-speed projector, we realized dynamic projection mapping onto a deformed sheet of paper and a T-shirt with a speed sufficiently high that the projected images appeared to be printed on the objects.

  6. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
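    In other words, each new frame is enciphered with the previously recorded frame as the key, and writing the ciphertext over that key frame deletes the key automatically. A toy numpy sketch of this data flow, with bytewise XOR standing in for the unspecified encryption circuit (XOR is not the patent's cipher, but it makes the key-overwrite property easy to see):

```python
import numpy as np

def capture_encrypted(sensor_frame, memory):
    """One capture cycle: fetch the previously recorded frame, use
    it as the key, encrypt the new frame, and overwrite the key
    frame in memory with the ciphertext."""
    key = memory.copy()                          # read circuit fetches old frame
    cipher = np.bitwise_xor(sensor_frame, key)   # stand-in encryption step
    memory[:] = cipher                           # write circuit overwrites the key
    return cipher, key
```

    Decryption requires the old frame; once it has been overwritten in camera memory, only a holder of that prior frame (for example, one recorded off-camera) can invert the operation.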

  7. Programmable Illumination and High-Speed, Multi-Wavelength, Confocal Microscopy Using a Digital Micromirror

    PubMed Central

    Martial, Franck P.; Hartell, Nicholas A.

    2012-01-01

    Confocal microscopy is routinely used for high-resolution fluorescence imaging of biological specimens. Most standard confocal systems scan a laser across a specimen and collect emitted light passing through a single pinhole to produce an optical section of the sample. Sequential scanning on a point-by-point basis limits the speed of image acquisition, and even the fastest commercial instruments struggle to resolve the temporal dynamics of rapid cellular events such as calcium signals. Various approaches have been introduced that increase the speed of confocal imaging. Nipkow disk microscopes, for example, use arrays of pinholes or slits on a spinning disk to achieve parallel scanning, which significantly increases the speed of acquisition. Here we report the development of a microscope module that utilises a digital micromirror device as a spatial light modulator to provide programmable confocal optical sectioning with a single camera, at high spatial and axial resolution, at speeds limited by the frame rate of the camera. The digital micromirror acts as a solid-state Nipkow disk, but with the added ability to change the pinhole size and separation and to control the light intensity on a mirror-by-mirror basis. The use of an arrangement of concave and convex mirrors in the emission pathway instead of lenses overcomes the astigmatism inherent in DMD devices, increases light collection efficiency and ensures image collection is achromatic, so that images are perfectly aligned at different wavelengths. Combined with non-laser light sources, this allows low-cost, high-speed, multi-wavelength image acquisition without the need for complex wavelength-dependent image alignment. The micromirror can also be used for programmable illumination, allowing spatially defined photoactivation of fluorescent proteins. 
We demonstrate the use of this system for high-speed calcium imaging using both a single wavelength calcium indicator and a genetically encoded, ratiometric, calcium sensor. PMID:22937130

  8. Programmable illumination and high-speed, multi-wavelength, confocal microscopy using a digital micromirror.

    PubMed

    Martial, Franck P; Hartell, Nicholas A

    2012-01-01

    Confocal microscopy is routinely used for high-resolution fluorescence imaging of biological specimens. Most standard confocal systems scan a laser across a specimen and collect emitted light passing through a single pinhole to produce an optical section of the sample. Sequential scanning on a point-by-point basis limits the speed of image acquisition, and even the fastest commercial instruments struggle to resolve the temporal dynamics of rapid cellular events such as calcium signals. Various approaches have been introduced that increase the speed of confocal imaging. Nipkow disk microscopes, for example, use arrays of pinholes or slits on a spinning disk to achieve parallel scanning, which significantly increases the speed of acquisition. Here we report the development of a microscope module that utilises a digital micromirror device as a spatial light modulator to provide programmable confocal optical sectioning with a single camera, at high spatial and axial resolution, at speeds limited by the frame rate of the camera. The digital micromirror acts as a solid-state Nipkow disk, but with the added ability to change the pinhole size and separation and to control the light intensity on a mirror-by-mirror basis. The use of an arrangement of concave and convex mirrors in the emission pathway instead of lenses overcomes the astigmatism inherent in DMD devices, increases light collection efficiency and ensures image collection is achromatic, so that images are perfectly aligned at different wavelengths. Combined with non-laser light sources, this allows low-cost, high-speed, multi-wavelength image acquisition without the need for complex wavelength-dependent image alignment. The micromirror can also be used for programmable illumination, allowing spatially defined photoactivation of fluorescent proteins. 
We demonstrate the use of this system for high-speed calcium imaging using both a single wavelength calcium indicator and a genetically encoded, ratiometric, calcium sensor.
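    The "solid-state Nipkow disk" behaviour, raster-scanning a grid of virtual pinholes with programmable size and spacing, can be sketched as mask generation. A hypothetical numpy version (mirror counts and parameter names are illustrative, not the authors' control software):

```python
import numpy as np

def pinhole_masks(shape, pitch, size):
    """Sequence of binary DMD patterns that raster a grid of
    'pinholes' across the frame, emulating a Nipkow disk.
    `pitch` is the pinhole spacing and `size` the pinhole side,
    both in mirrors.  Cycling through all offsets illuminates
    every mirror exactly once per confocal frame."""
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    masks = []
    for dy in range(0, pitch, size):
        for dx in range(0, pitch, size):
            masks.append(((ys - dy) % pitch < size) &
                         ((xs - dx) % pitch < size))
    return masks
```

    Summing the whole sequence gives uniform coverage, which is why a complete cycle of patterns yields one optically sectioned frame with no stripe artifacts.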

  9. Ultrasound investigation of fetal human upper respiratory anatomy.

    PubMed

    Wolfson, V P; Laitman, J T

    1990-07-01

    Although the human upper respiratory-upper digestive tract is an area of vital importance, relatively little is known about either the structural or functional changes that occur in the region during the fetal period. While investigations in our laboratory have begun to chart these changes through the use of postmortem materials, in vivo studies have rarely been attempted. This study combines ultrasonography with new applications of video editing to examine aspects of prenatal upper respiratory development. Structures of the fetal upper respiratory-digestive tract and their movements were studied through the use of ultrasonography and detailed frame-by-frame analysis. Twenty-five living fetuses, aged 18-36 weeks gestation, were studied in utero during routine diagnostic ultrasound examination. These real-time linear array sonograms were videotaped during each study. Videotapes were next analyzed for anatomical structures and movement patterns, played back through the ultrasound machine at normal speed, and then examined with a frame-by-frame video editor (FFVE) to identify structures and movements. Still images were photographed directly from the video monitor using a 35 mm camera. Results show that upper respiratory and digestive structures, as well as their movements, could be seen clearly during normal-speed and repeat frame-by-frame analysis. Major structures that could be identified in the majority of subjects included the trachea in 20 of 25 fetuses (80%); larynx, 76%; pharynx, 76%. Smaller structures were more variable, but were nevertheless observed on both sagittal and coronal section: piriform sinuses, 76%; thyroid cartilage, 36%; cricoid cartilage, 32%; and epiglottis, 16%. Movements of structures could also be seen and were those typically observed in connection with swallowing: fluttering tongue movements, changes in pharyngeal shape, and passage of a bolus via the piriform sinuses to the esophagus. Fetal swallows had minimal laryngeal motion. 
This study represents the first time that the appearance of upper airway and digestive tract structures have been quantified in conjunction with their movements in the living fetus.

  10. High-Speed Schlieren Movies of Decelerators at Supersonic Speeds

    NASA Technical Reports Server (NTRS)

    1960-01-01

    Tests were conducted on several types of porous parachutes, a paraglider, and a simulated retrorocket. In trials with porous parachutes, Mach numbers ranged from 1.8-3.0, porosity from 20-80 percent, and camera speeds from 1680-3000 frames per second (fps). Trials of reefed parachutes were conducted at Mach number 2.0 and reefing of 12-33 percent at camera speeds of 600 fps. A flexible parachute with an inflatable ring in the periphery of the canopy was tested at Reynolds number 750,000 per foot, Mach number 2.85, porosity of 28 percent, and camera speed of 3600 fps. A vortex-ring parachute was tested at Mach number 2.2 and camera speed of 3000 fps. The paraglider, with a sweepback of 45 degrees at an angle of attack of 45 degrees, was tested at Mach number 2.65, drag coefficient of 0.200, and lift coefficient of 0.278 at a camera speed of 600 fps. A cold air jet exhausting upstream from the center of a bluff body was used to simulate a retrorocket. The free-stream Mach number was 2.0, free-stream dynamic pressure was 620 lb/sq ft, jet-exit static pressure ratio was 10.9, and camera speed was 600 fps.

  11. Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD

    NASA Astrophysics Data System (ADS)

    Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

    2006-02-01

    We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera [1] using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs [2]. This camera had about ten times the sensitivity of standard high-speed cameras, and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper provides a summary of our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

  12. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and is needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large-frame, high-speed acquisition. © 2015 Wiley Periodicals, Inc.
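    The phasor analysis applied to such a stack reduces each pixel to two numbers, the cosine and sine projections of its modulation, from which phase and modulation lifetimes follow. A minimal numpy sketch of the per-pixel computation (a generic phasor calculation, not the SimFCS code itself):

```python
import numpy as np

def phasor(stack):
    """Per-pixel phasor coordinates from a stack of N images taken
    at equally spaced modulation phases.  g + i*s is the first
    Fourier coefficient normalised by the DC term; the phase
    lifetime then follows from tau = tan(arctan2(s, g)) / omega."""
    n = stack.shape[0]
    phases = 2 * np.pi * np.arange(n) / n
    dc = stack.sum(axis=0)
    g = (stack * np.cos(phases)[:, None, None]).sum(axis=0) / dc
    s = (stack * np.sin(phases)[:, None, None]).sum(axis=0) / dc
    return g, s
```

    The per-pixel calibrations described above would be applied to `g` and `s` (phase offset and modulation correction) before lifetimes are read off the phasor plot.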

  13. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    PubMed

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
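    The zero-filling step is the subtle part of the chain: upsampling the spectrum by Fourier-domain zero padding makes the subsequent linear wavelength-to-wavenumber interpolation nearly exact. A single-A-line numpy sketch of just that step follows; the array sizes mirror the paper, but the interpolation details are an assumption, not the GPU kernels.

```python
import numpy as np

def oct_zero_fill_resample(spectrum, lam, pad=8192):
    """Forward FFT, zero padding to `pad` samples, inverse FFT,
    then linear interpolation from evenly spaced wavelength to
    evenly spaced wavenumber (k = 2*pi/lambda) for one A-line."""
    n = len(spectrum)
    f = np.fft.fft(spectrum)
    padded = np.zeros(pad, dtype=complex)      # zero-pad in the Fourier domain
    padded[:n // 2] = f[:n // 2]
    padded[-(n - n // 2):] = f[n // 2:]
    upsampled = np.real(np.fft.ifft(padded)) * (pad / n)
    lam_fine = np.linspace(lam[0], lam[-1], pad)
    k = 2 * np.pi / lam                        # wavenumber, decreasing in lambda
    k_uniform = np.linspace(k.min(), k.max(), n)
    # np.interp needs increasing sample points; k decreases with lambda, so flip.
    return np.interp(k_uniform, 2 * np.pi / lam_fine[::-1], upsampled[::-1])
```

    A final FFT of the resampled spectrum (after the lateral Hilbert transform described above) then yields the axial profile for display.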

  14. Reliability of a Qualitative Video Analysis for Running.

    PubMed

    Pipkin, Andrew; Kotecki, Kristy; Hetzel, Scott; Heiderscheit, Bryan

    2016-07-01

    Study Design Reliability study. Background Video analysis of running gait is frequently performed in orthopaedic and sports medicine practices to assess biomechanical factors that may contribute to injury. However, the reliability of a whole-body assessment has not been determined. Objective To determine the intrarater and interrater reliability of the qualitative assessment of specific running kinematics from a 2-dimensional video. Methods Running-gait analysis was performed on videos recorded from 15 individuals (8 male, 7 female) running at a self-selected pace (3.17 ± 0.40 m/s, 8:28 ± 1:04 min/mi) using a high-speed camera (120 frames per second). These videos were independently rated on 2 occasions by 3 experienced physical therapists using a standardized qualitative assessment. Fifteen sagittal and frontal plane kinematic variables were rated on a 3- or 5-point categorical scale at specific events of the gait cycle, including initial contact (n = 3) and midstance (n = 9), or across the full gait cycle (n = 3). The video frame number corresponding to each gait event was also recorded. Intrarater and interrater reliability values were calculated for gait-event detection (intraclass correlation coefficient [ICC] and standard error of measurement [SEM]) and the individual kinematic variables (weighted kappa [κw]). Results Gait-event detection was highly reproducible within raters (ICC = 0.94-1.00; SEM, 0.3-1.0 frames) and between raters (ICC = 0.77-1.00; SEM, 0.4-1.9 frames). Eleven of the 15 kinematic variables demonstrated substantial (κw = 0.60-0.799) or excellent (κw>0.80) intrarater agreement, with the exception of foot-to-center-of-mass position (κw = 0.59), forefoot position (κw = 0.58), ankle dorsiflexion at midstance (κw = 0.49), and center-of-mass vertical excursion (κw = 0.36). Interrater agreement for the kinematic measures varied more widely (κw = 0.00-0.85), with 5 variables showing substantial or excellent reliability. 
Conclusion The qualitative assessment of specific kinematic measures during running can be reliably performed with the use of a high-speed video camera. Detection of specific gait events was highly reproducible, as were common kinematic variables such as rearfoot position, foot-strike pattern, tibial inclination angle, knee flexion angle, and forward trunk lean. Other variables should be used with caution. J Orthop Sports Phys Ther 2016;46(7):556-561. Epub 6 Jun 2016. doi:10.2519/jospt.2016.6280.
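    The weighted kappa reported above can be computed as in this sketch of a linearly weighted Cohen's kappa; the paper does not specify its exact weighting scheme, so the weights here are illustrative:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Linearly weighted Cohen's kappa for two raters' ordinal scores,
    coded 0..n_cat-1: kappa = 1 - (observed weighted disagreement) /
    (chance-expected weighted disagreement)."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1                      # joint rating counts
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance agreement
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1)         # linear disagreement weights
    return 1 - (w * obs).sum() / (w * exp).sum()
```

    Perfect agreement gives κw = 1 and statistically independent ratings give κw ≈ 0; a quadratic scheme simply squares w.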

  15. Flexible nuclear medicine camera and method of using

    DOEpatents

    Dilmanian, F.A.; Packer, S.; Slatkin, D.N.

    1996-12-10

    A nuclear medicine camera and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera includes a flexible frame containing a window, a photographic film, and a scintillation screen, with or without a gamma-ray collimator. The frame flexes for following the contour of the examination site on the patient, with the window being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film and the radiation source inside the patient. The frame is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms. 11 figs.

  16. Computational Studies of X-ray Framing Cameras for the National Ignition Facility

    DTIC Science & Technology

    2013-06-01

    Livermore National Laboratory, Livermore, CA, USA. The NIF is the world’s most powerful laser facility and is ... a phosphor screen where the output is recorded. The x-ray framing cameras have provided excellent information. As the yields at NIF have increased ... experiments on the NIF. The basic operation of these cameras is shown in Fig. 1. Incident photons generate photoelectrons both in the pores of the MCP and ...

  17. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  18. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  19. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  20. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.

  1. SHOK—The First Russian Wide-Field Optical Camera in Space

    NASA Astrophysics Data System (ADS)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

    Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each of the cameras is placed in the gamma-ray burst detection area of the other devices located onboard the Lomonosov spacecraft. SHOK provides measurements of optical emission with a magnitude limit of ˜9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths in a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from the gamma-ray burst error boxes detected by the BDRG device and implemented by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft has two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  2. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. 
These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Note: An improved 3D imaging system for electron-electron coincidence measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  4. Note: An improved 3D imaging system for electron-electron coincidence measurements

    NASA Astrophysics Data System (ADS)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-01

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  5. Oscillation effects upon film boiling from a sphere.

    NASA Technical Reports Server (NTRS)

    Schmidt, W. E.; Witte, L. C.

    1972-01-01

    Heat transfer rates from a silver-plated copper sphere, 0.75 in. in diameter, were studied by high speed photography during oscillations of the sphere in saturated liquid nitrogen and Freon-11. The oscillation frequencies ranged from zero to 13 Hz, and the amplitude-to-diameter ratio varied from zero to 2.67. The sphere was supported by a thin-walled stainless steel tube and carried a thermocouple attached near the lower stagnation point. A Fastax WF-3 16mm movie camera was used at about 2000 frames/sec. The differences in the vapor removal process at lower and higher oscillation frequencies are discussed.

  6. High-Speed Imaging of the First Kink Mode Instability in a Magnetoplasmadynamic Thruster

    NASA Technical Reports Server (NTRS)

    Walker, Jonathan A.; Langendof, Samuel; Walker, Mitchell L. R.; Polzin, Kurt; Kimberlin, Adam

    2013-01-01

    One of the biggest challenges to efficient magnetoplasmadynamic thruster (MPDT) operation is the onset of high-frequency voltage oscillations as the discharge current is increased above a threshold value. The onset regime is closely related to magnetohydrodynamic instabilities known as kink modes. This work documents direct observation of the formation and quasi-steady state behavior of an argon discharge plasma in a MPDT operating at discharge currents of 8 to 10 kA for a pulse length of approximately 4 ms. A high-speed camera images the quasi-steady-state operation of the thruster at 26,143 fps with a frame exposure time of 10 µs. A 0.9 neutral density filter and 488-nm argon line filter with a 10-nm bandwidth are used on separate trials to capture the time evolution of the discharge plasma. Frame-by-frame analysis of the power flux incident on the CCD sensor shows both the initial discharge plasma formation process and the steady-state behavior of the discharge plasma. Light intensity levels on the order of 4-6 W/m² indicate radial and azimuthal asymmetries in the concentration of argon plasma in the discharge channel. The plasma concentration exhibits characteristics that suggest the presence of a helical plasma column. This helical behavior has been observed in previous experiments that characterize plasma kink mode instabilities indirectly. Therefore, the direct imaging of these plasma kink modes further supports the link between MPDT onset behavior and the excitation of the magnetohydrodynamic instabilities.

  7. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    DTIC Science & Technology

    2017-10-01

    ARL-TR-8185 ● OCT 2017. US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  8. Method and means for generating a synchronizing pulse from a repetitive wave of varying frequency

    DOEpatents

    DeVolpi, Alexander; Pecina, Ronald J.; Travis, Dale J.

    1976-01-01

    An event that occurs repetitively at continuously changing frequencies can be used to generate a triggering pulse which is used to synchronize or control. The triggering pulse is generated at a predetermined percentage of the period of the repetitive waveform without regard to frequency. Counts are accumulated in two counters, the first counting during the "on" fraction of the period, and the second counting during the "off" fraction. The counts accumulated during each cycle are compared. On equality the trigger pulse is generated. Count input rates to each counter are determined by the ratio of the on-off fractions of the event waveform and the desired phase relationship. This invention is of particular utility in providing a trigger or synchronizing pulse during the open period of the shutter of a high-speed framing camera during its acceleration as well as its period of substantially constant speed.
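    The two-counter scheme can be modeled numerically. In this simplified sketch (rates and duty cycle are hypothetical), counter A accumulates during the "on" fraction and counter B during the "off" fraction; the trigger fires when the counts match, at a fixed fraction of the period regardless of frequency:

```python
def trigger_time(period, duty, rate_on, rate_off):
    """One cycle of the dual-counter scheme: counter A accumulates at
    rate_on while the waveform is 'on' (the first duty*period seconds),
    then counter B accumulates at rate_off during the 'off' fraction.
    The trigger fires when B's count equals A's."""
    t_on = duty * period
    count_a = rate_on * t_on            # counts gathered while 'on'
    t_catch = count_a / rate_off        # time into 'off' until B == A
    return t_on + t_catch               # trigger time from cycle start
```

    The trigger phase trigger_time(p, d, ra, rb) / p equals d·(1 + ra/rb) for any period p, which is the frequency independence the patent claims; with equal rates and a 30% duty cycle the pulse always lands at 60% of the period.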

  9. Dynamic strain distribution of FRP plate under blast loading

    NASA Astrophysics Data System (ADS)

    Saburi, T.; Yoshida, M.; Kubota, S.

    2017-02-01

    The dynamic strain distribution of a fiber-reinforced plastic (FRP) plate under blast loading was investigated using a digital image correlation (DIC) image analysis method. The test FRP plates were mounted parallel to each other on a steel frame. 50 g of Composition C4 explosive was used as the blast loading source and set at the center of the FRP plates. The dynamic behavior of the FRP plate under blast loading was observed by two high-speed video cameras. The set of two high-speed video image sequences was used to analyze the three-dimensional strain distribution of the FRP by means of the DIC method. A point strain profile extracted from the analyzed strain distribution data was compared with a strain profile observed directly using a strain gauge, and it was shown that the strain profile obtained under blast loading by the DIC method is quantitatively accurate.

  10. fastSIM: a practical implementation of fast structured illumination microscopy.

    PubMed

    Lu-Walther, Hui-Wen; Kielhorn, Martin; Förster, Ronny; Jost, Aurélie; Wicker, Kai; Heintzmann, Rainer

    2015-01-16

    A significant improvement in acquisition speed of structured illumination microscopy (SIM) opens a new field of applications to this already well-established super-resolution method towards 3D scanning real-time imaging of living cells. We demonstrate a method of increased acquisition speed on a two-beam SIM fluorescence microscope with a lateral resolution of ~100 nm at a maximum raw data acquisition rate of 162 frames per second (fps) with a region of interest of 16.5 × 16.5 µm², free of mechanically moving components. We use a programmable spatial light modulator (ferroelectric LCOS) which promises precise and rapid control of the excitation pattern in the sample plane. A passive Fourier filter and a segmented azimuthally patterned polarizer are used to perform structured illumination with maximum contrast. Furthermore, the free running mode in a modern sCMOS camera helps to achieve faster data acquisition.

  11. A midsummer-night's shock wave

    NASA Astrophysics Data System (ADS)

    Hargather, Michael; Liebner, Thomas; Settles, Gary

    2007-11-01

    The aerial pyrotechnic shells used in professional display fireworks explode a bursting charge at altitude in order to disperse the ``stars'' of the display. The shock wave from the bursting charge is heard on the ground as a loud report, though it has by then typically decayed to a mere sound wave. However, viewers seated near the standard safety borders can still be subjected to weak shock waves. These have been visualized using a large, portable, retro-reflective ``Edgerton'' shadowgraph technique and a high-speed digital video camera. Images recorded at 10,000 frames per second show essentially-planar shock waves from 10- and 15-cm firework shells impinging on viewers during the 2007 Central Pennsylvania July 4th Festival. The shock speed is not measurably above Mach 1, but we nonetheless conclude that, if one can sense a shock-like overpressure, then the wave motion is strong enough to be observed by density-sensitive optics.

  12. Two-dimensional time-resolved ultra-high speed imaging of K-alpha emission from short-pulse-laser interactions to observe electron recirculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagel, S. R.; Chen, H.; Park, J.

    Time resolved x-ray images with 7 ps resolution are recorded on relativistic short-pulse laser-plasma experiments using the dilation x-ray imager, a high-speed x-ray framing camera sensitive to x-rays in the range of ≈1-17 keV. This capability enables a series of 2D x-ray images to be recorded at picosecond scales, which allows the investigation of fast electron transport within the target with unprecedented temporal resolution. We found that the Kα-emission spot size increases over time for targets thinner than the recirculation limit, and that this increase is absent for thicker targets. Together with the observed polarization dependence of the spot-size increase, this indicates that electron recirculation is relevant for the x-ray production in thin targets.

  13. In-flight measurements of aircraft propeller deformation by means of an autarkic fast rotating imaging system

    NASA Astrophysics Data System (ADS)

    Stasicki, Boleslaw; Boden, Fritz

    2015-03-01

    The non-intrusive in-flight measurement of the deformation and pitch of an aircraft propeller is a demanding task. The idea of an imaging system integrated with and rotating with the aircraft propeller was presented at the 30th International Congress on High-Speed Imaging and Photonics (ICHSIP30) in 2012. Since then, this system has been constructed and tested in the laboratory as well as on a real aircraft. In this paper we outline the principle of the Image Pattern Correlation Technique (IPCT), based on Digital Image Correlation (DIC), and describe the construction of a dedicated autarkic 3D camera system placed on the investigated propeller and rotating at its full speed. Furthermore, the results of the first ground and in-flight tests are shown and discussed. This development has been funded by the European Commission within the 7th Framework Programme project AIM2 (contract no. 266107).

  14. Structure of propagating arc in a magneto-hydrodynamic rail plasma actuator

    NASA Astrophysics Data System (ADS)

    Gray, Miles D.; Choi, Young-Joon; Sirohi, Jayant; Raja, Laxminarayan L.

    2016-01-01

    The spatio-temporal evolution of a magnetically driven arc in a rail plasma flow actuator has been characterized with high-speed imaging, electrical measurements, and spectroscopy. The arc draws a peak current of ~1 kA. High-speed framing cameras were used to observe the complex arc propagation phenomenon. In particular, the anode and cathode roots were observed to have different modes of transit, which resulted in distinct types of electrode degradation on the anode and cathode surfaces. Observations of the arc electrical properties and induced magnetic fields are used to explain the transit mechanism of the arc. Emission spectroscopy revealed the arc temperature and species composition as a function of transit distance of the arc. The results obtained offer significant insights into the electromagnetic properties of the arc-rail system as well as arc-surface interaction phenomena in a propagating arc.

  15. fastSIM: a practical implementation of fast structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Lu-Walther, Hui-Wen; Kielhorn, Martin; Förster, Ronny; Jost, Aurélie; Wicker, Kai; Heintzmann, Rainer

    2015-03-01

    A significant improvement in acquisition speed of structured illumination microscopy (SIM) opens a new field of applications to this already well-established super-resolution method towards 3D scanning real-time imaging of living cells. We demonstrate a method of increased acquisition speed on a two-beam SIM fluorescence microscope with a lateral resolution of ~100 nm at a maximum raw data acquisition rate of 162 frames per second (fps) with a region of interest of 16.5 × 16.5 µm², free of mechanically moving components. We use a programmable spatial light modulator (ferroelectric LCOS) which promises precise and rapid control of the excitation pattern in the sample plane. A passive Fourier filter and a segmented azimuthally patterned polarizer are used to perform structured illumination with maximum contrast. Furthermore, the free running mode in a modern sCMOS camera helps to achieve faster data acquisition.

  16. Two-dimensional time-resolved ultra-high speed imaging of K-alpha emission from short-pulse-laser interactions to observe electron recirculation

    DOE PAGES

    Nagel, S. R.; Chen, H.; Park, J.; ...

    2017-04-04

    Time resolved x-ray images with 7 ps resolution are recorded on relativistic short-pulse laser-plasma experiments using the dilation x-ray imager, a high-speed x-ray framing camera sensitive to x-rays in the range of ≈1-17 keV. This capability enables a series of 2D x-ray images to be recorded at picosecond scales, which allows the investigation of fast electron transport within the target with unprecedented temporal resolution. We found that the Kα-emission spot size increases over time for targets thinner than the recirculation limit, and that this increase is absent for thicker targets. Together with the observed polarization dependence of the spot-size increase, this indicates that electron recirculation is relevant for the x-ray production in thin targets.

  17. Sequential detection of web defects

    DOEpatents

    Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.

    2001-01-01

    A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements is determined to be statistically identical, and a defective indication if the pair of elements is determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of the exemplar frame with corresponding elements of other frames on the web until one of the acceptable or defective indications occurs.

  18. Students' framing of laboratory exercises using infrared cameras

    NASA Astrophysics Data System (ADS)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-12-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the predict-observe-explain approach of White and Gunstone. The activities involved central thermal concepts that focused on heat conduction and dissipative processes such as friction and collisions. Students' interactions within each activity were videotaped and the analysis focuses on how a purposefully selected group of three students engaged with the exercises. As the basis for an interpretative study, a "thick" narrative description of the students' epistemological and conceptual framing of the exercises and how they took advantage of the disciplinary affordance of IR cameras in the thermal domain is provided. Findings include that the students largely shared their conceptual framing of the four activities, but differed among themselves in their epistemological framing, for instance, in how far they found it relevant to digress from the laboratory instructions when inquiring into thermal phenomena. In conclusion, the study unveils the disciplinary affordances of infrared cameras, in the sense of their use in providing access to knowledge about macroscopic thermal science.

  19. High-speed and ultrahigh-speed cinematographic recording techniques

    NASA Astrophysics Data System (ADS)

    Miquel, J. C.

    1980-12-01

    A survey is presented of various high-speed and ultrahigh-speed cinematographic recording systems, covering a range of speeds from 100 to 14 million pictures per second. Attention is given to the functional and operational characteristics of cameras and to details of high-speed cinematography techniques, including image processing and illumination. A list of cameras (many of them French) available in 1980 is presented.

  20. Measurement of inkjet first-drop behavior using a high-speed camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Kye-Si, E-mail: kskwon@sch.ac.kr; Kim, Hyung-Seok; Choi, Moohyun

    2016-03-15

    Drop-on-demand inkjet printing has been used as a manufacturing tool for printed electronics, and it has several advantages since a droplet of an exact amount can be deposited on an exact location. Such technology requires positioning the inkjet head on the printing location without jetting, so a jetting pause (non-jetting) idle time is required. Nevertheless, the behavior of the first few drops after the non-jetting pause time is well known to possibly differ from that which occurs in the steady state. The abnormal behavior of the first few drops may result in serious problems regarding printing quality. Therefore, a proper evaluation of first-droplet failure has become important for the inkjet industry. To this end, in this study, we propose the use of a high-speed camera to evaluate first-drop dissimilarity. For this purpose, the image acquisition frame rate was set to an integer multiple of the jetting frequency; in this manner, we can directly compare the droplet locations of each drop in order to characterize the first-drop behavior. Finally, we evaluate the effect of a sub-driving voltage during the non-jetting pause time to effectively suppress the first-drop dissimilarity.
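    The frame-rate constraint described above (an integer multiple of the jetting frequency) can be expressed as a one-liner; the numbers in the comment and test are illustrative, not the paper's settings:

```python
def camera_frame_rate(jet_freq_hz, max_fps):
    """Largest frame rate not exceeding the camera's limit that is an exact
    integer multiple of the jetting frequency, so every drop is imaged at
    the same set of phases and drops can be compared frame by frame."""
    return (max_fps // jet_freq_hz) * jet_freq_hz

# e.g. a 1 kHz jetting frequency and a 25,500 fps camera limit give 25,000 fps
```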

  1. Flexible nuclear medicine camera and method of using

    DOEpatents

    Dilmanian, F. Avraham; Packer, Samuel; Slatkin, Daniel N.

    1996-12-10

    A nuclear medicine camera 10 and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera 10 includes a flexible frame 20 containing a window 22, a photographic film 24, and a scintillation screen 26, with or without a gamma-ray collimator 34. The frame 20 flexes for following the contour of the examination site on the patient, with the window 22 being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film 24 and the radiation source inside the patient. The frame 20 is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame 20 for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms.

  2. The use of uncalibrated roadside CCTV cameras to estimate mean traffic speed

    DOT National Transportation Integrated Search

    2001-12-01

    In this report, we present a novel approach for estimating traffic speed using a sequence of images from an un-calibrated camera. We assert that exact calibration is not necessary to estimate speed. Instead, to estimate speed, we use: (1) geometric r...

  3. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (Exposure Value), WB (White Balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne, collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO.
The single most important camera capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
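
    In its simplest form, the apparent image motion (AIM) estimate mentioned above is the ground distance covered during the exposure divided by the ground sample distance. A sketch with assumed platform values (not figures from the study):

```python
# Simplest form of apparent image motion (AIM) blur: ground distance
# travelled during the exposure, expressed in pixels via the ground
# sample distance (GSD). All numbers are assumed for illustration.

def aim_blur_pixels(ground_speed_ms, exposure_s, gsd_m):
    """Motion blur in pixels accumulated over one exposure."""
    return ground_speed_ms * exposure_s / gsd_m

blur = aim_blur_pixels(30.0, 1 / 1000, 0.02)   # 30 m/s, 1/1000 s, 2 cm GSD
# a faster shutter or a coarser GSD both reduce the blur
```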

  4. 640x480 PtSi Stirling-cooled camera system

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.

    1992-09-01

    A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 x 480 PtSi-MOS focal plane array. The camera system achieves an NEDT of 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad, the vertical and horizontal Minimum Resolvable Temperatures are in the range of 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height independent of background level and position within the frame.

  5. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…
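
    The calculation such an experiment implies can be sketched as follows: count the frames between a visible sound-producing event and its registered arrival at a known distance, then divide distance by the elapsed time. The distance, frame count, and frame rate below are illustrative, not values from the article:

```python
# Hypothetical speed-of-sound measurement from high-speed video: the
# elapsed time is the frame count divided by the frame rate.

def speed_of_sound(distance_m, frame_count, fps):
    """Distance divided by the elapsed time implied by the frame count."""
    return distance_m / (frame_count / fps)

v = speed_of_sound(3.43, 10, 1000)   # 10 frames at 1000 fps over 3.43 m
```

The one-frame timing uncertainty sets the precision: at 1000 fps each frame contributes 1 ms, so longer distances (more frames) give a better estimate.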

  6. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
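
    A minimal sketch of one step of the regulation loop described above: the supply voltage of a slave camera is nudged in proportion to the error between its measured line period and the master's. The gain, sign convention, and voltage limits are assumptions for illustration, not values from the paper.

```python
# One proportional control step of the (assumed) voltage regulation loop.
# Higher supply voltage -> faster sensor clock -> shorter line period is
# the sign convention assumed here.

def adjust_voltage(v_now, line_period_meas, line_period_target,
                   gain=0.001, v_min=1.6, v_max=2.0):
    """Nudge the supply voltage toward the target line period, clamped."""
    error = line_period_meas - line_period_target   # positive if camera is slow
    v_new = v_now + gain * error                    # raise voltage to speed up
    return min(max(v_new, v_min), v_max)            # clamp to safe supply range

v = adjust_voltage(1.8, line_period_meas=105.0, line_period_target=100.0)
```

In the described system this step would run continuously per frame, with the master's line period as the target for every slave.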

  7. Studies on the formation, temporal evolution and forensic applications of camera "fingerprints".

    PubMed

    Kuppuswamy, R

    2006-06-02

    A series of experiments was conducted by exposing negative film in brand new cameras of different makes and models. The exposures were repeated at regular time intervals spread over a period of 2 years. The processed film negatives were studied under a stereomicroscope (10-40x) in transmitted illumination for the presence of characterizing features on their four frame edges. These features were then related to those present on the masking frame of the cameras by examining the latter under reflected-light stereomicroscopy (10-40x). The purpose of the study was to determine the origin and permanence of the frame-edge marks, and also the processes by which the marks may alter with time. The investigations arrived at the following conclusions: (i) the edge marks originate principally from imperfections imparted to the film mask during manufacturing, and occasionally from dirt, dust, and fiber accumulated on the film mask over an extended time period. (ii) The edge profiles of the cameras remained fixed over a considerable period of time, making them a valuable identification medium. (iii) The marks are found to vary in nature even between cameras manufactured at a similar time. (iv) The f/number and object distance strongly influence the recording of the frame-edge marks during exposure of the film. The above findings serve as a useful addition to the technique of camera edge-mark comparisons.

  8. Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri

    2002-01-01

    The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside of a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.

  9. Vision Based SLAM in Dynamic Scenes

    DTIC Science & Technology

    2012-12-20

    the correct relative poses between cameras at frame F. For this purpose, we detect and match SURF features between cameras in different groups, and...all cameras in such a challenging case. For a comparison, we disabled the 'inter-camera pose estimation' and applied the 'intra-camera pose estimation'...

  10. OOM - OBJECT ORIENTATION MANIPULATOR, VERSION 6.1

    NASA Technical Reports Server (NTRS)

    Goza, S. P.

    1994-01-01

    The Object Orientation Manipulator (OOM) is an application program for creating, rendering, and recording three-dimensional computer-generated still and animated images. This is done using geometrically defined 3D models, cameras, and light sources, referred to collectively as animation elements. OOM does not provide the tools necessary to construct 3D models; instead, it imports binary format model files generated by the Solid Surface Modeler (SSM). Model files stored in other formats must be converted to the SSM binary format before they can be used in OOM. SSM is available as MSC-21914 or as part of the SSM/OOM bundle, COS-10047. Among OOM's features are collision detection (with visual and audio feedback), the capability to define and manipulate hierarchical relationships between animation elements, stereographic display, and ray-traced rendering. OOM uses Euler angle transformations for calculating the results of translation and rotation operations. OOM provides an interactive environment for the manipulation and animation of models, cameras, and light sources. Models are the basic entity upon which OOM operates and are therefore considered the primary animation elements. Cameras and light sources are considered secondary animation elements. A camera, in OOM, is simply a location within the three-space environment from which the contents of the environment are observed. OOM supports the creation and full animation of cameras. Light sources can be defined, positioned and linked to models, but they cannot be animated independently. OOM can simultaneously accommodate as many animation elements as the host computer's memory permits. Once the required animation elements are present, the user may position them, orient them, and define any initial relationships between them. 
Once the initial relationships are defined, the user can display individual still views for rendering and output, or define motion for the animation elements by using the Interp Animation Editor. The program provides the capability to save still images, animated sequences of frames, and the information that describes the initialization process for an OOM session. OOM provides the same rendering and output options for both still and animated images. OOM is equipped with a robust model manipulation environment featuring a full screen viewing window, a menu-oriented user interface, and an interpolative Animation Editor. It provides three display modes: solid, wire frame, and simple, that allow the user to trade off visual authenticity for update speed. In the solid mode, each model is drawn based on the shading characteristics assigned to it when it was built. All of the shading characteristics supported by SSM are recognized and properly rendered in this mode. If increasing model complexity impedes the operation of OOM in this mode, then wireframe and simple modes are available. These provide substantially faster screen updates than solid mode. The creation and placement of cameras and light sources is under complete control of the user. One light source is provided in the default element set. It is modeled as a direct light source providing a type of lighting analogous to that provided by the Sun. OOM can accommodate as many light sources as the memory of the host computer permits. Animation is created in OOM using a technique called key frame interpolation. First, various program functions are used to load models, load or create light sources and cameras, and specify initial positions for each element. When these steps are completed, the Interp function is used to create an animation sequence for each element to be animated. An animation sequence consists of a user-defined number of frames (screen images) with some subset of those being defined as key frames. 
The motion of the element between key frames is interpolated automatically by the software. Key frames thus act as transition points in the motion of an element. This saves the user from having to individually define element data at each frame of a sequence. Animation frames and still images can be output to videotape recorders, film recorders, color printers, and disk files. OOM is written in C-language for implementation on SGI IRIS 4D series workstations running the IRIX operating system. A minimum of 8Mb of RAM is recommended for this program. The standard distribution medium for OOM is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. OOM is also offered as a bundle with a related program, SSM (Solid Surface Modeler). Please see the abstract for SSM/OOM (COS-10047) for information about the bundled package. OOM was released in 1993.
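
    The key-frame technique described above can be reduced to a short sketch: element positions are stored only at key frames and interpolated in between. OOM's actual interpolation scheme may differ; linear interpolation below shows only the basic idea.

```python
# Minimal key-frame interpolation sketch (linear). Key frames act as
# transition points; in-between frames are computed, not stored.

def interpolate_position(keyframes, frame):
    """keyframes: sorted list of (frame_number, position) pairs."""
    for (f0, p0), (f1, p1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)   # fraction of the way to the next key
            return p0 + t * (p1 - p0)
    raise ValueError("frame outside key-frame range")

keys = [(0, 0.0), (10, 5.0), (30, 5.0)]    # move, then hold still after frame 10
p = interpolate_position(keys, 5)          # halfway between the first two keys
```

Production animation systems usually replace the linear ramp with splines for smooth velocity, but the key-frame bookkeeping is the same.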

  11. High-throughput microfluidic line scan imaging for cytological characterization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Powless, Amy J.; Majid, Aneeka A.; Claycomb, Adair; Fritsch, Ingrid; Balachandran, Kartik; Muldoon, Timothy J.

    2015-03-01

    Imaging cells in a microfluidic chamber with an area scan camera is difficult due to motion blur and data loss during frame readout causing discontinuity of data acquisition as cells move at relatively high speeds through the chamber. We have developed a method to continuously acquire high-resolution images of cells in motion through a microfluidics chamber using a high-speed line scan camera. The sensor acquires images in a line-by-line fashion in order to continuously image moving objects without motion blur. The optical setup comprises an epi-illuminated microscope with a 40X oil immersion, 1.4 NA objective and a 150 mm tube lens focused on a microfluidic channel. Samples containing suspended cells fluorescently stained with 0.01% (w/v) proflavine in saline are introduced into the microfluidics chamber via a syringe pump; illumination is provided by a blue LED (455 nm). Images were taken of samples at the focal plane using an ELiiXA+ 8k/4k monochrome line-scan camera at a line rate of up to 40 kHz. The system's line rate and fluid velocity are tightly controlled to reduce image distortion and are validated using fluorescent microspheres. Image acquisition was controlled via MATLAB's Image Acquisition toolbox. Data sets comprise discrete images of every detectable cell which may be subsequently mined for morphological statistics and definable features by a custom texture analysis algorithm. This high-throughput screening method, comparable to cell counting by flow cytometry, provided efficient examination including counting, classification, and differentiation of saliva, blood, and cultured human cancer cells.
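
    The line-rate/velocity matching condition described above amounts to advancing the sample by exactly one object-plane pixel per line. A back-of-envelope sketch (the 40X magnification matches the abstract; the pixel pitch and flow speed are illustrative assumptions):

```python
# Back-of-envelope line-rate check for distortion-free line-scan imaging:
# the object should move one projected pixel per line period. The pixel
# pitch and flow speed below are assumed values, not from the paper.

def required_line_rate(object_speed_ms, pixel_pitch_m, magnification):
    """Lines per second so one line spans one object-plane pixel."""
    object_plane_pixel = pixel_pitch_m / magnification   # pixel size at sample
    return object_speed_ms / object_plane_pixel

# 7 um sensor pixels at 40X -> 0.175 um at the sample; 5 mm/s flow:
rate = required_line_rate(5e-3, 7e-6, 40)
```

If the actual line rate is higher than this value the cells appear stretched along the flow axis, and if lower, compressed, which is why the abstract emphasizes tight control of both quantities.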

  12. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology, and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. For measuring camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288), which allow precise measurement of shot and dark temporal noise but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and videocameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC), and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
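
    The general two-frame idea underlying such methods (not the ASNT segmentation itself, which is not reproduced here) can be sketched with a simulation: subtracting two frames of the same scene cancels the fixed pattern, and the temporal noise standard deviation is the difference-image standard deviation divided by √2.

```python
import numpy as np

# Two-frame temporal-noise estimation on simulated data. The fixed spatial
# pattern ('scene') cancels in the difference; the temporal noise adds in
# quadrature, so std(diff) = sigma * sqrt(2). Values are illustrative.

rng = np.random.default_rng(0)
scene = rng.uniform(100, 200, size=(256, 256))      # static pattern component
frame1 = scene + rng.normal(0, 3.0, scene.shape)    # temporal noise, sigma = 3
frame2 = scene + rng.normal(0, 3.0, scene.shape)

diff = frame1 - frame2
sigma_temporal = np.std(diff) / np.sqrt(2)          # recovers sigma ~ 3
```

Note that a single-frame standard deviation would be dominated by the scene pattern; the differencing step is what isolates the temporal component.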

  13. Laser-induced Microparticle Impact Experiments on Soft Materials

    NASA Astrophysics Data System (ADS)

    Kooi, Steven; Veysset, David; Maznev, Alexei; Yang, Yun Jung; Olsen, Bradley; Nelson, Keith

    High-velocity impact testing is used to study fundamental aspects of materials behavior under high strain rates as well as in applications ranging from armor testing to the development of novel drug delivery platforms. In this work, we study high-velocity impact of micron-size projectiles on soft viscoelastic materials including synthetic hydrogels and gelatin samples. In an all-optical laser-induced projectile impact test (LIPIT), a monolayer of microparticles is placed on a transparent substrate coated with a laser-absorbing polymer layer. Ablation of a laser-irradiated polymer region accelerates the microparticles, which are ejected from the launching pad into free space, reaching controllable speeds up to 1.5 km/s depending on the laser pulse energy and particle characteristics. The particles are monitored while in free space and after impact on the target surface with an ultrahigh-speed multi-frame camera that can record up to 16 images with a time resolution per frame as short as 3 ns. We present images and movies capturing individual particle impact and penetration in gels, and discuss the observed dynamics in the case of high Reynolds and Weber numbers. The results can provide direct input for modeling of high-velocity impact responses and high-strain-rate deformation in gels and other soft materials.

  14. Real-time dynamics of high-velocity micro-particle impact

    NASA Astrophysics Data System (ADS)

    Veysset, David; Hsieh, Alex; Kooi, Steve; Maznev, Alex A.; Tang, Shengchang; Olsen, Bradley D.; Nelson, Keith A.

    High-velocity micro-particle impact is important for many areas of science and technology, from space exploration to the development of novel drug delivery platforms. We present real-time observations of supersonic micro-particle impacts using multi-frame imaging. In an all optical laser-induced projectile impact test, a monolayer of micro-particles is placed on a transparent substrate coated with a laser absorbing polymer layer. Ablation of a laser-irradiated polymer region accelerates the micro-particles into free space with speeds up to 1.0 km/s. The particles are monitored during the impact on the target with an ultrahigh-speed multi-frame camera that can record up to 16 images with time resolution as short as 3 ns. In particular, we investigated the high-velocity impact deformation response of poly(urethane urea) (PUU) elastomers to further the fundamental understanding of the molecular influence on dynamical behaviors of PUUs. We show the dynamic-stiffening response of the PUUs and demonstrate the significance of segmental dynamics in the response. We also present movies capturing individual particle impact and penetration in gels, and discuss the observed dynamics. The results will provide an impetus for modeling high-velocity microscale impact responses and high strain rate deformation in polymers, gels, and other materials.

  15. Motile behaviour of the free-living planktonic ciliate Zoothamnium pelagicum (Ciliophora, Peritrichia).

    PubMed

    Gómez, Fernando

    2017-06-01

    Zoothamnium pelagicum is the only free-floating species among ∼1000 peritrich ciliates that develops its complete life cycle in the open ocean. In the NW Mediterranean Sea, Z. pelagicum was usually associated with ectobiotic bacteria, while in the South Atlantic Ocean it was sometimes fouled by the diatom Licmophora. Each colony constituted a radial branch that joined at its base with other colonies to form a lens-shaped pseudocolony of up to 400 zooids. The cilia beat slowly, propelling the expanded pseudocolony in the direction of the concave face. Contraction was triggered by external stimuli (threat) or occurred spontaneously. Frame-by-frame analyses of high-speed camera sequences revealed that during contraction the pseudocolony reduced its diameter by 70-75% in 3-3.2 ms, with peak velocity up to 350 mm/s. The contraction induced a forward jump of 1-2 mm that attained a peak speed of 110 mm/s (∼250 pseudocolony lengths/s) in 5 ms after onset. This medusa-like locomotion at low Reynolds numbers allowed the pseudocolony to exploit new patches of food resources, as well as to escape from predators. Zoothamnium pelagicum has been able to proliferate in the oligotrophic open ocean, while its sessile counterparts are restricted to eutrophic environments. Copyright © 2017 Elsevier GmbH. All rights reserved.
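
    The frame-by-frame kinematics used in such analyses reduce to finite differences on a position track sampled at the camera's frame rate. A sketch with a synthetic track (the positions and frame rate are illustrative, not data from the paper):

```python
# Frame-by-frame speed estimation from a high-speed position track.
# The track and frame rate below are synthetic illustration values.

def speeds_mm_per_s(positions_mm, fps):
    """Finite-difference speed between consecutive frames."""
    dt = 1.0 / fps
    return [(b - a) / dt for a, b in zip(positions_mm, positions_mm[1:])]

track = [0.00, 0.05, 0.16, 0.30, 0.40]   # mm, one sample per frame at 2000 fps
v = speeds_mm_per_s(track, 2000)
peak = max(v)                            # peak inter-frame speed in mm/s
```

The temporal resolution of the speed estimate is one frame interval, which is why millisecond-scale contractions require frame rates in the kilohertz range.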

  16. Optical observations of electrical activity in cloud discharges

    NASA Astrophysics Data System (ADS)

    Vayanganie, S. P. A.; Fernando, M.; Sonnadara, U.; Cooray, V.; Perera, C.

    2018-07-01

    The temporal variation of the luminosity of seven natural cloud-to-cloud lightning channels was studied, and the results are presented. The channels were recorded using a high-speed video camera at 5000 fps (frames per second) with a pixel resolution of 512 × 512 at three locations in Sri Lanka, in the tropics. The luminosity variation of each channel with time was obtained by analyzing the image sequences. Recorded video frames, together with the luminosity variation, were studied to understand the cloud discharge process. Image analysis techniques were also used to characterize the channels. Cloud flashes show more luminosity variability than ground flashes; most of the time a cloud flash starts with a leader that does not have a stepping process. The channel width and the standard deviation of the intensity variation across the channel were obtained for each cloud flash. The brightness variation across the channel follows a Gaussian distribution. The average duration of cloud flashes that start with a non-stepped leader was 180.83 ms. The identified characteristics were matched with existing models to understand the cloud flash process. This study further confirms that cloud discharges are not confined to a single process; the observations show that a cloud flash is a basic lightning discharge that transfers charge between two charge centers without using one specific mechanism.

  17. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.

  18. Reducing road traffic injuries: effectiveness of speed cameras in an urban setting.

    PubMed

    Pérez, Katherine; Marí-Dell'Olmo, Marc; Tobias, Aurelio; Borrell, Carme

    2007-09-01

    We assessed the effectiveness of speed cameras on Barcelona's beltway in reducing the numbers of road collisions and injuries and the number of vehicles involved in collisions. We designed a time-series study with a comparison group to assess the effects of the speed cameras. The "intervention group" was the beltway, and the comparison group consisted of arterial roads on which no fixed speed cameras had been installed. The outcome measures were number of road collisions, number of people injured, and number of vehicles involved in collisions. We fit the data to Poisson regression models that were adjusted according to trends and seasonality. The relative risk (RR) of a road collision occurring on the beltway after (vs before) installation of speed cameras was 0.73 (95% confidence interval [CI]=0.63, 0.85). This protective effect was greater during weekend periods. No differences were observed for arterial roads (RR=0.99; 95% CI=0.90, 1.10). Attributable fraction estimates for the 2 years of the study intervention showed 364 collisions prevented, 507 fewer people injured, and 789 fewer vehicles involved in collisions. Speed cameras installed in an urban setting are effective in reducing the numbers of road collisions and, consequently, the numbers of injured people and vehicles involved in collisions.
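
    As a simplified illustration of the rate-ratio arithmetic involved (the study itself fit Poisson regression models adjusted for trend and seasonality, which this sketch does not reproduce; the counts below are invented):

```python
import math

# Rate ratio with a 95% CI from the standard log-normal approximation.
# This is the generic epidemiological calculation, not the study's
# seasonally adjusted Poisson regression; the counts are invented.

def rate_ratio_ci(events_after, time_after, events_before, time_before):
    """Rate ratio (after vs before) with an approximate 95% CI."""
    rr = (events_after / time_after) / (events_before / time_before)
    se = math.sqrt(1 / events_after + 1 / events_before)   # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# e.g. collision counts over equal 24-month windows before and after:
rr, lo, hi = rate_ratio_ci(730, 24, 1000, 24)
# a CI entirely below 1 is consistent with a protective effect
```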

  19. High-speed mid-infrared hyperspectral imaging using quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Kelley, David B.; Goyal, Anish K.; Zhu, Ninghui; Wood, Derek A.; Myers, Travis R.; Kotidis, Petros; Murphy, Cara; Georgan, Chelsea; Raz, Gil; Maulini, Richard; Müller, Antoine

    2017-05-01

    We report on a standoff chemical detection system using widely tunable external-cavity quantum cascade lasers (ECQCLs) to illuminate target surfaces in the mid-infrared (λ = 7.4 - 10.5 μm). Hyperspectral images (hypercubes) are acquired by synchronously operating the ECQCLs with a LN2-cooled HgCdTe camera. The use of rapidly tunable lasers and a high-frame-rate camera enables the capture of hypercubes with 128 x 128 pixels and >100 wavelengths in <0.1 s. Furthermore, raster scanning of the laser illumination allowed imaging of a 100-cm2 area at 5-m standoff. Raw hypercubes are post-processed to generate a hypercube that represents the surface reflectance relative to that of a diffuse reflectance standard. Results are shown for liquids (e.g., silicone oil) and solid particles (e.g., caffeine, acetaminophen) on a variety of surfaces (e.g., aluminum, plastic, glass). Signature spectra are obtained for particulate loadings of RDX on glass of <1 μg/cm2.
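
    The reflectance normalization step described above is, at its core, a dark-corrected ratio against the diffuse-standard hypercube. A minimal sketch with synthetic arrays (the shapes and values are illustrative, not the system's actual dimensions):

```python
import numpy as np

# Dark-corrected relative reflectance: divide the scene hypercube by the
# hypercube of a diffuse reflectance standard after dark subtraction.
# Shapes (x, y, wavelength) and values are illustrative assumptions.

dark = np.full((4, 4, 3), 10.0)                  # dark (laser-off) frame
raw = np.full((4, 4, 3), 55.0)                   # scene measurement
reference = np.full((4, 4, 3), 100.0)            # diffuse standard measurement

reflectance = (raw - dark) / (reference - dark)  # relative reflectance per band
```

The division is performed per pixel and per wavelength, so laser power variation across the tuning range cancels out along with the illumination geometry.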

  20. Simultaneous planar measurements of soot structure and velocity fields in a turbulent lifted jet flame at 3 kHz

    NASA Astrophysics Data System (ADS)

    Köhler, M.; Boxx, I.; Geigle, K. P.; Meier, W.

    2011-05-01

    We describe a newly developed combustion diagnostic for the simultaneous planar imaging of soot structure and velocity fields in a highly sooting, lifted turbulent jet flame at 3000 frames per second, or two orders of magnitude faster than "conventional" laser imaging systems. This diagnostic uses short-pulse-duration (8 ns), frequency-doubled, diode-pumped solid state (DPSS) lasers to excite laser-induced incandescence (LII) at 3 kHz, which is then imaged onto a high-framerate CMOS camera. A second (dual-cavity) DPSS laser and CMOS camera form the basis of a particle image velocimetry (PIV) system used to acquire the 2-component velocity field in the flame. The LII response curve (measured in a laminar propane diffusion flame) is presented and the combined diagnostics are then applied in a heavily sooting lifted turbulent jet flame. The potential challenges and rewards of applying this combined imaging technique at high speeds are discussed.

  1. Infrared engineering for the advancement of science: A UK perspective

    NASA Astrophysics Data System (ADS)

    Baker, Ian M.

    2017-02-01

    Leonardo MW (formerly Selex ES) has been developing infrared sensors and cameras for over 62 years at two main sites, Southampton and Basildon. Funding mainly from the UK MOD has seen the technology progress from single-element PbSe sensors to advanced, high-definition HgCdTe cameras, widely deployed in many fields today. However, in the last 10 years the major challenges and research funding have come from projects within the scientific sphere, particularly astronomy and space. Low photon flux, high-resolution spectroscopy, and fast frame rates are the motivation to drive the sensitivity of infrared detectors to the single-photon level. These detectors make use of almost noiseless avalanche gain in HgCdTe to achieve the required sensitivity and speed of response. Metal Organic Vapour Phase Epitaxy (MOVPE) growth on low-cost GaAs substrates provides the capability for crucial bandgap engineering to suppress breakdown currents and allow high avalanche gain even in very low background conditions. This paper describes the progress so far and provides a glimpse of the future.

  2. Establishing a Ballistic Test Methodology for Documenting the Containment Capability of Small Gas Turbine Engine Compressors

    NASA Technical Reports Server (NTRS)

    Heady, Joel; Pereira, J. Michael; Ruggeri, Charles R.; Bobula, George A.

    2009-01-01

    A test methodology currently employed for large engines was extended to quantify the ballistic containment capability of a small turboshaft engine compressor case. The approach involved impacting the inside of a compressor case with a compressor blade. A gas gun propelled the blade into the case at energy levels representative of failed compressor blades. The test target was a full compressor case. The aft flange was rigidly attached to a test stand and the forward flange was attached to a main frame to provide accurate boundary conditions. A window machined in the case allowed the projectile to pass through and impact the case wall from the inside with the orientation, direction, and speed that would occur in a blade-out event. High-speed digital video cameras provided accurate velocity and orientation data. Calibrated cameras and digital image correlation software generated full-field displacement and strain information on the back side of the impact point.

  3. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with minimal decoding, leading to substantial gains in processing speeds. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
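    The decision-tree step can be sketched as a tiny hand-built classifier. The feature names and thresholds below are illustrative assumptions, not the paper's learned tree, which is trained on MPEG macroblock, motion and bit-rate statistics.

```python
# Hypothetical sketch of a compressed-domain sports classifier.
# Features and thresholds are invented for illustration only.

def is_sports_clip(replay_count, text_fraction, motion_magnitude):
    """Classify one clip from per-clip feature statistics.

    replay_count     -- detected slow-motion replay segments
    text_fraction    -- fraction of frames containing overlaid scene text
    motion_magnitude -- mean camera/object motion from MPEG motion vectors
    """
    if replay_count >= 1:              # replays are a strong sports cue
        return True
    if text_fraction > 0.5:            # persistent score overlay...
        return motion_magnitude > 2.0  # ...plus sustained camera motion
    return False

print(is_sports_clip(2, 0.1, 0.5))   # True  (replay detected)
print(is_sports_clip(0, 0.7, 3.1))   # True  (overlay text + high motion)
print(is_sports_clip(0, 0.2, 0.4))   # False
```

    In practice such a tree would be learned from labeled clips rather than written by hand, but the decision structure is the same.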

  4. 3D imaging and wavefront sensing with a plenoptic objective

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to mitigate the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we will present our own implementations related to the aforementioned aspects, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive-index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Sodium artificial laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new link between the wave optics and computer vision fields, as many authors have advocated.

  5. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips.

    PubMed

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L; Wang, Qianxi X; Leppinen, David M; Walmsley, A Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However, it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power, while the area and width of the cavitation cloud vary for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation.

  6. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips

    PubMed Central

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L.; Wang, Qianxi X.; Leppinen, David M.; Walmsley, A. Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However, it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power, while the area and width of the cavitation cloud vary for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation. PMID:26934340

  7. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications, from industrial to entertainment, needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in vehicle movement, sport biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
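    The photogrammetric core of such a system can be illustrated with the simplest rectified two-camera case, where a point's depth follows from its disparity as Z = f·B/d. The focal length and baseline below are made-up numbers, not parameters of the Mosca system.

```python
# Illustrative stereo triangulation sketch: two synchronized, rectified
# cameras; depth from disparity, Z = f * B / d. Numbers are assumptions.

F_PX = 1200.0       # focal length in pixels (assumed)
BASELINE_M = 0.5    # camera separation in metres (assumed)

def point_3d(xl, xr, y, f=F_PX, b=BASELINE_M):
    """Left/right image x-coordinates (pixels) -> (X, Y, Z) in metres."""
    d = xl - xr                  # disparity in pixels
    z = f * b / d
    return (xl * z / f, y * z / f, z)

x, y, z = point_3d(xl=130.0, xr=100.0, y=40.0)
print(round(z, 2), "m")   # 1200 * 0.5 / 30 = 20.0 m
```

    A real multi-camera system solves a general (non-rectified) triangulation from calibrated projection matrices, but the disparity-to-depth proportionality is the underlying principle.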

  8. Some Effects of Injection Advance Angle, Engine-Jacket Temperature, and Speed on Combustion in a Compression-Ignition Engine

    NASA Technical Reports Server (NTRS)

    Rothrock, A M; Waldron, C D

    1936-01-01

    An optical indicator and a high-speed motion-picture camera capable of operating at the rate of 2,000 frames per second were used to record simultaneously the pressure development and the flame formation in the combustion chamber of the NACA combustion apparatus. Tests were made at engine speeds of 570 and 1,500 r.p.m. The engine-jacket temperature was varied from 100 degrees to 300 degrees F and the injection advance angle from 13 degrees after top center to 120 degrees before top center. The results show that the course of the combustion is largely controlled by the temperature and pressure of the air in the chamber from the time the fuel is injected until the time at which combustion starts, and by the ignition lag. The conclusion is presented that in a compression-ignition engine with a quiescent combustion chamber the ignition lag should be the longest that can be used without excessive rates of pressure rise; any further shortening of the ignition lag decreased the effectiveness of the combustion in the engine.

  9. Towards real-time remote processing of laparoscopic video

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in delivering therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
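    The timing budget quoted in the abstract can be checked with simple arithmetic:

```python
# Data-rate sanity check using the figures from the abstract:
# ~11.9 MB per frame at 30 fps, against the quoted ~360 MB/s stream.

frame_mb = 11.9
fps = 30

budget_ms = 1000.0 / fps          # round-trip deadline per frame
stream_mb_s = frame_mb * fps      # sustained one-way bandwidth required

print(f"per-frame budget: {budget_ms:.1f} ms")    # 33.3 ms
print(f"stream rate: {stream_mb_s:.0f} MB/s")     # 357 MB/s (~360 quoted)
```

    Any network plus processing pipeline whose end-to-end latency exceeds that 33 ms budget falls behind the camera and must drop frames.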

  10. Versatile quantitative phase imaging system applied to high-speed, low noise and multimodal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Federici, Antoine; Aknoun, Sherazade; Savatier, Julien; Wattellier, Benoit F.

    2017-02-01

    Quadriwave lateral shearing interferometry (QWLSI) is a well-established quantitative phase imaging (QPI) technique based on the analysis of the interference patterns of four diffraction orders created by an optical grating set in front of an array detector [1]. As a QPI modality, it is a non-invasive imaging technique which allows measurement of the optical path difference (OPD) of semi-transparent samples. We present a system enabling QWLSI with high-performance sCMOS cameras [2] and apply it to high-speed, low-noise and multimodal imaging. This modified QWLSI system contains a versatile optomechanical device which images the optical grating near the detector plane. Such a device can be coupled with any kind of camera by varying its magnification. In this paper, we study the use of an Andor Zyla5.5 sCMOS camera with our modified QWLSI system. We will present high-speed live-cell imaging, at frame rates up to 200 Hz, in order to follow fast intracellular motions while measuring the quantitative phase information. The structural and density information extracted from the OPD signal is complementary to the specific and localized fluorescence signal [2]. In addition, QPI detects cells even when the fluorophore is not expressed, which is very useful for following protein expression over time. Combining the 10 µm spatial pixel resolution of our modified QWLSI with the high sensitivity of the Zyla5.5, which enables high-quality fluorescence imaging, we have carried out multimodal imaging revealing fine cell structures, such as actin filaments, merged with the morphological information of the phase. References: [1] P. Bon, G. Maucort, B. Wattellier, and S. Monneret, "Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells," Opt. Express, vol. 17, pp. 13080-13094, 2009. [2] P. Bon, S. Lécart, E. Fort and S. Lévêque-Fort, "Fast label-free cytoskeletal network imaging in living mammalian cells," Biophysical Journal, 106(8), pp. 1588-1595, 2014.

  11. A Real-Time Method to Estimate Speed of Object Based on Object Detection and Optical Flow Calculation

    NASA Astrophysics Data System (ADS)

    Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan

    2018-04-01

    In recent years, Convolutional Neural Networks (CNNs) have been widely used in computer vision and have made great progress in tasks such as object detection and classification. Beyond that, combining CNNs, that is, running multiple CNN frameworks synchronously and sharing their output information, can extract useful information that none of them can provide alone. Here we introduce a method to estimate the speed of objects in real time by combining two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, location and type, while FlowNet provides the optical flow of the whole image. On the one hand, object size and location select the object's region of the optical-flow image, from which the average optical flow of each object is calculated. On the other hand, object type and size determine the relationship between optical flow and true speed by means of optical theory and prior knowledge. With these two key pieces of information, the speed of each object can be estimated. The method estimates multiple objects at real-time speed using only an ordinary camera, even while the camera itself is moving, with errors acceptable in most application fields such as driverless cars or robot vision.
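    The fusion step can be sketched as follows: average the optical flow inside a detection box, then convert pixel motion to metres per second using a size prior for the detected class. The size prior and the simple pinhole scaling rule here are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative fusion of detection boxes with dense optical flow.
# The class size prior (1.7 m pedestrian) and scaling are assumptions.

def mean_flow_in_box(flow, box):
    """flow: dict (x, y) -> (dx, dy) pixel displacement; box: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    vals = [v for (x, y), v in flow.items() if x0 <= x < x1 and y0 <= y < y1]
    return (sum(dx for dx, _ in vals) / len(vals),
            sum(dy for _, dy in vals) / len(vals))

def speed_mps(mean_flow_px, box_height_px, true_height_m, fps):
    """Metres-per-pixel at the object comes from the class size prior."""
    m_per_px = true_height_m / box_height_px
    dx, dy = mean_flow_px
    return (dx ** 2 + dy ** 2) ** 0.5 * m_per_px * fps

# Toy flow field: everything moves 2 px/frame to the right.
flow = {(x, y): (2.0, 0.0) for x in range(100) for y in range(100)}
box = (10, 10, 50, 90)                      # e.g. a detected pedestrian
v = speed_mps(mean_flow_in_box(flow, box),
              box_height_px=80, true_height_m=1.7, fps=30)
print(round(v, 3), "m/s")
```

    In the real pipeline the flow comes from FlowNet and the boxes from YOLOv2; masking the flow to the box before averaging keeps background motion out of the estimate.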

  12. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts from a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in flat regions using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
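    Step (ii) above can be illustrated with a much simpler stand-in: the paper smooths the camera path with an l1-optimized objective, while the sketch below uses a plain moving average. It shows the same idea (suppress jitter while keeping intended motion) with far weaker guarantees.

```python
# Moving-average stand-in for camera-path smoothing (not the paper's
# l1-optimized path). The stabilizing warp for frame i is smooth[i] - path[i].

def smooth_path(path, window=5):
    """path: list of (tx, ty) cumulative per-frame camera translations."""
    half = window // 2
    out = []
    for i in range(len(path)):
        seg = path[max(0, i - half):i + half + 1]
        out.append((sum(p[0] for p in seg) / len(seg),
                    sum(p[1] for p in seg) / len(seg)))
    return out

path = [(0, 0), (3, 1), (1, -2), (4, 0), (2, 1)]
smooth = smooth_path(path)
warps = [(sx - x, sy - y) for (sx, sy), (x, y) in zip(smooth, path)]
print(warps[1])   # (-1.0, -1.25)
```

    An l1 objective instead yields piecewise-constant or piecewise-linear paths, which look like deliberate pans rather than the smeared motion a moving average produces.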

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaffney, Kelly

    Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten billionth of a foot with a frame rate of 10 trillion frames per second! SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.

  14. Geometrical calibration television measuring systems with solid state photodetectors

    NASA Astrophysics Data System (ADS)

    Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.

    2000-11-01

    Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, criminalistics. Measurement by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. There are many television measuring systems using expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason, the inexpensive mountingless camera SK1004CP (1/3" format, cost up to $40) and the Aver2000 frame grabber were used in the experiments.

  15. Earth Observation taken during the 41G mission

    NASA Image and Video Library

    2009-06-25

    41G-120-056 (October 1984) --- Parts of Israel, Lebanon, Palestine, Syria and Jordan and part of the Mediterranean Sea are seen in this nearly-vertical, large format camera's view from the Earth-orbiting Space Shuttle Challenger. The Sea of Galilee is at center frame and the Dead Sea at bottom center. The frame's center coordinates are 32.5 degrees north latitude and 35.5 degrees east longitude. A Linhof camera, using 4" x 5" film, was used to expose the frame through one of the windows on Challenger's aft flight deck.

  16. Communication: Time- and space-sliced velocity map electron imaging

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Fan, Lin; Winney, Alexander H.; Li, Wen

    2014-12-01

    We develop a new method to achieve slice electron imaging using a conventional velocity map imaging apparatus with two additional components: a fast-frame complementary metal-oxide-semiconductor (CMOS) camera and a high-speed digitizer. The setup was previously shown to be capable of 3D detection and coincidence measurements of ions. Here, we show that when this method is applied to electron imaging, a time slice of 32 ps and a spatial slice of less than 1 mm thickness can be achieved. Each slice directly extracts the 3D velocity distributions of electrons and provides electron velocity distributions that are impossible or difficult to obtain with a standard 2D imaging electron detector.

  17. Differential high-speed digital micromirror device based fluorescence speckle confocal microscopy.

    PubMed

    Jiang, Shihong; Walker, John

    2010-01-20

    We report a differential fluorescence speckle confocal microscope that acquires an image in a fraction of a second by exploiting the very high frame rate of modern digital micromirror devices (DMDs). The DMD projects a sequence of predefined binary speckle patterns to the sample and modulates the intensity of the returning fluorescent light simultaneously. The fluorescent light reflecting from the DMD's "on" and "off" pixels is modulated by correlated speckle and anticorrelated speckle, respectively, to form two images on two CCD cameras in parallel. The sum of the two images recovers a widefield image, but their difference gives a near-confocal image in real time. Experimental results for both low and high numerical apertures are shown.
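    The sum/difference step lends itself to a one-line numerical illustration. The toy 1-D "images" below are invented values, not measured data: the in-focus signal survives the subtraction while the out-of-focus background, which is split equally between the correlated and anticorrelated channels, cancels.

```python
# Sum/difference of the two CCD images in differential speckle confocal
# imaging: sum -> widefield, difference -> near-confocal. Toy 1-D values.

i_on  = [9.0, 5.0, 2.0, 1.0]   # correlated-speckle channel ("on" pixels)
i_off = [1.0, 3.0, 2.0, 1.0]   # anticorrelated-speckle channel ("off" pixels)

widefield = [a + b for a, b in zip(i_on, i_off)]
confocal  = [a - b for a, b in zip(i_on, i_off)]  # out-of-focus light cancels

print(widefield)   # [10.0, 8.0, 4.0, 2.0]
print(confocal)    # [8.0, 2.0, 0.0, 0.0]
```

    Because both channels are captured in the same exposure on parallel cameras, the cancellation is robust to sample motion between frames.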

  18. Unsteady motion of laser ablation plume by vortex induced by the expansion of curved shock wave

    NASA Astrophysics Data System (ADS)

    Tran, D. T.; Mori, K.

    2017-02-01

    There are a number of industrial applications of laser ablation in a gas atmosphere. When an intense pulsed laser beam is irradiated on a solid surface in a gas atmosphere, the surface material is ablated and expands into the atmosphere. At the same time, a spherical shock wave is launched by the ablation jet, inducing an unsteady flow around the target surface. The ablated materials, acting as luminous tracers, exhibit peculiar unsteady motions depending on the experimental conditions. Using a high-speed video camera (HPV-X2), the unsteady motion of the ablated materials is visualized at frame rates of more than 10⁶ fps and qualitatively characterized.

  19. Optical head tracking for functional magnetic resonance imaging using structured light.

    PubMed

    Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D

    2008-07-01

    An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiducial on the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate the spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 µm for translations and 0.1 deg for rotations.

  20. Analysis of straw row in the image to control the trajectory of the agricultural combine harvester

    NASA Astrophysics Data System (ADS)

    Shkanaev, Aleksandr Yurievich; Polevoy, Dmitry Valerevich; Panchenko, Aleksei Vladimirovich; Krokhina, Darya Alekseevna; Nailevish, Sadekov Rinat

    2018-04-01

    The paper proposes a solution for the automatic operation of a combine harvester along straw rows by means of images from a camera installed in the cab of the harvester. A U-Net is used to recognize straw rows in the image. The edges of a row are approximated in the segmented image by curved lines and then converted into the harvester coordinate system for the automatic operating system. The "new" network architecture and approaches to row approximation have improved the recognition quality to 96% and the frame processing speed to 7.5 fps, respectively. Keywords: Grain harvester,
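    The edge-approximation step can be sketched as a least-squares fit to the row boundary extracted from the segmentation mask. The paper fits curved lines; the sketch below fits a straight line x = a·y + b to keep the illustration dependency-free, and the mask values are invented.

```python
# Fit a line x = a*y + b to the leftmost "straw" pixel of each mask row
# by ordinary least squares (a simplified stand-in for curve fitting).

def fit_row_edge(mask):
    """mask: list of rows of 0/1; returns (a, b) of x = a*y + b."""
    pts = []
    for y, row in enumerate(mask):
        if 1 in row:
            pts.append((y, row.index(1)))       # (row, leftmost column)
    n = len(pts)
    sy = sum(y for y, _ in pts); sx = sum(x for _, x in pts)
    syy = sum(y * y for y, _ in pts); sxy = sum(y * x for y, x in pts)
    a = (n * sxy - sy * sx) / (n * syy - sy * sy)
    return a, (sx - a * sy) / n

mask = [[0, 1, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 1]]        # edge drifts right by one pixel per row
print(fit_row_edge(mask))    # (1.0, 1.0)
```

    The fitted coefficients, once mapped through the camera calibration into the harvester coordinate frame, give the steering controller a target line to follow.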

  1. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  2. Universal ICT Picosecond Camera

    NASA Astrophysics Data System (ADS)

    Lebedev, Vitaly B.; Syrtzev, V. N.; Tolmachyov, A. M.; Feldman, Gregory G.; Chernyshov, N. A.

    1989-06-01

    The paper reports on the design of an ICT camera operating in the mode of linear or three-frame image scan. The camera incorporates two tubes: time-analyzing ICT PIM-107 1 with cathode S-11, and brightness amplifier PMU-2V (gain about 10⁴) for the image shaped by the first tube. The camera is designed on the basis of streak camera AGAT-SF3 2 with almost the same power sources, but substantially modified pulse electronics. Schematically, the design of tube PIM-107 is depicted in the figure. The tube consists of cermet housing 1 and photocathode 2, made in a separate vacuum volume and introduced into the housing by means of a manipulator. In the direct vicinity of the photocathode is an accelerating electrode made of a fine-structure grid. An electrostatic lens formed by focusing electrode 4 and anode diaphragm 5 produces a beam of electrons with a "remote crossover". The authors have suggested this term for an electron beam whose crossover is 40 to 60 mm away from the anode diaphragm plane, which guarantees high sensitivity of scan plates 6 with respect to multiaperture framing diaphragm 7. Beyond every diaphragm aperture there is a pair of deflecting plates 8, shielded from compensation plates 10 by diaphragm 9. The electronic image produced by the photocathode is focused on luminescent screen 11. The tube is controlled with the help of two saw-tooth voltages applied in antiphase across plates 6 and 10. Plates 6 serve for sweeping the electron beam over the surface of diaphragm 7; the beam is either passed toward the screen or blocked by the diaphragm walls. In this manner, three frames are obtained, corresponding to the number of diaphragm apertures. Plates 10 serve for compensating the streak sweep of the image on the screen. To avoid overlapping of frames, plates 8 receive static potentials responsible for shifting the frames on the screen. By changing the potentials applied to plates 8, one can control the spacing between frames and partially or fully overlap them. This control is independent of the frame repetition rate and duration, and only determines the frame positioning on the screen. Since diaphragm 7 is located in the area of the crossover, where the electron trajectories cross, the frame is not decomposed into separate elements during its formation. The image is transferred onto the screen during practically the entire frame duration, increasing the aperture ratio of the tube as compared to that in Ref. 3.

  3. Fast optically sectioned fluorescence HiLo endomicroscopy

    PubMed Central

    Lim, Daryl; Mertz, Jerome

    2012-01-01

    Abstract. We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies. PMID:22463023

  4. Solidification kinetics of a Cu-Zr alloy: ground-based and microgravity experiments

    NASA Astrophysics Data System (ADS)

    Galenko, P. K.; Hanke, R.; Paul, P.; Koch, S.; Rettenmayr, M.; Gegner, J.; Herlach, D. M.; Dreier, W.; Kharanzhevski, E. V.

    2017-04-01

    Experimental and theoretical results obtained in the MULTIPHAS project (ESA, European Space Agency, and DLR, German Aerospace Center) are critically discussed regarding the solidification kinetics of congruently melting and glass-forming Cu50Zr50 alloy samples. The samples are investigated during solidification using a containerless technique in the Electromagnetic Levitation Facility [1]. Applying elaborated methodologies for ground-based and microgravity experimental investigations [2], the kinetics of primary dendritic solidification is quantitatively evaluated. The Electromagnetic Levitator in microgravity (parabolic flights and on board the International Space Station) and the Electrostatic Levitator on ground are employed. The solidification kinetics is determined using a high-speed camera and two evaluation methods: "Frame by Frame" (FFM) and "First Frame - Last Frame" (FLM). In the theoretical interpretation of the solidification experiments, special attention is given to the behavior of the cluster structure in Cu50Zr50 samples with increasing undercooling. Experimental results on solidification kinetics are interpreted using a theoretical model of diffusion-controlled dendrite growth.

  5. Applications and limitations of electron correlation microscopy to study relaxation dynamics in supercooled liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Pei; He, Li; Besser, Matthew F.

    Here, electron correlation microscopy (ECM) is a way to measure structural relaxation times, τ, of liquids with nanometer-scale spatial resolution using the coherent electron scattering equivalent of photon correlation spectroscopy. We have applied ECM with a 3.5 nm diameter probe to Pt57.5Cu14.7Ni5.3P22.5 amorphous nanorods and Pd40Ni40P20 bulk metallic glass (BMG) heated inside the STEM into the supercooled liquid region. These data demonstrate that the ECM technique is limited by the characteristics of the time series, which must be at least 40τ to obtain a well-converged correlation function g2(t), and the time per frame, which must be less than 0.1τ to obtain sufficient sampling. A high-speed direct electron camera enables fast acquisition and affords reliable g2(t) data even with low signal per frame.

  6. Applications and limitations of electron correlation microscopy to study relaxation dynamics in supercooled liquids

    DOE PAGES

    Zhang, Pei; He, Li; Besser, Matthew F.; ...

    2016-09-08

    Here, electron correlation microscopy (ECM) is a way to measure structural relaxation times, τ, of liquids with nanometer-scale spatial resolution using the coherent electron scattering equivalent of photon correlation spectroscopy. We have applied ECM with a 3.5 nm diameter probe to Pt57.5Cu14.7Ni5.3P22.5 amorphous nanorods and Pd40Ni40P20 bulk metallic glass (BMG) heated inside the STEM into the supercooled liquid region. These data demonstrate that the ECM technique is limited by the characteristics of the time series, which must be at least 40τ to obtain a well-converged correlation function g2(t), and the time per frame, which must be less than 0.1τ to obtain sufficient sampling. A high-speed direct electron camera enables fast acquisition and affords reliable g2(t) data even with low signal per frame.
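    The correlation function at the heart of ECM is the normalized intensity autocorrelation g2(τ) = ⟨I(t)I(t+τ)⟩ / ⟨I(t)⟩². The sketch below estimates it from a synthetic correlated intensity trace; the AR(1) toy signal and its parameters are invented for illustration, and the 40τ / 0.1τ limits quoted above describe how long and how finely such a series must be sampled for the estimate to converge.

```python
# Estimate g2(tau) = <I(t) I(t+tau)> / <I(t)>^2 from an intensity series.
import random

def g2(series, tau):
    pairs = [series[i] * series[i + tau] for i in range(len(series) - tau)]
    mean_i = sum(series) / len(series)
    return (sum(pairs) / len(pairs)) / mean_i ** 2

# Toy correlated intensity trace: AR(1) fluctuation around a mean level.
random.seed(0)
x, series = 0.0, []
for _ in range(5000):
    x = 0.95 * x + random.gauss(0.0, 1.0)
    series.append(2.0 + 0.1 * x)

print(g2(series, 0) > g2(series, 200))   # True: correlation decays with lag
```

    Fitting the decay of g2(τ) toward 1 (e.g. with a stretched exponential) is what yields the relaxation time τ in the actual experiments.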

  7. Spectral structure of a polycapillary lens shaped X-ray beam

    NASA Astrophysics Data System (ADS)

    Gogolev, A. S.; Filatov, N. A.; Uglov, S. R.; Hampai, D.; Dabagov, S. B.

    2018-04-01

    Polycapillary X-ray optics are widely used in X-ray analysis techniques, for instance to create a small secondary source or to deliver X-rays to the point of interest with minimum intensity loss [1]. The main characteristics of the analytical devices based on them are the size and divergence of the focused or translated beam. In this work, we used the photon-counting pixel detector ModuPIX to study the parameters of polycapillary-focused X-ray tube radiation, as well as the energy and spatial dependences of the radiation at the focus. We have characterized the high-speed spectral camera ModuPIX, a single Timepix device with a fast parallel readout allowing up to 850 frames per second with 256 × 256 pixels and a 55 μm pitch, the rate being defined by the frame frequency. By means of a silicon monochromator, the energy response function was measured in clustering mode by an energy scan over the total X-ray tube spectrum.

  8. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many kinds of equipment and applications. If such a system is tested with the traditional infrared camera test system and visible CCD test system separately, two rounds of installation and alignment are needed. The large-aperture test system for infrared and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, which also improves the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, so it should find a good market.
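
    The multiple-frame averaging step can be illustrated with a short, hypothetical sketch: averaging N frames of a static target suppresses zero-mean random noise by roughly √N. All numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)           # noiseless target image
sigma, n_frames = 5.0, 16
frames = truth + rng.normal(0.0, sigma, size=(n_frames, 64, 64))

avg = frames.mean(axis=0)                  # multiple-frame average
residual = np.std(avg - truth)             # ~ sigma / sqrt(n_frames) = 1.25
```

    With 16 frames the residual noise falls to about a quarter of the single-frame value, which is why the averaging step precedes the test measurements.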

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Michael; Nemati, Bijan; Zhai, Chengxing

    We present an approach that significantly increases the sensitivity for finding and tracking small and fast near-Earth asteroids (NEAs). This approach relies on the combined use of a new generation of high-speed cameras, which allow short, high frame-rate exposures of moving objects, effectively 'freezing' their motion, and a computationally enhanced implementation of the 'shift-and-add' data processing technique that helps to improve the signal-to-noise ratio (SNR) for detection of NEAs. The SNR of a single short exposure of a dim NEA is insufficient to detect it in one frame, but by computationally searching for an appropriate velocity vector, shifting successive frames relative to each other and then co-adding the shifted frames in post-processing, we synthetically create a long-exposure image as if the telescope were tracking the object. This approach, which we call 'synthetic tracking,' enhances the familiar shift-and-add technique with the ability to do a wide blind search, detect, and track dim and fast-moving NEAs in near real time. We also discuss how synthetic tracking improves the astrometry of fast-moving NEAs. We apply this technique to observations of two known asteroids conducted on the Palomar 200 inch telescope and demonstrate improved SNR and a 10-fold improvement in astrometric precision over the traditional long-exposure approach. In the past 5 yr, about 150 NEAs with absolute magnitudes H = 28 (∼10 m in size) or fainter have been discovered. With an upgraded version of our camera and a field of view of (28 arcmin)² on the Palomar 200 inch telescope, synthetic tracking could allow detecting up to 180 such objects per night, including very small NEAs with sizes down to 7 m.
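
    The shift-and-add search described above can be sketched as a toy, integer-pixel version. The function name, the velocity grid, and the peak-brightness detection metric are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def synthetic_track(frames, velocities):
    """For each candidate velocity (vx, vy) in pixels/frame, shift each
    frame to undo the motion, co-add, and keep the stack with the
    brightest peak -- a long exposure as if tracking the object."""
    best_peak, best_v, best_stack = -np.inf, None, None
    for vx, vy in velocities:
        stack = np.zeros(frames[0].shape)
        for k, f in enumerate(frames):
            # undo the displacement k*(vx, vy); integer shifts for simplicity
            stack += np.roll(f, shift=(-round(k * vy), -round(k * vx)),
                             axis=(0, 1))
        if stack.max() > best_peak:
            best_peak, best_v, best_stack = stack.max(), (vx, vy), stack
    return best_v, best_stack
```

    A real implementation would use sub-pixel shifts and an SNR metric rather than the raw peak, and would prune the velocity grid for speed, but the blind search over velocity vectors is the essence of the technique.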

  10. Fracturing mechanics before valve-in-valve therapy of small aortic bioprosthetic heart valves.

    PubMed

    Johansen, Peter; Engholt, Henrik; Tang, Mariann; Nybo, Rasmus F; Rasmussen, Per D; Nielsen-Kudsk, Jens Erik

    2017-10-13

    Patients with degraded bioprosthetic heart valves (BHV) who are not candidates for valve replacement may benefit from transcatheter valve-in-valve (VIV) therapy. However, in smaller-sized surgical BHV the resultant orifice may become too narrow. To overcome this, the valve frame can be fractured by a high-pressure balloon prior to VIV. However, knowledge of fracture pressures and mechanics is a prerequisite. The aim of this study was to identify the fracture pressures needed in BHV and to describe the fracture mechanics. Commonly used BHV of small sizes were mounted on a high-pressure balloon situated in a biplane fluoroscopic system with a high-speed camera. The instant of fracture was captured along with the balloon pressure. The valves were inspected for material protrusion and later dissected for investigation and description of the fracture zone. The valves with a polymer frame fractured at lower pressures (8-10 atm) than those with a metal stent (19-26 atm). None of the fractured valves had protruding elements. VIV procedures in small-sized BHV may be performed after prior fracture of the valve frame by high-pressure balloon dilatation. This study provides tentative guidelines for expected balloon sizes and pressures for valve fracturing.

  11. Spectral characterisation and noise performance of Vanilla—an active pixel sensor

    NASA Astrophysics Data System (ADS)

    Blue, Andrew; Bates, R.; Bohndiek, S. E.; Clark, A.; Arvanitis, Costas D.; Greenshaw, T.; Laing, A.; Maneuski, D.; Turchetta, R.; O'Shea, V.

    2008-06-01

    This work reports on the characterisation of a new active pixel sensor, Vanilla, which comprises 512×512 pixels (25 μm pixels). The sensor has a 12-bit digital output in full-frame mode and can also be read out in analogue mode, in which a fully programmable region-of-interest (ROI) readout is available. In full frame, the sensor can operate at more than 100 frames per second (fps), while in ROI mode the speed depends on the size, shape and number of ROIs; for example, an ROI of 6×6 pixels can be read at 20,000 fps in analogue mode. Photon transfer curve (PTC) measurements allowed calculation of the read noise, shot noise, full-well capacity and camera gain constant of the sensor. Spectral response measurements detailed the quantum efficiency (QE) of the detector through the UV and visible regions. Analysis of the ROI readout mode was also performed. These measurements suggest that the Vanilla APS (active pixel sensor) will be suitable for a wide range of applications, including particle physics and medical imaging.
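
    The photon transfer curve analysis mentioned above can be illustrated with a small simulation using hypothetical numbers: because shot noise is Poissonian, the slope of signal variance versus mean signal gives the camera gain constant, via var(DN) ≈ mean(DN)/K + read-noise².

```python
import numpy as np

rng = np.random.default_rng(1)
K = 2.0                  # assumed conversion gain, e-/DN
read_noise = 1.5         # assumed read noise in DN
means, variances = [], []
for electrons in (100, 400, 1600, 6400):     # illumination levels
    dn = rng.poisson(electrons, 200_000) / K  # shot noise, converted to DN
    dn = dn + rng.normal(0.0, read_noise, dn.size)
    means.append(dn.mean())
    variances.append(dn.var())

# slope of the PTC (variance vs. mean) is 1/K
slope, offset = np.polyfit(means, variances, 1)
gain_estimate = 1.0 / slope
```

    In a real PTC measurement the mean/variance pairs come from pairs of flat-field frames at increasing exposure; the intercept gives the read noise and the saturation roll-off gives the full-well capacity.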

  12. High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A‑lines per second

    PubMed Central

    An, Lin; Li, Peng; Shen, Tueng T.; Wang, Ruikang

    2011-01-01

    We present a new development of ultrahigh-speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength, employing two high-speed line-scan CMOS cameras, each running at 250 kHz. By precisely controlling the recording and reading time periods of the two cameras, the SDOCT system realizes an imaging speed of 500,000 A-lines per second while maintaining both high axial resolution (~8 μm) and acceptable depth range (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first aims at isotropic dense sampling and fast scanning, enabling 3D imaging within 0.72 s for a region covering 4 × 4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, making it possible to average along both directions so that the signal-to-noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 × 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 seconds for a 3D scan), motion artifacts are inevitable, making it difficult to interpret the 3D data set, particularly as depth-resolved en-face fundus images. To mitigate this difficulty, we propose to use the relatively highly reflecting retinal pigment epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retinal imaging. PMID:22025983
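
    A back-of-envelope timing check (not from the paper) shows how the two interleaved 250 kHz cameras support the quoted figures:

```python
# Two line-scan cameras, alternately recording and reading out,
# give an aggregate A-line rate of 2 x 250 kHz.
aggregate_rate = 2 * 250_000                 # 500,000 A-lines/s

# Protocol 1: 500 B-frames at a 700 Hz B-frame rate
t_protocol1 = 500 / 700                      # ~0.714 s, matching the quoted 0.72 s

# Protocol 2: 1200 x 1200 A-lines at the aggregate rate
t_protocol2 = 1200 * 1200 / aggregate_rate   # 2.88 s of pure acquisition;
# the quoted ~4 s presumably also covers scanner flyback and other overhead.
```
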

  13. High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A‑lines per second.

    PubMed

    An, Lin; Li, Peng; Shen, Tueng T; Wang, Ruikang

    2011-10-01

    We present a new development of ultrahigh-speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength, employing two high-speed line-scan CMOS cameras, each running at 250 kHz. By precisely controlling the recording and reading time periods of the two cameras, the SDOCT system realizes an imaging speed of 500,000 A-lines per second while maintaining both high axial resolution (~8 μm) and acceptable depth range (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first aims at isotropic dense sampling and fast scanning, enabling 3D imaging within 0.72 s for a region covering 4 × 4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, making it possible to average along both directions so that the signal-to-noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 × 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 seconds for a 3D scan), motion artifacts are inevitable, making it difficult to interpret the 3D data set, particularly as depth-resolved en-face fundus images. To mitigate this difficulty, we propose to use the relatively highly reflecting retinal pigment epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retinal imaging.

  14. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darne, C; Robertson, D; Alsanea, F

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately and in near real time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams, we used three scientific complementary metal-oxide-semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. The fixed focal length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for correction of camera perspective and refraction in the images. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. The master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned-spot proton beams in three dimensions.
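
    The quoted spooling capacity is consistent with a quick estimate; this is a hypothetical calculation assuming uncompressed 16-bit frames at the truncated FoV:

```python
bytes_per_frame = 1100 * 1100 * 2      # 16-bit pixels at the truncated FoV
fps, seconds, cameras = 75, 120, 3     # 2 min per camera at 75 fps

total_bytes = bytes_per_frame * fps * seconds * cameras
total_gb = total_bytes / 1e9           # ~65 GB, within the 128 GB RAM buffer
```
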

  15. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech, these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I., and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid-state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element) and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program, and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures, and significant contributions to the analysis of general GaAs semiconductor devices and the design of surface acoustic wave resonator filters for wireless communication. More of these are described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively scanned charge-coupled device (CCD) can operate at video frame rates and has 9 μm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode with an output data rate of 5 MHz, which gives a maximum frame rate of 4 frames per second. The MIT/Polaroid group developed two cameras under this program. The cameras have effectively four times the current video spatial resolution and, at 60 frames per second, double the normal video frame rate.

  16. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability, including higher spatial resolution, remote sensing and low cost. In this study, a damage detection method based on a high-speed camera was proposed. The setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. The cantilever beam, with an artificial crack, was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges for future work are discussed.

  17. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    NASA Astrophysics Data System (ADS)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time-capable techniques for outlier detection and drift reduction in frame-to-frame VO are available, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
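
    The RANSAC-style outlier rejection used in such VO front ends can be illustrated with a toy example. For brevity this estimates only a 2D inter-frame translation from point matches; the full method estimates relative orientation, and the function name, iteration count and threshold here are illustrative assumptions.

```python
import numpy as np

def ransac_translation(p_prev, p_curr, iters=200, thresh=1.0, seed=0):
    """Toy RANSAC: hypothesize a translation from one random match,
    count matches consistent with it, keep the largest consensus set,
    then refine the translation over the inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p_prev), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p_prev))
        t = p_curr[i] - p_prev[i]                        # 1-point model
        residuals = np.linalg.norm(p_curr - (p_prev + t), axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (p_curr[best_inliers] - p_prev[best_inliers]).mean(axis=0)
    return t, best_inliers
```

    In a real VO pipeline the model is a 6-DoF relative pose estimated from a minimal set of stereo correspondences, but the hypothesize-score-refine loop is the same.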

  18. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  19. Vehicle speed detection based on gaussian mixture model using sequential of images

    NASA Astrophysics Data System (ADS)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

    Intelligent transportation systems are among the important components in the development of smart cities, and detection of vehicle speed on the highway supports traffic engineering management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. Our approach is as follows. The inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI). First, we separate foreground and background in each frame using a Gaussian mixture model (GMM). Then, in each frame, we locate each object and compute its centroid. Next, we determine the speed from the movement of the centroid across the sequence of frames, considering only frames in which the centroid lies inside the predefined ROI. Finally, we convert the pixel displacement per unit time into km/hour. The system was validated by comparing speeds calculated manually with those obtained by the system. In software testing, vehicle speeds were detected with accuracies between 77.41% and 97.52%; testing also included real road video footage with known vehicle speeds.
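
    The final conversion from centroid motion to km/h can be sketched as follows; this is a hypothetical helper, and the frame rate, pixel scale and function name are illustrative, not from the paper:

```python
import math

def speed_kmh(centroids_px, fps, metres_per_pixel):
    """Mean speed from per-frame centroid positions (x, y) in pixels,
    taken only while the centroid is inside the ROI."""
    steps = [math.dist(a, b) for a, b in zip(centroids_px, centroids_px[1:])]
    px_per_frame = sum(steps) / len(steps)
    return px_per_frame * metres_per_pixel * fps * 3.6   # m/s -> km/h
```

    For example, a centroid advancing 10 px/frame at 25 fps with a 0.05 m/px ground scale corresponds to 12.5 m/s, i.e. 45 km/h; the ground scale comes from calibrating the camera against known road geometry.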

  20. Control system for several rotating mirror camera synchronization operation

    NASA Astrophysics Data System (ADS)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization, precise measurement and time delay parts), the shutter control unit, the motor driving unit and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at different or equal speeds.

  1. Spirit Captures Two Dust Devils On the Move

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1 Annotated

    At the Gusev site recently, skies have been very dusty, and on its 421st sol (March 10, 2005) NASA's Mars Exploration Rover Spirit spied two dust devils in action. This is an image from the rover's navigation camera.

    Views of the Gusev landing region from orbit show many dark streaks across the landscape -- tracks where dust devils have removed surface dust to show relatively darker soil below -- but this is the first time Spirit has photographed an active dust devil.

    Scientists are considering several causes of these small phenomena. Dust devils often occur when the Sun heats the surface of Mars. Warmed soil and rocks heat the layer of atmosphere closest to the surface, and the warm air rises in a whirling motion, stirring dust up from the surface like a miniature tornado. Another possibility is that a flow structure might develop over craters as wind speeds increase. As winds pick up, turbulence eddies and rotating columns of air form. As these columns grow in diameter they become taller and gain rotational speed. Eventually they become self-sustaining and the wind blows them down range.

    One sol before this image was taken, power output from Spirit's solar panels went up by about 50 percent when the amount of dust on the panels decreased. Was this a coincidence, or did a helpful dust devil pass over Spirit and lift off some of the dust?

    By comparing the separate images from the rover's different cameras, team members estimate that the dust devils moved about 500 meters (1,640 feet) in the 155 seconds between the navigation camera and hazard-avoidance camera frames; that equates to about 3 meters per second (7 miles per hour). The dust devils appear to be about 1,100 meters (almost three-quarters of a mile) from the rover.

  2. Dust Devils Seen by Spirit

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Figure 1 Annotated

    At the Gusev site recently, skies have been very dusty, and on its 421st sol (March 10, 2005) NASA's Mars Exploration Rover Spirit spied two dust devils in action. This pair of images is from the rover's rear hazard-avoidance camera. Views of the Gusev landing region from orbit show many dark streaks across the landscape -- tracks where dust devils have removed surface dust to show relatively darker soil below -- but this is the first time Spirit has photographed an active dust devil.

    Scientists are considering several causes of these small phenomena. Dust devils often occur when the Sun heats the surface of Mars. Warmed soil and rocks heat the layer of atmosphere closest to the surface, and the warm air rises in a whirling motion, stirring dust up from the surface like a miniature tornado. Another possibility is that a flow structure might develop over craters as wind speeds increase. As winds pick up, turbulence eddies and rotating columns of air form. As these columns grow in diameter they become taller and gain rotational speed. Eventually they become self-sustaining and the wind blows them down range.

    One sol before this image was taken, power output from Spirit's solar panels went up by about 50 percent when the amount of dust on the panels decreased. Was this a coincidence, or did a helpful dust devil pass over Spirit and lift off some of the dust?

    By comparing the separate images from the rover's different cameras, team members estimate that the dust devils moved about 500 meters (1,640 feet) in the 155 seconds between the navigation camera and hazard-avoidance camera frames; that equates to about 3 meters per second (7 miles per hour). The dust devils appear to be about 1,100 meters (almost three-quarters of a mile) from the rover.

  3. Frames of Reference in the Classroom

    ERIC Educational Resources Information Center

    Grossman, Joshua

    2012-01-01

    The classic film "Frames of Reference" effectively illustrates concepts involved with inertial and non-inertial reference frames. In it, Donald G. Ivey and Patterson Hume use the camera's perspective to allow the viewer to see motion in reference frames translating with a constant velocity, translating while accelerating, and rotating--all with…

  4. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for the aerial camera, since it always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. Automatic gamma correction is applied before the image is output, making the image better suited for viewing and analysis by the human eye. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability and high reliability in severe, complex environments.

  5. [Are speed cameras able to reduce traffic noise disturbances? An intervention study in Luebeck].

    PubMed

    Schnoor, M; Waldmann, A; Pritzkuleit, R; Tchorz, J; Gigla, B; Katalinic, A

    2014-12-01

    Disturbance by traffic noise can cause health problems in the long run; however, the subjective perception of noise plays an important role in their development. The aim of this study was to determine whether speed cameras can reduce the subjective traffic noise disturbance of residents of high-traffic roads in Luebeck. In August 2012 a speed camera was installed on each of 2 high-traffic roads in Luebeck (intervention group, IG). Residents living within 1.5 km before and behind the installed speed cameras received a postal questionnaire to evaluate their subjective noise perception before (t0), 8 weeks after (t1) and 12 months after (t2) the installation of the speed camera. As controls (CG) we surveyed residents of another high-traffic road in Luebeck without speed cameras and residents of 2 roads with several consecutive speed cameras installed a few years earlier. In addition, objective measurements of the traffic noise level were conducted. Response rates declined from 35.9% (t0) to 27.2% (t2). The proportion of women in the CG (61.4-63.7%) was significantly higher than in the IG (53.7-58.1%, p<0.05), and responders were significantly younger (46.5±20.5-50±22.0 vs. 59.1±17.0-60.5±16.9 years, p<0.05). A reduction in perceived noise disturbance of 0.2 points, on a scale from 0 (no disturbance) to 10 (heavy disturbance), was observed in both the IG and the CG. When asked directly, 15.2% of the IG and 19.3% of the CG reported a reduction in traffic noise at t2. The objective measurements show a mean reduction of 0.6 dB at t1. This 0.6 dB change in noise level, which could only be perceived in direct comparison, is in line with the subjective noise perception. As a sole measure to reduce traffic noise (and promote health), a speed camera is insufficient. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Development and use of an L3CCD high-cadence imaging system for Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Sheehan, Brendan J.; Butler, Raymond F.

    2008-02-01

    A high-cadence imaging system, based on a low-light-level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V Technologies. This is a back-illuminated device, giving it an extended blue response, and it has an active area of 512×512 pixels. The camera system allows frame rates ranging from 30 fps (full frame) to 425 fps (windowed and binned frames). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can be either triggered directly by a GPS timeserver/frequency generator or triggered internally. A central PC remotely controls the camera computer system and timeserver. The data are saved as standard `FITS' files. The large data loads associated with high frame rates lead to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PyRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site or from lab-based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy); we present the reduction methods and the observations made.

  7. Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments

    NASA Astrophysics Data System (ADS)

    Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete

    2002-04-01

    New applications for ultraviolet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 × 1024, 18 μm pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high-frame-rate CCDs and cameras will be described, and results will be presented demonstrating high UV sensitivity down to 150 nm.

  8. Multiport backside-illuminated CCD imagers for high-frame-rate camera applications

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Sauer, Donald J.; Hseuh, Fu-Lung; Shallcross, Frank V.; Taylor, Gordon C.; Meray, Grazyna M.; Tower, John R.; Harrison, Lorna J.; Lawler, William B.

    1994-05-01

    Two multiport, second-generation CCD imager designs have been fabricated and successfully tested: a 16-port 512 X 512 array and a 32-port 1024 X 1024 array. Both designs are back illuminated, have on-chip CDS and lateral blooming control, and use a split vertical frame transfer architecture with full frame storage. The 512 X 512 device has been operated at rates over 800 frames per second; the 1024 X 1024 device at rates over 300 frames per second. The major changes incorporated in the second-generation design are: a reduction in gate length in the output area to give improved high-clock-rate performance, modified on-chip CDS circuitry for reduced noise, and optimized implants to improve blooming control at lower clock amplitude. This paper discusses the imager design improvements and presents measured performance results at high and moderate frame rates. The design and performance of three moderate-frame-rate cameras are also discussed.

  9. Evaluation of Eye Metrics as a Detector of Fatigue

    DTIC Science & Technology

    2010-03-01

    eyeglass frames. The cameras are angled upward toward the eyes and extract real-time pupil diameter, eye-lid movement, and eye-ball movement. The ... because the cameras were mounted on eyeglass-like frames, the system was able to continuously monitor the eye throughout all sessions. Overall, the ... of "fitness for duty" testing and "real-time monitoring" of operator performance has been slow (Institute of Medicine, 2004). Oculometric-based

  10. Jovian thundercloud observation with Jovian orbiter and ground-based telescope

    NASA Astrophysics Data System (ADS)

    Takahashi, Yukihiro; Nakajima, Kensuke; Takeuchi, Satoru; Sato, Mitsuteru; Fukuhara, Tetsuya; Watanabe, Makoto; Yair, Yoav; Fischer, Georg; Aplin, Karen

    The latest observational and theoretical studies suggest that thunderstorms in Jupiter's atmosphere are a very important subject, not only for understanding Jovian meteorology, which may determine large-scale structures such as the belts/zones and big ovals, but also for probing the water abundance of the deep atmosphere, which is crucial for constraining the behavior of volatiles in the early solar system. Here we propose a very simple high-speed imager on board a Jovian orbiter, the Optical Lightning Detector (OLD), optimized for detecting optical emissions from lightning discharges on Jupiter. OLD consists of radiation-tolerant CMOS sensors and two H-alpha Balmer line (656.3 nm) filters. In normal sampling mode the frame interval is 29 ms with a full frame format of 512x512 pixels; in high-speed sampling mode the interval can be reduced to 0.1 ms by reading out only a limited area of 30x30 pixels. Weight, size, and power consumption are about 1 kg, 16x7x5.5 cm (sensor) plus 16x12x4 cm (circuit), and 4 W, respectively, though they can be reduced according to the spacecraft resources and required environmental tolerance. We also plan to investigate the optical flashes using a ground-based middle-sized telescope, to be built by Hokkaido University, with a narrow-band high-speed imaging unit using an EM-CCD camera. An observational strategy with these optical lightning detectors and spectral imagers, which enable us to estimate the horizontal motion and altitude of clouds, will be introduced.

  11. Standard design for National Ignition Facility x-ray streak and framing cameras.

    PubMed

    Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S

    2010-10-01

    The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.

  12. A photoelastic modulator-based birefringence imaging microscope for measuring biological specimens

    NASA Astrophysics Data System (ADS)

    Freudenthal, John; Leadbetter, Andy; Wolf, Jacob; Wang, Baoliang; Segal, Solomon

    2014-11-01

    The photoelastic modulator (PEM) has been applied to a variety of polarimetric measurements. However, nearly all such applications use point measurements, where each point (spot) on the sample is measured one at a time. The main challenge in employing the PEM in a camera-based imaging instrument is that the PEM modulates too fast for typical cameras: the PEM modulates at tens of kHz, and to capture the polarization information carried on the modulation frequency, the camera needs to be at least ten times faster, whereas the typical frame rates of common cameras are only in the tens or hundreds of frames per second. In this paper, we report a PEM-camera birefringence imaging microscope. We use the so-called stroboscopic illumination method to overcome the incompatibility of the high frequency of the PEM with the relatively slow frame rate of a camera. We trigger the LED light source using a field-programmable gate array (FPGA) in synchrony with the modulation of the PEM. We show the measurement results of several standard birefringent samples as part of the instrument calibration. Furthermore, we show results observed in two birefringent biological specimens: a human skin tissue that contains collagen, and a slice of mouse brain that contains bundles of myelinated axonal fibers. Novel applications of this PEM-based birefringence imaging microscope in both research and industrial settings are being tested.

  13. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites, and simple on-screen user interfaces are increasingly common. User demands are growing and the range of applicable fields is widening due to the high penetration rate of the Internet, so the demand for embedded systems tends to rise. We implemented an embedded system for image tracking. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on an embedded Linux system, we developed real-time broadcasting of video images on the Internet. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and consecutive frames from the web camera are compared to measure the displacement vector, using a block matching algorithm together with an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. Embedded Linux was ported to the board and the root file system mounted. The stored images are sent to the client PC through the web browser using the TCP/IP network functions of Linux.
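The block matching step described in this record (exhaustive search for the inter-frame displacement vector) might be sketched as follows. This is a generic sum-of-absolute-differences (SAD) search, not the authors' C code, and the block and search-window sizes are illustrative:

```python
import numpy as np

def block_match(prev, curr, block=16, search=8):
    """Estimate the (dy, dx) displacement of a central block between two
    frames by exhaustive SAD search over a +/-search window."""
    h, w = prev.shape
    y0, x0 = (h - block) // 2, (w - block) // 2
    ref = prev[y0:y0 + block, x0:x0 + block]
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls outside the frame
            sad = np.abs(curr[y:y + block, x:x + block] - ref).sum()
            if sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv
```

In the system described, a displacement like this would be converted to pan/tilt commands sent over the serial line; an edge-detection pre-pass (as the abstract mentions) can reduce the cost of each SAD evaluation.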

  14. Navigation accuracy comparing non-covered frame and use of plastic sterile drapes to cover the reference frame in 3D acquisition.

    PubMed

    Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa

    2017-09-01

    Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, presenting an increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution for these techniques. Our objective was to quantify the expected accuracy error associated with 2MM and 4MM thickness Sterile-Z Patient Drape® using Medtronic O-Arm® Surgical Imaging with the StealthStation® S7® Navigation System. Camera distance to the reference frame was also investigated for its contribution to accuracy error. A testing jig was placed on a radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig, and the geometry error of the reference frame was captured for three drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained, and seven 3D acquisitions for each drape configuration were measured. Data were analyzed by two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape at the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the no-drape condition, the accuracy error of 0.11 mm when using a 4MM film drape is minimal and clinically insignificant.

  15. Dynamic behavior of prosthetic aortic tissue valves as viewed by high-speed cinematography.

    PubMed

    Rainer, W G; Christopher, R A; Sadler, T R; Hilgenberg, A D

    1979-09-01

    Using a valve testing apparatus of our own design and a high-speed (600 to 800 frames per second) 16 mm movie camera, films were made of Hancock porcine, Carpentier-Edwards porcine, and Ionescu-Shiley bovine pericardial valves mounted in the aortic position and cycled under physiological conditions at 72 to 100 beats per minute. Fresh and explanted valves were observed using saline or 36.5% glycerol as the pumping solution. When fresh valves were studied using saline solution as the pumping fluid, the Hancock and Carpentier-Edwards porcine valves showed high-frequency leaflet vibration, which increased in frequency with higher cycling rates. Abnormal leaflet motion was decreased when glycerol was used as the blood analogue. The Ionescu-Shiley bovine pericardial valve did not show abnormal leaflet motion under these conditions. Conclusions drawn from tissue valve testing studies that use excessively high pulsing rates and pressures (accelerated testing) and saline or water as pumping solutions cannot be transposed to predict the fate of tissue valves in a clinical setting.

  16. Failure and penetration response of borosilicate glass during short-rod impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, C. E. Jr.; Orphal, D. L.; Behner, Th.

    2007-12-12

    The failure characterization of brittle materials like glass is of fundamental importance in describing the penetration resistance against projectiles. A critical question is whether this failure front remains 'steady' after the driving stress is removed. A test series with short gold rods (D = 1 mm, L/D ≈ 5-11) impacting borosilicate glass at ~1 to 2 km/s was carried out to investigate this question. The reverse ballistic method was used for the experiments, and the impact and penetration process was observed simultaneously with five flash X-rays and a 16-frame high-speed optical camera. Very high measurement accuracy was established to ensure reliable results. Results show that the failure front induced by rod impact and penetration does arrest (ceases to propagate) after the rod is totally eroded inside the glass. The impact of a second rod after a short time delay reinitiates the failure front at about the same speed.

  17. QuadCam - A Quadruple Polarimetric Camera for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Skuljan, J.

    A specialised quadruple polarimetric camera for space situational awareness, QuadCam, has been built at the Defence Technology Agency (DTA), New Zealand, as part of a collaboration with the Defence Science and Technology Laboratory (Dstl), United Kingdom. The design was based on a similar system originally developed at Dstl, with some significant modifications for improved performance. The system is made up of four identical CCD cameras looking in the same direction, but each in a different plane of polarisation, at 0, 45, 90 and 135 degrees with respect to the reference plane. A standard set of Stokes parameters can be derived from the four images in order to describe the state of polarisation of an object captured in the field of view. The modified design of the DTA QuadCam makes use of four small Raspberry Pi computers, so that each camera is controlled by its own computer in order to speed up the readout process and ensure that the four individual frames are taken simultaneously (to within 100-200 microseconds). In addition, new firmware was requested from the camera manufacturer so that an output signal is generated to indicate the state of the camera shutter. A specialised GPS unit (also developed at DTA) is then used to monitor the shutter signals from the four cameras and record the actual time of exposure to an accuracy of about 100 microseconds. This makes the system well suited to the observation of fast-moving objects in low Earth orbit (LEO). The QuadCam is currently mounted on a Paramount MEII robotic telescope mount at the newly built DTA space situational awareness observatory located on the Whangaparaoa Peninsula near Auckland, New Zealand. The system will be used for tracking satellites in low Earth orbit and the geostationary belt. The performance of the camera has been evaluated, and a series of test images has been collected in order to derive polarimetric signatures for selected satellites.
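The standard linear Stokes reduction from four polarisation channels at 0, 45, 90 and 135 degrees, as used by systems of this kind, can be written compactly. This is the generic textbook formulation, not the DTA processing code:

```python
import numpy as np

def stokes_from_quad(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities measured through
    polarisers at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (averaging the two pairs)
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 vs -45 preference
    dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarisation
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarisation (radians)
    return s0, s1, s2, dolp, aop
```

For a camera system, the same expressions apply pixel-wise to the four co-registered frames, which is why simultaneous exposure (here, to within 100-200 microseconds) matters for moving targets.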

  18. The Television Framing Methods of the National Basketball Association: An Agenda-Setting Application.

    ERIC Educational Resources Information Center

    Fortunato, John A.

    2001-01-01

    Identifies and analyzes the exposure and portrayal framing methods that are utilized by the National Basketball Association (NBA). Notes that key informant interviews provide insight into the exposure framing method and reveal two portrayal instruments: cameras and announcers; and three framing strategies: depicting the NBA as a team game,…

  19. Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.; Sertel, Kubilay

    2015-07-01

    We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.

  20. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
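The slice-based decomposition described above can be illustrated with a small sketch. Here a thread pool stands in for the multi-CPU setup and a trivial pixel lookup stands in for the camera-model-based warp, so everything except the scatter/gather pattern is a placeholder assumption:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def warp_slice(args):
    """Fill one horizontal slice of the mosaic.  For each output pixel we
    look up a corresponding source pixel; this trivial mapping stands in
    for the real warp based on camera models and 3-D pixel correlation."""
    src, row0, row1, width = args
    out = np.empty((row1 - row0, width), dtype=src.dtype)
    for r in range(row0, row1):
        out[r - row0, :] = src[r % src.shape[0], :width]
    return row0, out

def build_mosaic(src, height, width, n_workers=4):
    """Divide the mosaic into slices, warp each in a worker, then gather."""
    bounds = np.linspace(0, height, n_workers + 1, dtype=int)
    jobs = [(src, bounds[i], bounds[i + 1], width) for i in range(n_workers)]
    mosaic = np.empty((height, width), dtype=src.dtype)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for row0, chunk in pool.map(warp_slice, jobs):
            mosaic[row0:row0 + chunk.shape[0]] = chunk
    return mosaic
```

As the abstract notes, wall-clock time then depends on the number and speed of workers and on how the source data is staged to each of them.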

  1. Clever imaging with SmartScan

    NASA Astrophysics Data System (ADS)

    Tchernykh, Valerij; Dyblenko, Sergej; Janschek, Klaus; Seifart, Klaus; Harnisch, Bernd

    2005-08-01

    The cameras commonly used for Earth observation from satellites require high attitude stability during the image acquisition. For some types of cameras (high-resolution "pushbroom" scanners in particular), instantaneous attitude changes of even less than one arcsecond result in significant image distortion and blurring. Especially problematic are the effects of high-frequency attitude variations originating from micro-shocks and vibrations produced by the momentum and reaction wheels, mechanically activated coolers, and steering and deployment mechanisms on board. The resulting high attitude-stability requirements for Earth-observation satellites are one of the main reasons for their complexity and high cost. The novel SmartScan imaging concept, based on an opto-electronic system with no moving parts, offers the promise of high-quality imaging with only moderate satellite attitude stability. SmartScan uses real-time recording of the actual image motion in the focal plane of the camera during frame acquisition to correct the distortions in the image. Exceptional real-time performances with subpixel-accuracy image-motion measurement are provided by an innovative high-speed onboard opto-electronic correlation processor. SmartScan will therefore allow pushbroom scanners to be used for hyper-spectral imaging from satellites and other space platforms not primarily intended for imaging missions, such as micro- and nano-satellites with simplified attitude control, low-orbiting communications satellites, and manned space stations.

  2. Digital holographic interferometry applied to the investigation of ignition process.

    PubMed

    Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B

    2017-06-12

    We use the digital holographic interferometry (DHI) technique to display the early ignition process for a butane-air mixture flame. Because such an event occurs in a short time (a few milliseconds), a fast CCD camera is used to study it. As more detail is required for monitoring the temporal evolution of the process, less light coming from the combustion is captured by the CCD camera, resulting in a deficient and underexposed image. Direct observation of the combustion process with the CCD is therefore limited (to about 1000 frames per second). To overcome this drawback, we propose the use of DHI along with a high-power laser in order to supply enough light to increase the capture speed, thus improving the visualization of the phenomenon in its initial moments. An experimental optical setup based on DHI is used to obtain a long sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion, which emits only faint visible light, and a second stage induced by variations in temperature as the flame emerges. While the latter stage can be directly monitored by the CCD camera, the first stage is hardly detected by direct observation, and DHI clearly reveals it. Furthermore, our method can be easily adapted for visualizing other types of fast processes.

  3. Focused Schlieren flow visualization studies of multiple venturi fuel injectors in a high pressure combustor

    NASA Technical Reports Server (NTRS)

    Chun, K. S.; Locke, R. J.; Lee, C. M.; Ratvasky, W. J.

    1994-01-01

    Multiple venturi fuel injectors were used to obtain uniform fuel distributions and better atomization and vaporization in the premixing/prevaporizing section of a lean premixed/prevaporized flame tube combustor. A focused Schlieren system was used to investigate the fuel/air mixing effectiveness of various fuel injection configurations. The Schlieren system was focused on a plane within the flow field of a test section equipped with optical windows. The focused image plane was parallel to the axial direction of the flow and normal to the optical axis. Images from that focused plane, formed by light refracted by density gradients within the flow field, were filmed with a high-speed movie camera at a framing rate of 8,000 frames per second (fps). Three fuel injection concepts were investigated by taking high-speed movies of the mixture flows at various operating conditions. The inlet air temperature was varied from 600 °F to 1000 °F, and the inlet pressure from 80 psia to 150 psia. Jet-A fuel was used, typically at an equivalence ratio of 0.5. The intensity variations of the digitized Schlieren images were analytically correlated to spatial density gradients of the mixture flows. Qualitative measurements of the degree of mixedness, intensity of mixing, and mixing completion time are shown. Various mixing performance patterns are presented for different configurations of fuel injection points and operating conditions.

  4. Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment

    NASA Astrophysics Data System (ADS)

    Weed, Jonathan Robert

    The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.

  5. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  6. An Innovative Procedure for Calibration of Strapdown Electro-Optical Sensors Onboard Unmanned Air Vehicles

    PubMed Central

    Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio

    2010-01-01

    This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
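The line-of-sight unit-vector computation mentioned above is, for an ideal pinhole camera, a short calculation. The generic form below (with illustrative intrinsics fx, fy, cx, cy) is an assumption about the image-analysis step, not the authors' exact formulation:

```python
import numpy as np

def los_unit_vector(u, v, fx, fy, cx, cy):
    """Line-of-sight unit vector in the camera frame for pixel (u, v),
    assuming an ideal pinhole model with focal lengths (fx, fy) and
    principal point (cx, cy), all in pixel units."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # ray direction, z = optical axis
    return d / np.linalg.norm(d)
```

Rotating such vectors into the body frame with the estimated camera-to-body attitude is what the alignment procedure in the abstract calibrates.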

  7. The pressure field of imploding lightbulbs

    NASA Astrophysics Data System (ADS)

    Czechanowski, M.; Ikeda, C.; Duncan, J. H.

    2015-03-01

    The implosion of A19 incandescent lightbulbs in a high-pressure water environment is studied in a 1.77-m-diameter steel tank. Underwater blast sensors are used to measure the dynamic pressure field near the lightbulbs, and the implosions are photographed with a high-speed movie camera at a frame rate of 24,000 pps. The movie camera and the pressure signal recording system are synchronized to enable correlation of features in the movie frames with those in the pressure records. It is found that the gross dimensions and weight of the bulbs are very similar from one bulb to another, but the ambient water pressure at which a given bulb implodes (called the implosion pressure) varies from 6.29 to 11.98 atmospheres, probably due to inconsistencies in the glass wall thickness and perhaps other detailed characteristics of the bulbs. The dynamic pressures (the local pressure minus the ambient pressure, as measured by the sensors) first drop during the implosion and then reach a strong positive peak at about the time that the bulb reaches minimum volume. The peak dynamic pressure varies from 3.61 to 28.66 atmospheres. In order to explore the physics of the implosion process, the dynamic pressure signals are compared to calculations of the pressure field generated by the collapse of a spherical bubble in a weakly compressible liquid. The wide range of implosion pressures is used in combination with the calculations to explore the effect of the relative liquid compressibility and of the bulb itself on the dynamic pressure field.
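The comparison calculations involve spherical-bubble collapse. As a rough, incompressible stand-in (the paper uses a weakly compressible model), the classical Rayleigh equation for an empty cavity can be integrated numerically and the collapse time checked against the analytic value 0.915·R0·sqrt(ρ/Δp); all parameter values below are illustrative:

```python
def rayleigh_collapse_time(r0, dp, rho, dt=1e-9, r_stop_frac=0.05):
    """Integrate the Rayleigh equation for an empty spherical cavity,
    R*R'' + 1.5*R'^2 = -dp/rho, with fixed-step RK4, and return the time
    at which R falls below r_stop_frac*r0 (close to the full collapse time,
    since almost all the time is spent at large radius)."""
    def acc(r, v):
        return (-1.5 * v * v - dp / rho) / r
    r, v, t = r0, 0.0, 0.0
    while r > r_stop_frac * r0:
        k1r, k1v = v, acc(r, v)
        k2r, k2v = v + 0.5 * dt * k1v, acc(r + 0.5 * dt * k1r, v + 0.5 * dt * k1v)
        k3r, k3v = v + 0.5 * dt * k2v, acc(r + 0.5 * dt * k2r, v + 0.5 * dt * k2v)
        k4r, k4v = v + dt * k3v, acc(r + dt * k3r, v + dt * k3v)
        r += (dt / 6.0) * (k1r + 2 * k2r + 2 * k3r + k4r)
        v += (dt / 6.0) * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return t
```

For a 1 mm cavity collapsing under a 1 atm pressure difference in water, this gives a collapse time of roughly 90 microseconds, consistent with the Rayleigh formula; the measured positive pressure peak near minimum volume in the experiments is the compressible analogue of this final collapse stage.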

  8. Mass and Size Frequency Distribution of the Impact Debris from Disruption of Chondritic Meteorites

    NASA Technical Reports Server (NTRS)

    VanVeghten, T. W.; Flynn, G. J.; Durda, D. D.; Hart, S.; Asphaug, E.

    2003-01-01

    Since direct observation of the collision of asteroids in space is not always convenient for earthbound observers, we have undertaken simulations of these collisions using the NASA Ames Vertical Gun Range (AVGR). To simulate asteroid collisions, aluminum projectiles with velocities ranging from approximately 1 to 6 km/s were fired at 70 g to approximately 200 g fragments of chondritic meteorites. The target meteorite was placed in an evacuated chamber at the AVGR. Detectors, usually four, were set up around the target meteorite. These detectors consisted of aerogel and aluminum foil of varying thickness: the aerogel's purpose was to catch debris after the collision, and the aluminum foil's purpose was to indicate the size of the debris particles through the size of the holes they made. Outside the chamber, a camera was set up to record high-speed film of the collision at either 500 or 1000 frames per second. Three different types of targets were used for these tests. The first were actual meteorites, which varied in mineralogical composition, density, and porosity. The second type was a Hawaiian basalt, consisting of olivine phenocrysts in a porous matrix, which we thought might be similar to the chondritic meteorites, thus providing data for comparison. The final type was made of Styrofoam, intended to simulate very low-density asteroids and comets.

  9. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
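A common building block for multi-frame processing under camera panning is registering frames to a common grid before combining them. The integer-pixel phase-correlation sketch below is a generic technique for this, not necessarily the method used in the product described:

```python
import numpy as np

def phase_corr_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref
    from the phase of the normalised cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12       # keep phase only (whitening)
    corr = np.fft.ifft2(cross).real      # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # wrap shifts larger than half the frame into negative offsets
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

Once frames are shifted back into alignment, temporal averaging or turbulence-mitigation stacking can proceed as if the camera had been static; subject motion within the scene still needs separate handling, as the abstract discusses.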

  10. Full-field transient vibrometry of the human tympanic membrane by local phase correlation and high-speed holography

    NASA Astrophysics Data System (ADS)

    Dobrev, Ivo; Furlong, Cosme; Cheng, Jeffrey T.; Rosowski, John J.

    2014-09-01

    Understanding the human hearing process would be helped by quantification of the transient mechanical response of the human ear, including the human tympanic membrane (TM or eardrum). We propose a new hybrid high-speed holographic system (HHS) for acquisition and quantification of the full-field nanometer-scale transient (i.e., >10 kHz) displacement of the human TM. We have optimized and implemented a 2+1 frame local correlation (LC) based phase sampling method in combination with a high-speed (i.e., >40 kfps) camera acquisition system. To our knowledge, there is currently no existing system that provides such capabilities for the study of the human TM. The LC sampling method has a displacement difference of <11 nm relative to measurements obtained by a four-phase-step algorithm. Comparisons between our high-speed acquisition system and a laser Doppler vibrometer indicate differences of <10 μs. The high temporal (i.e., >40 kHz) and spatial (i.e., >100 k data points) resolution of our HHS enables parallel measurements of all points on the surface of the TM, which allows quantification of spatially dependent motion parameters, such as modal frequencies and acoustic delays. Such capabilities could allow inferring local material properties across the surface of the TM.

  11. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Addressing the practical demands of gun-bore flaw image collection, such as accurate optical design, complex algorithms, and precise technical requirements, this paper presents the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging. The system mainly comprises a computer, an electrical control box, a stepping motor, and a CCD camera, and it realizes image collection, stereo matching, 3-D information reconstruction, and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that it can efficiently resolve the matching ambiguity produced by uniform or repeated textures. The system also offers faster measurement speed and higher measurement precision.
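    As a sketch of the geometry such a system relies on, depth follows from the classic pinhole stereo relation Z = fB/d, and multiple baselines give redundant estimates of the same point that can be fused to resolve repeated-texture ambiguity. The function name, focal length, and baseline/disparity values below are hypothetical illustrations, not parameters of the paper's system:

    ```python
    def depth_from_disparity(focal_px, baseline_mm, disparity_px):
        """Classic pinhole stereo relation: Z = f * B / d.
        With multiple baselines, each camera pair yields an independent
        estimate; averaging (or error-weighted fusion) of consistent
        estimates suppresses false matches from repeated textures."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_mm / disparity_px

    # Three hypothetical baselines observing the same point at 500 mm depth.
    f = 1200.0  # focal length in pixels
    pairs = [(60.0, 144.0), (120.0, 288.0), (180.0, 432.0)]  # (B in mm, d in px)
    estimates = [depth_from_disparity(f, b, d) for b, d in pairs]
    print(sum(estimates) / len(estimates))  # 500.0
    ```

    A wrong match in one pair would produce a depth inconsistent with the other baselines, which is how the multi-baseline configuration rejects it.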

  12. Thunderstorm observations from Space Shuttle

    NASA Technical Reports Server (NTRS)

    Vonnegut, B.; Vaughan, O. H., Jr.; Brook, M.

    1983-01-01

    Results of the Nighttime/Daytime Optical Survey of Lightning (NOSL) experiments done on the STS-2 and STS-4 flights are covered. During these two flights of the Space Shuttle Columbia, the astronaut teams of J. Engle and R. Truly, and K. Mattingly II and H. Hartsfield took motion pictures of thunderstorms with a 16 mm cine camera. Film taken during daylight showed interesting thunderstorm cloud formations, where individual frames taken tens of seconds apart, when viewed as stereo pairs, provided information on the three-dimensional structure of the cloud systems. Film taken at night showed clouds illuminated by lightning with discharges that propagated horizontally at speeds of up to 10^5 m/s and extended for distances on the order of 60 km or more.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaeffel, J.A.; Mullinix, B.R.; Ranson, W.F.

    An experimental technique to simulate and evaluate the effects of high concentrations of x-rays resulting from a nuclear detonation on missile structures is presented. Data from 34 tests are included to demonstrate the technique. The effects of variations in the foil thickness, capacitor voltage, and plate thickness on the total impulse and maximum strain in the structure were determined. The experimental technique utilizes a high energy capacitor discharge unit to explode an aluminum foil on the surface of the structure. The structural response is evaluated by optical methods using the grid slope deflection method. The fringe patterns were recorded using a high-speed framing camera. The data were digitized using an optical comparator with an x-y table. The analysis was performed on a CDC 6600 computer.

  14. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
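    In the standard PIV formulation, the step that turns a frame-straddled image pair into a velocity estimate is a windowed cross-correlation: the displacement of the correlation peak gives the mean particle shift in each interrogation window. A minimal FFT-based sketch of that idea (a generic illustration, not PIVPROC's actual implementation) is:

    ```python
    import numpy as np

    def piv_displacement(frame_a, frame_b):
        """Estimate the mean particle displacement between two interrogation
        windows by locating the peak of their FFT-based cross-correlation."""
        a = frame_a - frame_a.mean()
        b = frame_b - frame_b.mean()
        # Circular cross-correlation via the Fourier transform:
        # corr[k] = sum_n a[n] * b[n + k].
        corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrapped indices to signed pixel shifts.
        shifts = [p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape)]
        return tuple(shifts)  # (row shift, column shift) in pixels

    # Synthetic pair: a single "particle" shifted by (3, 5) pixels.
    a = np.zeros((32, 32)); a[10, 12] = 1.0
    b = np.zeros((32, 32)); b[13, 17] = 1.0
    print(piv_displacement(a, b))  # (3, 5)
    ```

    Dividing the pixel shift by the laser pulse separation and the image magnification yields velocity, which is why the microsecond-scale frame-straddle timing matters.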

  15. Ground volume assessment using 'Structure from Motion' photogrammetry with a smartphone and a compact camera

    NASA Astrophysics Data System (ADS)

    Wróżyński, Rafał; Pyszny, Krzysztof; Sojka, Mariusz; Przybyła, Czesław; Murat-Błażejewska, Sadżide

    2017-06-01

    The article describes how the Structure-from-Motion (SfM) method can be used to calculate the volume of anthropogenic microtopography. In the proposed workflow, data is obtained using mass-market devices such as a compact camera (Canon G9) and a smartphone (iPhone5). The volume is computed using free open-source software (VisualSFM v0.5.23, CMPMVS v0.6.0, MeshLab) on a PC-class computer. The input data is acquired from video frames. To verify the method, laboratory tests on an embankment of known volume were carried out. Models of the test embankment were built from two independent measurements made with those two devices. A comparative analysis found no significant differences between the models; their volumes differed from the actual volume by just 0.7‰ and 2‰. After this successful laboratory verification, field measurements were carried out in the same way. While building the model from the data acquired with the smartphone, a series of frames, approximately 14% of the total, was rejected. The missing frames made the point cloud less dense where they had been rejected, and as a result the model's volume differed from the volume obtained with the camera by 7%. To improve homogeneity, the frame extraction frequency was increased where frames had previously been missing. A uniform model was thereby obtained, with point cloud density evenly distributed, and the embankment's volume then differed by only 1.5% from the volume calculated from the camera-recorded video. The presented method permits the number of input frames to be increased and the model's accuracy to be enhanced without making an additional measurement, which may not be possible in the case of temporary features.
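    The abstract does not state how the volume is computed from the reconstructed model; a standard approach for a closed, consistently oriented triangle mesh (such as one exported from MeshLab) sums signed tetrahedron volumes against the origin, an application of the divergence theorem. A minimal sketch with a hypothetical unit-cube mesh as input:

    ```python
    import numpy as np

    def mesh_volume(vertices, faces):
        """Volume enclosed by a closed, consistently oriented triangle mesh,
        summing signed volumes of tetrahedra (origin, p0, p1, p2)."""
        v = np.asarray(vertices, dtype=float)
        tri = v[np.asarray(faces)]  # shape (n_faces, 3, 3)
        # Signed tetrahedron volume: dot(p0, cross(p1, p2)) / 6 per face.
        signed = np.einsum('ij,ij->i', tri[:, 0], np.cross(tri[:, 1], tri[:, 2])) / 6.0
        return abs(signed.sum())

    # Unit cube as 12 outward-oriented triangles; expected volume 1.0.
    verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    faces = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5), (0, 4, 5), (0, 5, 1),
             (2, 3, 7), (2, 7, 6), (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
    print(mesh_volume(verts, faces))  # ≈ 1.0
    ```

    Because interior contributions cancel, the result is independent of where the origin sits relative to the mesh, provided the surface is closed and face winding is consistent.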

  16. 10. 22'X34' original blueprint, VariableAngle Launcher, 'SIDE VIEW CAMERA CARSTEEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2'=1'-0'. (BOURD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  17. Variable-Interval Sequenced-Action Camera (VINSAC). Dissemination Document No. 1.

    ERIC Educational Resources Information Center

    Ward, Ted

    The 16 millimeter (mm) Variable-Interval Sequenced-Action Camera (VINSAC) is designed for inexpensive photographic recording of effective teacher instruction and use of instructional materials for teacher education and research purposes. The camera photographs single frames at preselected time intervals (.5 second to 20 seconds) which are…

  18. Students' Framing of Laboratory Exercises Using Infrared Cameras

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  19. Plasma measurement by optical visualization and triple probe method under high-speed impact

    NASA Astrophysics Data System (ADS)

    Sakai, T.; Umeda, K.; Kinoshita, S.; Watanabe, K.

    2017-02-01

    High-speed impact on spacecraft by space debris poses a threat. When a high-speed projectile collides with a target, it is conceivable that the heat created by the impact causes severe damage at the impact point. Investigation of the temperature is necessary for elucidation of high-speed impact phenomena. However, it is very difficult to measure the temperature with standard methods for two main reasons. One reason is that a thermometer placed on the target is instantaneously destroyed upon impact. The other is that standard methods lack the time resolution to follow the transient temperature changes. In this study, the plasma induced by high-speed impact was measured to estimate temperature changes near the impact point. High-speed impact experiments were performed with a vertical gas gun. The projectile speed was approximately 700 m/s, and the target material was A5052 aluminum alloy. The experimental data needed to calculate the plasma parameters of electron temperature and electron density were measured by the triple-probe method. In addition, the diffusion behavior of the plasma was observed by an optical visualization technique using a high-speed camera. The frame rate and the exposure time were 260 kfps and 1.0 μs, respectively. These images provide supporting evidence for the validity of the plasma measurement. The experimental results showed that plasma signals were detected for around 70 μs, and the rising phase of the waveform was in good agreement with the timing at which the optical visualization images show the plasma arriving at the tip of the triple probe.

  20. On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro

    2013-07-01

    This work deals with the critical aspects related to cost reduction of a Tomo PIV setup and to the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems composed of three (or more) low speed single frame cameras, which can be up to ten times cheaper than double shutter cameras with the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, differently from tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution is camera orientation dependent. For this reason, the proposed solution promises more accurate results, without the bias effect of the coherent ghost particles motion. Guidelines for the implementation and the application of the present method are proposed. The performances are assessed with a parametric study on synthetic experiments. The proposed low cost system produces a much lower modulation with respect to an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the case of the standard implementation of tomographic PIV.

  1. Combined hostile fire and optics detection

    NASA Astrophysics Data System (ADS)

    Brännlund, Carl; Tidström, Jonas; Henriksson, Markus; Sjöqvist, Lars

    2013-10-01

    Snipers and other optically guided weapon systems are serious threats in military operations. We have studied a SWIR (short-wave infrared) camera-based system with the capability to detect and locate snipers both before and after a shot over a large field of view. The high-frame-rate SWIR camera allows resolution of the temporal profile of muzzle flashes, the infrared signature associated with the ejection of the bullet from the rifle. The capability of this system to detect and discriminate sniper muzzle flashes was verified by FOI in earlier studies. In this work we have extended the system by adding a laser channel for optics detection. A laser diode with a slit-shaped beam profile is scanned over the camera field of view to detect retro-reflection from optical sights. The optics detection system has been tested at distances up to 1.15 km, showing the feasibility of detecting rifle scopes in full daylight. The high-speed camera makes it possible to discriminate false alarms by analyzing the temporal data. The intensity variation caused by atmospheric turbulence enables discrimination of small sights from larger reflectors through aperture averaging, even though the targets cover only a single pixel. It is shown that optics detection can be integrated with muzzle flash detection by adding a scanning rectangular laser slit. The overall optics detection capability from continuous surveillance of a relatively large field of view looks promising. This type of multifunctional system may become an important tool to detect snipers before and after a shot.

  2. Modeling of a microchannel plate working in pulsed mode

    NASA Astrophysics Data System (ADS)

    Secroun, Aurelia; Mens, Alain; Segre, Jacques; Assous, Franck; Piault, Emmanuel; Rebuffie, Jean-Claude

    1997-05-01

    Microchannel plates (MCPs) are used in high-speed cinematography systems such as MCP framing cameras and streak-camera readouts. A good knowledge of MCP performance is essential to determining the dynamic range and the signal-to-noise ratio available in these devices. Our simulation focuses on the working mode of the microchannel plate, namely pulsed-light mode, in which the signal level is relatively high and its duration can be shorter than the time needed to replenish the channel wall; previous papers mainly studied night-vision applications with weak, continuous, nearly single-electron input signals. Our method also allows simulation of saturation phenomena due to the large number of electrons involved, whereas the discrete models previously used for simulating pulsed mode might not be properly adapted. We present the choices made in modeling the microchannel, specifically the physics laws, the secondary-emission parameters, and the 3D geometry. In a last part, first results are shown.

  3. Benchtop and Animal Validation of a Projective Imaging System for Potential Use in Intraoperative Surgical Guidance

    PubMed Central

    Gan, Qi; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Hu, Chuanzhen; Shao, Pengfei; Xu, Ronald X.

    2016-01-01

    We propose a projective navigation system for fluorescence imaging and image display in a natural mode of visual perception. The system consists of an excitation light source, a monochromatic charge-coupled device (CCD) camera, a host computer, a projector, a proximity sensor, and a complementary metal-oxide-semiconductor (CMOS) camera. With perspective transformation and calibration, our surgical navigation system is able to achieve an overall imaging speed higher than 60 frames per second, with a latency of 330 ms, a spatial sensitivity better than 0.5 mm in both vertical and horizontal directions, and a projection bias less than 1 mm. The technical feasibility of image-guided surgery is demonstrated in both agar-agar gel phantoms and an ex vivo chicken breast model embedding indocyanine green (ICG). The biological utility of the system is demonstrated in vivo in a classic model of ICG hepatic metabolism. Our benchtop, ex vivo and in vivo experiments demonstrate the clinical potential for intraoperative delineation of disease margin and image-guided resection surgery. PMID:27391764

  4. Performance characterization of UV science cameras developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-07-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-α and to detect the Hanle effect in the line core. Due to the nature of Lyman-α polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. The CLASP cameras were designed to operate with ≤ 10 e-/pixel/second dark current, ≤ 25 e- read noise, a gain of 2.0 ± 0.5 and ≤ 1.0% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.
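    One common way to measure a camera gain like the one quoted above is the photon-transfer method: shot-noise variance in digital numbers grows linearly with signal, with slope 1/gain. The sketch below assumes this standard technique applied to synthetic flat-field pairs; it is an illustration, not the CLASP team's actual characterization procedure:

    ```python
    import numpy as np

    def photon_transfer_gain(flats):
        """Estimate camera gain (e-/DN) from pairs of flat-field frames taken
        at several illumination levels (photon-transfer method)."""
        means, variances = [], []
        for f1, f2 in flats:
            means.append((f1.mean() + f2.mean()) / 2.0)
            # Differencing the pair cancels fixed-pattern noise; the variance
            # of the difference is twice the temporal variance.
            variances.append(np.var(f1.astype(float) - f2.astype(float)) / 2.0)
        slope = np.polyfit(means, variances, 1)[0]  # DN variance per DN of signal
        return 1.0 / slope                          # electrons per DN

    # Synthetic sensor with a true gain of 2.0 e-/DN (Poisson shot noise only).
    rng = np.random.default_rng(0)
    true_gain = 2.0
    flats = []
    for electrons in (2000, 8000, 20000, 40000):
        pair = [rng.poisson(electrons, (256, 256)) / true_gain for _ in range(2)]
        flats.append(pair)
    print(photon_transfer_gain(flats))  # close to 2.0
    ```

    Extending the fit to zero signal would also recover the read-noise floor, which is why the same flat-field data set typically serves both measurements.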

  5. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and of reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over time windows of sufficient length.
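    The two downsampling schemes compared above can be sketched as follows (a generic illustration, not the authors' pipeline). Zero-order downsampling keeps every Nth pixel; for a reduction factor of 2 on an aligned grid, bilinear reduction amounts to averaging each 2x2 block:

    ```python
    import numpy as np

    def zero_order_downsample(img, factor):
        """Zero-order (decimation) downsampling: keep every Nth pixel."""
        return img[::factor, ::factor]

    def box_downsample(img, factor):
        """Average each factor x factor block. For factor 2 on an aligned
        grid this matches bilinear reduction; real resamplers may also
        apply an anti-aliasing filter first."""
        h, w = img.shape
        trimmed = img[:h - h % factor, :w - w % factor]
        return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    # A 4x4 test image reduced by 2 in each dimension.
    img = np.arange(16, dtype=float).reshape(4, 4)
    print(zero_order_downsample(img, 2))  # [[ 0.  2.] [ 8. 10.]]
    print(box_downsample(img, 2))         # [[ 2.5  4.5] [10.5 12.5]]
    ```

    Decimation preserves per-pixel noise but aliases high frequencies, while block averaging suppresses noise at the cost of blur, a distinction relevant to the error comparison reported above.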

  6. Optical fringe-reflection deflectometry with bundle adjustment

    NASA Astrophysics Data System (ADS)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-06-01

    Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are obtained through specular reflection by a fixed camera. Thus, the pose calibration between the camera and LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of fringes. Considering the relation between their pose, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be operated in the camera frame to measure three-dimensional (3D) coordinates of the specular surface. In the final optimization, constraint-bundle adjustment is operated to refine simultaneously the camera intrinsic parameters, including distortion coefficients, estimated geometrical pose between the LCD screen and camera, and 3D coordinates of the specular surface, with the help of the absolute phase collinear constraint. Simulation and experiment results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and the constraint-bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.

  7. The impacts of speed cameras on road accidents: an application of propensity score matching methods.

    PubMed

    Li, Haojie; Graham, Daniel J; Majumdar, Arnab

    2013-11-01

    This paper aims to evaluate the impacts of speed limit enforcement cameras on reducing road accidents in the UK by accounting for both confounding factors and the selection of proper reference groups. The propensity score matching (PSM) method is employed to do this. A naïve before-after approach and the empirical Bayes (EB) method are compared with the PSM method. A total of 771 treatment sites and 4787 potential reference sites were observed over a period of 9 years in England. Both the PSM and the EB methods show similar results: there are significant reductions in the number of accidents of all severities at speed camera sites. It is suggested that the propensity score can be used as the criterion for selecting the reference group in before-after control studies. Speed cameras were found to be most effective in reducing accidents up to 200 meters from camera sites, and no evidence of accident migration was found.
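    Given estimated propensity scores, the matching step can be sketched as greedy 1:1 nearest-neighbour matching without replacement within a caliper. This is a generic illustration with hypothetical scores, not necessarily the authors' exact matching procedure:

    ```python
    def nearest_neighbour_match(ps_treated, ps_control, caliper=0.05):
        """Match each treated site to the unmatched control site with the
        closest propensity score, accepting the match only within a caliper.
        Returns (treated_index, control_index) pairs."""
        available = list(range(len(ps_control)))
        pairs = []
        for i, p in enumerate(ps_treated):
            if not available:
                break
            j = min(available, key=lambda k: abs(ps_control[k] - p))
            if abs(ps_control[j] - p) <= caliper:
                pairs.append((i, j))
                available.remove(j)  # matching without replacement
        return pairs

    # Hypothetical propensity scores for camera (treated) and reference sites.
    treated = [0.62, 0.35, 0.80]
    control = [0.30, 0.61, 0.79, 0.10]
    print(nearest_neighbour_match(treated, control))  # [(0, 1), (1, 0), (2, 2)]
    ```

    The before-after accident change at each treated site is then compared against its matched control, which removes confounding that a naïve before-after comparison would absorb into the camera effect.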

  8. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with a low-resolution facial image size (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.

  9. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher frame rate video that were produced by simulation experiments or by an optically simulated random-sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the amount of exposure by row for each 8x8-pixel block. This CMOS sensor is not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility to realize pseudo-random sampling for high-speed video acquisition, and we reconstruct the high-speed video sequence from the pseudo-randomly sampled images using an over-complete dictionary.

  10. Analysis of high-speed digital phonoscopy pediatric images

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Harikrishnan; Donohue, Kevin D.; Patel, Rita R.

    2012-02-01

    The quantitative characterization of vocal fold (VF) motion can greatly enhance the diagnosis and treatment of speech pathologies. The recent availability of high-speed systems has created new opportunities to understand VF dynamics. This paper presents quantitative methods for analyzing VF dynamics with high-speed digital phonoscopy, with a focus on expected VF changes during childhood. A robust method for automatic VF edge tracking during phonation is introduced and evaluated against 4 expert human observers. Results from 100 test frames show a subpixel difference between the VF edges selected by the algorithm and by the expert observers. Waveforms created from the VF edge displacement are used to create motion features with limited sensitivity to variations of camera resolution on the imaging plane. New features are introduced based on acceleration ratios of critical points over each phonation cycle, which have the potential for studying issues related to impact stress. A novel denoising and hybrid interpolation/extrapolation scheme is also introduced to reduce the impact of quantization errors and large sampling intervals relative to the phonation cycle. Features extracted from groups of 4 adults and 5 children show large differences for features related to asymmetry between the right and left folds and consistent differences for impact acceleration ratio.

  11. Full-field Deformation Measurement Techniques for a Rotating Composite Shaft

    NASA Technical Reports Server (NTRS)

    Kohlman, Lee W.; Ruggeri, Charles R.; Martin, Richard E.; Roberts, Gary D.; Handschuh, Robert F.; Roth, Don J.

    2012-01-01

    Test methods were developed to view global and local deformation in a composite tube during a test in which the tube is rotating at speeds and torques relevant to rotorcraft shafts. Digital image correlation (DIC) was used to provide quantitative displacement measurements during the tests. High speed cameras were used for the DIC measurements in order to capture images at sufficient frame rates and with sufficient resolution while the tube was rotating at speeds up to 5,000 rpm. Surface displacement data was resolved into cylindrical coordinates in order to measure rigid body rotation and global deformation of the tube. Tests were performed on both undamaged and impact damaged tubes in order to evaluate the capability to detect local deformation near an impact damaged site. Measurement of radial displacement clearly indicated a local buckling deformation near the impacted site in both dynamic and static tests. X-ray computed tomography (CT) was used to investigate variations in fiber architecture within the composite tube and to detect impact damage. No growth in the impact damage area was observed by DIC during dynamic testing or by x-ray CT in post test inspection of the composite tube.

  12. Evaluation of the Performance Characteristics of the CGLSS and NLDN Systems Based on Two Years of Ground-Truth Data from Launch Complex 39B, Kennedy Space Center, Florida

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Hill, Jonathan D.; Mata, Angel G.; Cummins, Kenneth L.

    2014-01-01

    From May 2011 through July 2013, the lightning instrumentation at Launch Complex 39B (LC39B) at the Kennedy Space Center, Florida, has obtained high-speed video records and field change waveforms (dE/dt and three-axis dH/dt) for 54 negative polarity return strokes whose strike termination locations and times are known with accuracy of the order of 10 m or less and 1 µs, respectively. A total of 18 strokes terminated directly on the LC39B lightning protection system (LPS), which contains three 181 m towers in a triangular configuration, an overhead catenary wire system on insulating masts, and nine down conductors. An additional 9 strokes terminated on the 106 m lightning protection mast of Launch Complex 39A (LC39A), which is located about 2.7 km southeast of LC39B. The remaining 27 return strokes struck either the ground or low-elevation grounded objects within about 500 m of the LC39B LPS. Leader/return stroke sequences were imaged at 3200 frames/sec by a network of six Phantom V310 high-speed video cameras. Each of the three towers on LC39B had two high-speed cameras installed at the 147 m level with overlapping fields of view of the center of the pad. The locations of the strike points of the 54 return strokes have been compared to time-correlated reports of the Cloud-to-Ground Lightning Surveillance System (CGLSS) and the National Lightning Detection Network (NLDN), and the results of this comparison will be presented and discussed.

  13. High-speed, two-dimensional synchrotron white-beam x-ray radiography of spray breakup and atomization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halls, Benjamin R.; Radke, Christopher D.; Reuter, Benjamin J.

    High-speed, two-dimensional synchrotron x-ray radiography and phase-contrast imaging are demonstrated in propulsion sprays. Measurements are performed at the 7-BM beamline at the Advanced Photon Source user facility at Argonne National Laboratory using a recently developed broadband x-ray white beam. This novel enhancement allows for high-speed, high-fidelity x-ray imaging for the community at large. Quantitative path-integrated liquid distributions and spatio-temporal dynamics of the sprays were imaged with a LuAG:Ce scintillator optically coupled to a high-speed CMOS camera. Images are collected with a microscope objective at frame rates of 20 kHz and with a macro lens at 120 kHz, achieving spatial resolutions of 12 μm and 65 μm, respectively. Imaging with and without potassium iodide (KI) as a contrast-enhancing agent is compared, and the effects of broadband attenuation and spatial beam characteristics are determined through modeling and experimental calibration. In addition, phase contrast is used to differentiate liquid streams with varying concentrations of KI. The experimental approach is applied to different spray conditions, including quantitative measurements of mass distribution during primary atomization and qualitative visualization of turbulent binary fluid mixing.

  14. Motion analysis for duplicate frame removal in wireless capsule endoscope

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-Gyu; Choi, Min-Kook; Lee, Sang-Chul

    2011-03-01

    Wireless capsule endoscopy (WCE) has been intensively researched recently due to its convenience for diagnosis and its extended detection coverage of some diseases. Typically, a full recording covering the entire human digestive system requires about 8 to 12 hours, with the patient carrying a capsule endoscope and a portable image receiver/recorder unit, and produces 120,000 image frames on average. Despite the benefits of close examination, WCE-based testing poses a barrier to quick diagnosis: a trained diagnostician must examine a huge number of images for close investigation, normally over 2 hours. The main purpose of our work is to present a novel machine-vision approach that reduces diagnosis time by automatically detecting duplicated recordings in the small intestine caused by backward camera movement, which typically contain redundant information. The developed technique could be integrated with a visualization tool that supports intelligent inspection methods such as automatic play-speed control. Our experimental results show the high accuracy of the technique, which detected 989 duplicate image frames out of 10,000 (equivalent to 9.9% data reduction) in a WCE video from a real human subject. With selected parameters, we achieved a correct detection ratio of 92.85% and a false detection ratio of 13.57%.
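    A duplicate-frame detector in the spirit of this abstract can be sketched with normalized cross-correlation between consecutive frames. The threshold and function names below are assumptions for illustration; the paper's actual method analyzes backward camera motion rather than raw similarity.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two grayscale frames."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def find_duplicates(frames, threshold=0.95):
    """Return indices of frames highly correlated with their predecessor."""
    return [i for i in range(1, len(frames))
            if ncc(frames[i - 1], frames[i]) > threshold]

# Synthetic example: f1 is a near-duplicate of f0, f2 is an unrelated frame.
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = f0 + rng.normal(0.0, 0.01, f0.shape)
f2 = rng.random((64, 64))
dups = find_duplicates([f0, f1, f2])
```

    Frames flagged this way could be skipped or played faster by the visualization tool, which is the data-reduction effect the abstract reports.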

  15. Trends in high-speed camera development in the Union of Soviet Socialist Republics /USSR/ and People's Republic of China /PRC/

    NASA Astrophysics Data System (ADS)

    Hyzer, W. G.

    1981-10-01

    Significant advances in high-speed camera technology are being made in the Union of Soviet Socialist Republics (USSR) and People's Republic of China (PRC), which were revealed to the author during recent visits to both of these countries. Past and present developments in high-speed cameras are described in this paper based on personal observations by the author and on private communications with other technical observers. Detailed specifications on individual instruments are presented in those specific cases where such information has been revealed and could be verified.

  16. The dynamics and morphology of sprites

    NASA Astrophysics Data System (ADS)

    Moudry, Dana

    In 1999 the University of Alaska Fairbanks fielded a 1000 fields-per-second intensified CCD camera to study sprites and associated upper atmospheric phenomena occurring above active thunderstorms as part of the NASA Sprites99 campaign. The exceptional clarity and definition obtained by this camera the night of August 18, 1999, provide the most detailed image record of these phenomena that has been obtained to date. The results of a frame-by-frame analysis of the data permit an orderly classification of upper atmospheric optical phenomena, and are the subject matter of this thesis. The images show that both elves and halos, which are diffuse emissions preceding sprites, are largely spatially unstructured. Observations of sprites initiating outside of the main parts of halos, and without a halo, suggest sprites are initiated primarily from locations of atmospheric composition and density inhomogeneities. All sprites appear to start as tendrils descending from approximately 75 km altitude, and may form other dynamic or stationary features. Dynamic features include downward-developing tendrils and upward-developing branches. Stationary features include beads, columns, and diffuse "puffs," all of which have durations greater than 1 ms. Stationary sprite features are responsible for a significant fraction of the total optical emissions of sprites. Velocities of sprite tendrils were measured. After initial speeds of 10^6-10^7 m/s, sprite tendrils may slow to 10^5 m/s. Similarly, on some occasions the dim optical emission left behind by the descending tendrils may expand horizontally, with speeds on the order of 10^5 m/s. The volume excited by the sprite tendrils may rebrighten after 30-100 ms in the form of one of three different sprite after-effects collectively termed "crawlers." A "smooth crawler" consists of several beads moving upward (~10^5 m/s) without a large vertical extent, with "smooth" dynamics at 1 ms timescale. "Embers" are bead-like forms which send a downward-propagating luminous structure towards the cloudtop at speeds of 10^6 m/s, and have irregular dynamics at 1 ms timescales. In TV-rate observations, the downward-propagating structure of an ember is averaged out and appears as a vertically-extended ribbon above the clouds. The third kind of crawler, the so-called "palm tree," appears similar to an ember at TV-rates, but with a wider crown at top.

  17. A New Hyperspectral Designed for Small UAS Tested in Real World Applications

    NASA Astrophysics Data System (ADS)

    Marcucci, E.; Saiet, E., II; Hatfield, M. C.

    2014-12-01

    The ability to investigate landscape and vegetation from airborne instruments offers many advantages, including high-resolution data, the ability to deploy instruments over a specific area, and repeat measurements. The Alaska Center for Unmanned Aircraft Systems Integration (ACUASI) has recently integrated a hyperspectral imaging camera onto their Ptarmigan hexacopter. The Rikola Hyperspectral Camera, manufactured by VTT and Rikola, Ltd., is capable of obtaining data within the 400-950 nm range with an accuracy of ~1 nm. Using the compact flash on the UAV limited the maximum number of channels to 24 this summer. The camera uses a single frame to sequentially record the spectral bands of interest in a 37° field-of-view. Because the camera collects data as single frames, it takes a finite amount of time to compile the complete spectrum. Although each frame takes only 5 nanoseconds, co-registration of frames is still required. The hovering ability of the hexacopter helps eliminate frame shift. GPS records data for incorporation into a larger dataset. Conservatively, the Ptarmigan can fly at an altitude of 400 feet, for 15 minutes, and 7,000 feet away from the operator. The airborne hyperspectral instrument will be extremely useful to scientists as a platform that can provide data on request. Since the spectral range of the camera is ideal for the study of vegetation, this study 1) examines seasonal changes of vegetation in the Fairbanks area, 2) ground-truths satellite measurements, and 3) ties vegetation conditions around a weather tower to the tower readings. Through this proof of concept, ACUASI provides a means for scientists to request the most up-to-date and location-specific data for their field sites. Additionally, airborne instruments offer much higher resolution than satellite data, may be readily tasked, and have advantages over manned flights in terms of manpower and cost.

  18. A simple demonstration when studying the equivalence principle

    NASA Astrophysics Data System (ADS)

    Mayer, Valery; Varaksina, Ekaterina

    2016-06-01

    The paper proposes a lecture experiment that can be demonstrated when studying the equivalence principle formulated by Albert Einstein. The demonstration consists of creating stroboscopic photographs of a ball moving along a parabola in Earth's gravitational field. In the first experiment, a camera is stationary relative to Earth's surface. In the second, the camera falls freely downwards with the ball, allowing students to see that the ball moves uniformly and rectilinearly relative to the frame of reference of the freely falling camera. The equivalence principle explains this result, as it is always possible to propose an inertial frame of reference for a small region of a gravitational field, where space-time effects of curvature are negligible.
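    The kinematics behind this demonstration can be checked numerically; the launch values below are illustrative, not taken from the paper. In the ground frame the ball traces a parabola, but subtracting the position of a freely falling camera released at the same instant cancels the gravitational term exactly, leaving uniform straight-line motion.

```python
import numpy as np

g = 9.81                       # gravitational acceleration, m/s^2
t = np.linspace(0.0, 0.5, 6)   # sample times of the stroboscopic frames
v0x, v0y = 2.0, 3.0            # assumed launch velocity components, m/s

# Ground frame: parabolic trajectory of the ball.
ball_x = v0x * t
ball_y = v0y * t - 0.5 * g * t**2

# Freely falling camera released from rest at the origin at t = 0.
cam_y = -0.5 * g * t**2

# Relative to the camera the -g*t^2/2 terms cancel, leaving straight-line,
# constant-velocity motion: rel_x = v0x*t, rel_y = v0y*t.
rel_x = ball_x
rel_y = ball_y - cam_y
```

    This is exactly what the stroboscopic photographs show: equally spaced ball images along a straight line in the falling camera's frame.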

  19. High-frame-rate digital radiographic videography

    NASA Astrophysics Data System (ADS)

    King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott

    1994-10-01

    High-speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe the preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image-intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high-frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high-frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an x-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM demonstrated the system response to a high-velocity/high-contrast target. By gating the P-20 phosphor image from the x-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  20. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  1. Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.

  2. Image system for three dimensional, 360°, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.

  3. Electrostatic forward-viewing scanning probe for Doppler optical coherence tomography using a dissipative polymer catheter.

    PubMed

    Munce, Nigel R; Mariampillai, Adrian; Standish, Beau A; Pop, Mihaela; Anderson, Kevan J; Liu, George Y; Luk, Tim; Courtney, Brian K; Wright, Graham A; Vitkin, I Alex; Yang, Victor X D

    2008-04-01

    A novel flexible scanning optical probe is constructed with a finely etched optical fiber strung through a platinum coil in the lumen of a dissipative polymer. The packaged probe is 2.2 mm in diameter with a rigid length of 6 mm when using a ball lens or 12 mm when scanning the fiber proximal to a gradient-index (GRIN) lens. Driven by constant high voltage (1-3 kV) at low current (<5 μA), the probe oscillates to provide a wide forward-viewing angle (13 degrees and 33 degrees with ball and GRIN lens designs, respectively) and high-frame-rate (10-140 fps) operation. Motion of the probe tip is observed with a high-speed camera and compared with theory. Optical coherence tomography (OCT) imaging with the probe is demonstrated with a wavelength-swept source laser. Images of an IR card as well as in vivo Doppler OCT images of a tadpole heart are presented. This optomechanical design offers a simple, inexpensive method to obtain a high-frame-rate forward-viewing scanning probe.

  4. Nonspherical laser-induced cavitation bubbles

    NASA Astrophysics Data System (ADS)

    Lim, Kang Yuan; Quinto-Su, Pedro A.; Klaseboer, Evert; Khoo, Boo Cheong; Venugopalan, Vasan; Ohl, Claus-Dieter

    2010-01-01

    The generation of arbitrarily shaped nonspherical laser-induced cavitation bubbles is demonstrated with an optical technique. The nonspherical bubbles are formed using laser intensity patterns shaped by a spatial light modulator using linear absorption inside a liquid gap with a thickness of 40 μm. In particular we demonstrate the dynamics of elliptic, toroidal, square, and V-shaped bubbles. The bubble dynamics is recorded with a high-speed camera at framing rates of up to 300,000 frames per second. The observed bubble evolution is compared to predictions from an axisymmetric boundary element simulation which provides good qualitative agreement. Interesting dynamic features that are observed in both the experiment and simulation include the inversion of the major and minor axes for elliptical bubbles, the rotation of the shape for square bubbles, and the formation of a unidirectional jet for V-shaped bubbles. Further, we demonstrate that specific bubble shapes can either be formed directly through the intensity distribution of a single laser focus, or indirectly using secondary bubbles that either confine the central bubble or coalesce with the main bubble. The former approach provides the ability to generate in principle any complex bubble geometry.

  5. Imaging intracellular protein dynamics by spinning disk confocal microscopy

    PubMed Central

    Stehbens, Samantha; Pemble, Hayley; Murrow, Lindsay; Wittmann, Torsten

    2012-01-01

    The palette of fluorescent proteins has grown exponentially over the last decade, and as a result live imaging of cells expressing fluorescently tagged proteins is becoming increasingly mainstream. Spinning disk confocal (SDC) microscopy is a high-speed optical sectioning technique, and a method of choice to observe and analyze intracellular fluorescent protein dynamics at high spatial and temporal resolution. In an SDC system, a rapidly rotating pinhole disk generates thousands of points of light that scan the specimen simultaneously, which allows direct capture of the confocal image with low-noise, scientific-grade cooled charge-coupled device (CCD) cameras, and can achieve frame rates of up to 1,000 frames per second. In this chapter we describe important components of a state-of-the-art spinning disk system optimized for live cell microscopy, and provide a rationale for specific design choices. We also give guidelines on how other imaging techniques such as total internal reflection fluorescence (TIRF) microscopy or spatially controlled photoactivation can be coupled with SDC imaging, and provide a short protocol on how to generate cell lines stably expressing fluorescently tagged proteins by lentivirus-mediated transduction. PMID:22264541

  6. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. As the cameras use fisheye lenses, they cannot be well approximated by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparing with results from a commercially available Vicon motion capture system.

  7. Probing the nanoscale with high-speed interferometry of an impacting drop

    NASA Astrophysics Data System (ADS)

    Thoroddsen, S. T.; Li, E. Q.; Vakarelski, I. U.; Langley, K.

    2017-02-01

    The simple phenomenon of a water drop falling onto a glass plate may seem like a trivial fluid mechanics problem. However, detailed imaging has shown that this process is highly complex and a small air bubble is always entrapped under the drop when it makes contact with the solid. This bubble can interfere with the uniformity of spray coatings and degrade inkjet fabrication of displays, etc. We will describe how we use high-speed interferometry at 5 million frames per second to understand the details of this process. As the impacting drop approaches the solid, the dynamics are characterized by a balance between the lubrication pressure in the thin air layer and the inertia of the bottom of the drop. This deforms the drop, forming a dimple at its bottom and making the drop touch the surface along a ring, thereby entrapping the air layer, which is typically 1-3 μm thick. This air layer can be highly compressed and the deceleration of the bottom of the drop can be as large as 300,000 g. We describe how the thickness evolution of the lubricating air layer is extracted by following the interference fringes between frames. Two-color interferometry is also used to extract absolute layer thicknesses. Finally, we identify the effects of nanometric surface roughness on the first contact of the drop with the substrate. Here we need to resolve the 100 nm thickness changes occurring during 200 ns intervals, requiring these state-of-the-art high-speed cameras. Surprisingly, we see a ring of micro-bubbles marking the first contact of the drop with the glass, only for microscope slides, which have a typical roughness of 20 nm, while such rings are absent for drop impacts onto molecularly smooth mica surfaces.
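    The fringe-to-thickness conversion underlying this kind of measurement can be illustrated with a back-of-envelope sketch. The wavelength below is an assumed value, and the paper's two-color method for absolute thickness is not reproduced here: in reflection interferometry, each successive bright-to-bright fringe corresponds to a change in optical gap of half a wavelength in the gap medium.

```python
# Assumed illumination wavelength and refractive index of the air gap.
wavelength_nm = 532.0
n_gap = 1.0

def thickness_change_nm(fringes_passed):
    """Change in air-layer thickness for a given number of fringe shifts.

    One full fringe shift corresponds to a path-length change of one
    wavelength, i.e. a thickness change of lambda / (2 * n_gap) on reflection.
    """
    return fringes_passed * wavelength_nm / (2.0 * n_gap)

# e.g. four fringes passing a fixed point between two consecutive frames
dh = thickness_change_nm(4)   # thickness change in nm
```

    Counting fringe motion frame-to-frame gives relative thickness evolution only; resolving the sign ambiguity and obtaining absolute thickness is what motivates the two-color scheme mentioned in the abstract.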

  8. System selects framing rate for spectrograph camera

    NASA Technical Reports Server (NTRS)

    1965-01-01

    A circuit in a spectrograph monitor reflects zero-order light from the incoming radiation to a photomultiplier, providing an error signal that controls the rate at which film is advanced and driven through the camera.

  9. Comet Wild 2 Up Close and Personal

    NASA Technical Reports Server (NTRS)

    2004-01-01

    On January 2, 2004 NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced 'Vilt-2'). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but 'stretched' so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter.

  10. Data rate enhancement of optical camera communications by compensating inter-frame gaps

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy Thong; Park, Youngil

    2017-07-01

    Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and the image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of OCC systems, it is still much lower than that of photodiode-based LiFi systems. One major reason for this low data rate is the inter-frame gap (IFG) of the image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for this IFG efficiently using an interleaved Hamming coding scheme. The proposed scheme is implemented and its performance is measured.
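    The idea of spreading codeword bits across frames, so that symbols lost during an IFG remain correctable, can be sketched with a Hamming(7,4) code. This is a minimal illustration under assumed parameters, not the authors' exact scheme: interleaving ensures that a short burst of lost symbols hits each codeword at most once, within the code's single-error correction capability.

```python
import numpy as np

# Systematic Hamming(7,4): generator G = [I4 | P] and parity-check H = [P^T | I3].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(nibble):
    """Encode 4 data bits into a 7-bit codeword."""
    return (np.array(nibble) @ G) % 2

def decode(word):
    """Correct up to one flipped bit and return the 4 data bits."""
    w = np.array(word).copy()
    syndrome = (H @ w) % 2
    if syndrome.any():  # the syndrome equals the column of H at the error position
        col = [tuple(H[:, j]) for j in range(7)].index(tuple(syndrome))
        w[col] ^= 1
    return [int(b) for b in w[:4]]

# Interleave two codewords bit-by-bit; a 2-symbol burst (e.g. symbols lost to
# an inter-frame gap) then corrupts each codeword only once.
c1, c2 = encode([1, 0, 1, 1]), encode([0, 1, 1, 0])
stream = np.ravel(np.column_stack([c1, c2]))  # c1[0], c2[0], c1[1], c2[1], ...
stream[0] ^= 1
stream[1] ^= 1                                 # burst hits adjacent symbols
received = stream.reshape(7, 2)                # de-interleave
d1, d2 = decode(received[:, 0]), decode(received[:, 1])
```

    Without interleaving, the same 2-symbol burst would put both errors into one codeword and exceed Hamming(7,4)'s correction capability; with it, both data nibbles are recovered intact.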

  11. Speed cameras for the prevention of road traffic injuries and deaths.

    PubMed

    Wilson, Cecilia; Willis, Charlene; Hendrikz, Joan K; Le Brocque, Robyne; Bellamy, Nicholas

    2010-11-10

    It is estimated that by 2020, road traffic crashes will have moved from ninth to third in the world ranking of burden of disease, as measured in disability-adjusted life years. The prevention of road traffic injuries is of global public health importance. Measures aimed at reducing traffic speed are considered essential to preventing road injuries; the use of speed cameras is one such measure. To assess whether the use of speed cameras reduces the incidence of speeding, road traffic crashes, injuries and deaths, we searched the following electronic databases covering all available years up to March 2010: the Cochrane Library, MEDLINE (WebSPIRS), EMBASE (WebSPIRS), TRANSPORT, IRRD (International Road Research Documentation), TRANSDOC (European Conference of Ministers of Transport databases), Web of Science (Science and Social Science Citation Index), PsycINFO, CINAHL, EconLit, WHO database, Sociological Abstracts, Dissertation Abstracts, and Index to Theses. Randomised controlled trials, interrupted time series and controlled before-after studies that assessed the impact of speed cameras on speeding, road crashes, crashes causing injury and fatalities were eligible for inclusion. We independently screened studies for inclusion, extracted data, assessed methodological quality, reported study authors' outcomes and, where possible, calculated standardised results based on the information available in each study. Due to considerable heterogeneity between and within included studies, a meta-analysis was not appropriate. Thirty-five studies met the inclusion criteria. Compared with controls, the relative reduction in average speed ranged from 1% to 15% and the reduction in the proportion of vehicles speeding ranged from 14% to 65%. In the vicinity of camera sites, the pre/post reductions ranged from 8% to 49% for all crashes and 11% to 44% for fatal and serious injury crashes. Compared with controls, the relative improvement in pre/post injury crash proportions ranged from 8% to 50%. Despite the methodological limitations and the variability in degree of signal-to-noise effect, the consistency of reported reductions in speed and crash outcomes across all studies shows that speed cameras are a worthwhile intervention for reducing the number of road traffic injuries and deaths. However, whilst the evidence base clearly demonstrates a positive direction of the effect, the overall magnitude of this effect is currently not deducible due to heterogeneity and lack of methodological rigour. More studies of a scientifically rigorous and homogeneous nature are necessary to provide the answer to the magnitude of effect.

  12. Speed cameras for the prevention of road traffic injuries and deaths.

    PubMed

    Wilson, Cecilia; Willis, Charlene; Hendrikz, Joan K; Le Brocque, Robyne; Bellamy, Nicholas

    2010-10-06

    It is estimated that by 2020, road traffic crashes will have moved from ninth to third in the world ranking of burden of disease, as measured in disability-adjusted life years. The prevention of road traffic injuries is of global public health importance. Measures aimed at reducing traffic speed are considered essential to preventing road injuries; the use of speed cameras is one such measure. To assess whether the use of speed cameras reduces the incidence of speeding, road traffic crashes, injuries and deaths, we searched the following electronic databases covering all available years up to March 2010: the Cochrane Library, MEDLINE (WebSPIRS), EMBASE (WebSPIRS), TRANSPORT, IRRD (International Road Research Documentation), TRANSDOC (European Conference of Ministers of Transport databases), Web of Science (Science and Social Science Citation Index), PsycINFO, CINAHL, EconLit, WHO database, Sociological Abstracts, Dissertation Abstracts, and Index to Theses. Randomised controlled trials, interrupted time series and controlled before-after studies that assessed the impact of speed cameras on speeding, road crashes, crashes causing injury and fatalities were eligible for inclusion. We independently screened studies for inclusion, extracted data, assessed methodological quality, reported study authors' outcomes and, where possible, calculated standardised results based on the information available in each study. Due to considerable heterogeneity between and within included studies, a meta-analysis was not appropriate. Thirty-five studies met the inclusion criteria. Compared with controls, the relative reduction in average speed ranged from 1% to 15% and the reduction in the proportion of vehicles speeding ranged from 14% to 65%. In the vicinity of camera sites, the pre/post reductions ranged from 8% to 49% for all crashes and 11% to 44% for fatal and serious injury crashes. Compared with controls, the relative improvement in pre/post injury crash proportions ranged from 8% to 50%. Despite the methodological limitations and the variability in degree of signal-to-noise effect, the consistency of reported reductions in speed and crash outcomes across all studies shows that speed cameras are a worthwhile intervention for reducing the number of road traffic injuries and deaths. However, whilst the evidence base clearly demonstrates a positive direction of the effect, the overall magnitude of this effect is currently not deducible due to heterogeneity and lack of methodological rigour. More studies of a scientifically rigorous and homogeneous nature are necessary to provide the answer to the magnitude of effect.

  13. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  14. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  15. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high-frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570

  16. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive-scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12-bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control camera modes such as gain and pixel binning, as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat-field correction, brightness and contrast control, last-frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering, and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high-frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents, and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  17. Proposed patient motion monitoring system using feature point tracking with a web camera.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image-processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image-processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all the frames. The software generates a text file containing the calculated motion for each frame and saves the recorded video as a compressed audio video interleave (AVI) file. We proposed a patient motion monitoring system using a web camera, which is simple and convenient to set up, to increase the safety of treatment delivery.
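    At a single pyramid level, the Lucas-Kanade method the authors apply (via OpenCV) reduces to a 2×2 least-squares solve over each feature window. A pure-NumPy sketch of that core step, with a synthetic shifted Gaussian blob standing in for two webcam frames (small-motion assumption; a real implementation would use OpenCV's pyramidal, iterative version):

```python
import numpy as np

def lucas_kanade_patch(prev, curr):
    """One Lucas-Kanade least-squares step: estimate the (dx, dy)
    translation of curr relative to prev over a whole patch."""
    Iy, Ix = np.gradient(prev)            # spatial gradients (rows, cols)
    It = curr - prev                      # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)          # least-squares (dx, dy)

# synthetic "frames": a Gaussian blob shifted by (0.6, 0.3) pixels
y, x = np.mgrid[0:64, 0:64].astype(float)
frame0 = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
frame1 = np.exp(-((x - 32.6) ** 2 + (y - 32.3) ** 2) / 50.0)
dx, dy = lucas_kanade_patch(frame0, frame1)
```

    In practice one would call `cv2.calcOpticalFlowPyrLK`, which wraps this step in an image pyramid and iterative refinement so that larger motions can be tracked.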

  18. Guidebook to School Publications Photography.

    ERIC Educational Resources Information Center

    Glowacki, Joseph W.

    This guidebook for school publications photographers discusses both the self-image of the publications photographer and various aspects of photography, including components of the camera, shutter speed and action pictures, light meters, handling cameras, lenses, developing film, pushing film beyond the emulsion-speed rating recommended by the…

  19. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E allows for a relatively high (30%) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with ≤10 e-/pixel/second dark current, ≤25 e- read noise, a gain of 2.0 and ≤0.1% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.

  20. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    NASA Technical Reports Server (NTRS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with 10 e-/pixel/second dark current, 25 e- read noise, a gain of 2.0 +/- 0.5 and 1.0 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera; dark current, read noise, camera gain and residual non-linearity.

  1. A solid state lightning propagation speed sensor

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Rust, W. David

    1989-01-01

    A device to measure the propagation speeds of cloud-to-ground lightning has been developed. The lightning propagation speed (LPS) device consists of eight solid-state silicon photodetectors mounted behind precision horizontal slits in the focal plane of a 50-mm lens on a 35-mm camera. Although the LPS device produces results similar to those obtained from a streaking camera, the LPS device has the advantages of smaller size, lower cost, mobile use, and easier data collection and analysis. The maximum accuracy for the LPS is 0.2 microseconds, compared with about 0.8 microseconds for the streaking camera. It is found that the return stroke propagation speed for triggered lightning differs from that for natural lightning if measurements are taken over channel segments less than 500 m. It is suggested that there are no significant differences between the propagation speeds of positive and negative flashes. Also, differences between natural and triggered dart leaders are discussed.
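    The slit geometry implies a simple speed estimate: each slit images a fixed height band of the lightning channel, so the propagation speed is the channel segment between adjacent slits divided by the delay between their photodetector triggers. A hedged sketch under a pinhole-camera assumption (the slit spacing, range, and delay values are hypothetical, not from the paper):

```python
def channel_segment_m(slit_spacing_mm, focal_length_mm, range_m):
    """Height of lightning channel imaged between adjacent slits,
    using the pinhole-camera similar-triangles relation."""
    return (slit_spacing_mm / focal_length_mm) * range_m

def propagation_speed_ms(segment_m, delay_us):
    """Speed from the delay between adjacent photodetector triggers."""
    return segment_m / (delay_us * 1e-6)

# hypothetical example: 2 mm slit spacing, 50 mm lens, flash at 10 km
seg = channel_segment_m(2.0, 50.0, 10_000.0)   # 400 m of channel per slit
v = propagation_speed_ms(seg, 4.0)             # 4 us delay -> 1e8 m/s
```

    With microsecond-scale timing accuracy, speeds of order 10^8 m/s (typical of return strokes) are resolvable over a few hundred meters of channel.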

  2. Dense Region of Impact Craters

    NASA Image and Video Library

    2011-09-23

    NASA's Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14, 2011. This image was taken through the camera's clear filter. The image has a resolution of about 260 meters per pixel.

  3. Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy

    NASA Astrophysics Data System (ADS)

    Hewat, A. W.

    We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high-resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel "Kodak" KAI-4022 CCD as used in the high-performance PCO-2000 CCD camera, but our electronics requires ∼5 s for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster-readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.

  4. An evaluation of Winnipeg's photo enforcement safety program: results of time series analyses and an intersection camera experiment.

    PubMed

    Vanlaar, Ward; Robertson, Robyn; Marcoux, Kyla

    2014-01-01

    The objective of this study was to evaluate the impact of Winnipeg's photo enforcement safety program on speeding, i.e., "speed on green", and red-light running behavior at intersections as well as on crashes resulting from these behaviors. ARIMA time series analyses regarding crashes related to red-light running (right-angle crashes and rear-end crashes) and crashes related to speeding (injury crashes and property damage only crashes) occurring at intersections were conducted using monthly crash counts from 1994 to 2008. A quasi-experimental intersection camera experiment was also conducted using roadside data on speeding and red-light running behavior at intersections. These data were analyzed using logistic regression analysis. The time series analyses showed that for crashes related to red-light running, there had been a 46% decrease in right-angle crashes at camera intersections, but that there had also been an initial 42% increase in rear-end crashes. For crashes related to speeding, analyses revealed that the installation of cameras was not associated with increases or decreases in crashes. Results of the intersection camera experiment show that there were significantly fewer red light running violations at intersections after installation of cameras and that photo enforcement had a protective effect on speeding behavior at intersections. However, the data also suggest photo enforcement may be less effective in preventing serious speeding violations at intersections. Overall, Winnipeg's photo enforcement safety program had a positive net effect on traffic safety. Results from both the ARIMA time series and the quasi-experimental design corroborate one another. However, the protective effect of photo enforcement is not equally pronounced across different conditions so further monitoring is required to improve the delivery of this measure. Results from this study as well as limitations are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Head stabilization in whooping cranes

    USGS Publications Warehouse

    Kinloch, M.R.; Cronin, T.W.; Olsen, Glenn H.; Chavez-Ramirez, Felipe

    2005-01-01

    The whooping crane (Grus americana) is the tallest bird in North America, yet not much is known about its visual ecology. How these birds overcome their unusual height to identify, locate, track, and capture prey items is not well understood. There have been many studies on head and eye stabilization in large wading birds (herons and egrets), but the pattern of head movement and stabilization during foraging is unclear. Patterns of head movement and stabilization during walking were examined in whooping cranes at Patuxent Wildlife Research Center, Laurel, Maryland USA. Four whooping cranes (1 male and 3 females) were videotaped for this study. All birds were already acclimated to the presence of people and to food rewards. Whooping cranes were videotaped using both digital and Hi-8 Sony video cameras (Sony Corporation, 7-35 Kitashinagawa, 6-Chome, Shinagawa-ku, Tokyo, Japan), placed on a tripod and set at bird height in the cranes' home pens. The cranes were videotaped repeatedly, at different locations in the pens and while walking (or running) at different speeds. Rewards (meal worms, smelt, crickets and corn) were used to entice the cranes to walk across the camera's view plane. The resulting videotape was analyzed at the University of Maryland at Baltimore County. Briefly, we used a computerized reduced graphic model of a crane superimposed over each frame of analyzed tape segments by means of a custom written program (T. W. Cronin, using C++) with the ability to combine video and computer graphic input. The speed of the birds in analyzed segments ranged from 0.30 m/s to 2.64 m/s, and the proportion of time the head was stabilized ranged from 79% to 0%, respectively. The speed at which the proportion reached 0% was 1.83 m/s. The analyses suggest that the proportion of time the head is stable decreases as speed of the bird increases. In all cases, birds were able to reach their target prey with little difficulty. 
    Thus, when cranes walk while searching for food, they move at a speed that permits them to keep the head still, and the visual field immobile, at least half the time.

  6. Two-dimensional real-time imaging system for subtraction angiography using an iodine filter

    NASA Astrophysics Data System (ADS)

    Umetani, Keiji; Ueda, Ken; Takeda, Tohoru; Anno, Izumi; Itai, Yuji; Akisada, Masayoshi; Nakajima, Teiichi

    1992-01-01

    A new type of subtraction imaging system was developed using an iodine filter and a single-energy broad bandwidth monochromatized x ray. The x-ray images of coronary arteries made after intravenous injection of a contrast agent are enhanced by an energy-subtraction technique. Filter chopping of the x-ray beam switches energies rapidly, so that a nearly simultaneous pair of filtered and nonfiltered images can be made. By using a high-speed video camera, a pair of two 512 × 512 pixel images can be obtained within 9 ms. Three hundred eighty-four images (raw data) are stored in a 144-Mbyte frame memory. After phantom studies, in vivo subtracted images of coronary arteries in dogs were obtained at a rate of 15 images/s.

  7. Determination of the static friction coefficient from circular motion

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-07-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames s-1, and the videos are analyzed using Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students have to obtain the static friction coefficient by comparing the centripetal and maximum static friction forces. The experiment only requires simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.
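    The comparison the students make, required centripetal force versus maximum static friction, gives μ_s = ω²r/g at the moment the coin slips. A short sketch of that calculation (the rotation rate and radius are hypothetical example values):

```python
import math

def static_friction_coefficient(freq_hz, radius_m, g=9.81):
    """At the slip threshold the centripetal force m*omega^2*r equals
    the maximum static friction mu_s*m*g, so mu_s = omega^2 * r / g."""
    omega = 2.0 * math.pi * freq_hz   # angular speed, rad/s
    return omega ** 2 * radius_m / g

# hypothetical: a coin at r = 0.10 m slips when the turntable reaches 1.2 rev/s
mu = static_friction_coefficient(1.2, 0.10)
```

    The mass cancels, which is why the experiment only needs the slip radius and the rotation rate extracted from the high-speed video.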

  8. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.

  9. Using a high-speed movie camera to evaluate slice dropping in clinical image interpretation with stack mode viewers.

    PubMed

    Yakami, Masahiro; Yamamoto, Akira; Yanagisawa, Morio; Sekiguchi, Hiroyuki; Kubo, Takeshi; Togashi, Kaori

    2013-06-01

    The purpose of this study is to verify objectively the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communication in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4 % of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate.
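    The half-refresh-rate threshold has a simple explanation: a slice can only appear if the viewer's redraw lands on a refresh cycle of its own. A rough back-of-the-envelope model of the dropped fraction (my simplification for illustration, not the authors' measured behavior):

```python
def max_safe_paging_fps(refresh_hz):
    """A viewer that needs a full refresh cycle per rendered slice can
    show at most refresh_hz / 2 distinct slices per second."""
    return refresh_hz / 2.0

def dropped_fraction(paging_fps, refresh_hz):
    """Rough estimate: slices requested beyond the displayable rate are
    never rendered (assumes a naive, non-queueing viewer)."""
    shown = min(paging_fps, max_safe_paging_fps(refresh_hz))
    return 1.0 - shown / paging_fps
```

    For example, paging a 60 Hz display at 60 slices/s would, under this model, drop about half the slices, which is consistent in magnitude with the up-to-52.4% dropping the study observed in poorly designed viewers.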

  10. A passive terahertz video camera based on lumped element kinetic inductance detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon

    We have developed a passive 350 GHz (850 μm) video camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general-purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152-element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low-noise amplifier and room-temperature frequency-domain multiplexing electronics.

  11. False-Color Image of an Impact Crater on Vesta

    NASA Image and Video Library

    2011-08-24

    NASA's Dawn spacecraft obtained this false-color image (right) of an impact crater in asteroid Vesta's equatorial region with its framing camera on July 25, 2011. The view on the left is from the camera's clear filter.

  12. Rapid orthophoto development system.

    DOT National Transportation Integrated Search

    2013-06-01

    The DMC system procured in the project represented the state of the art in large-format digital aerial camera systems at the start of the project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...

  13. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact, a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm, and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each.
We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in real-traffic scenarios. The achieved long range (up to 45 m), high dynamic range (118 dB), high-speed (over 200 fps) 3D depth measurement, and high precision (better than 90 cm at 45 m) highlight the excellent performance of this CMOS SPAD camera for automotive applications.
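    Indirect time-of-flight ranging, as used here, recovers distance from the phase shift of the modulated illumination. A minimal four-bucket sketch (the 25 MHz modulation frequency is taken from the abstract; the simulated sample values and target distance are illustrative):

```python
import math

C = 3.0e8  # speed of light, m/s

def itof_distance(c0, c90, c180, c270, f_mod):
    """Four-bucket iTOF: recover the phase from the four correlation
    samples, then d = c * phi / (4 * pi * f_mod), unambiguous up to
    a range of c / (2 * f_mod)."""
    phi = math.atan2(c90 - c270, c0 - c180) % (2.0 * math.pi)
    return C * phi / (4.0 * math.pi * f_mod)

# simulate the four correlation samples for a target at 4.5 m, 25 MHz
f = 25.0e6
phi_true = 4.0 * math.pi * f * 4.5 / C
samples = [1.0 + 0.5 * math.cos(phi_true - k * math.pi / 2) for k in range(4)]
d = itof_distance(*samples, f)
```

    At 25 MHz the unambiguous range is c/(2f) = 6 m; practical systems extend this with lower modulation frequencies or multi-frequency phase unwrapping.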

  14. A new towed platform for the unobtrusive surveying of benthic habitats and organisms

    USGS Publications Warehouse

    Zawada, David G.; Thompson, P.R.; Butcher, J.

    2008-01-01

    Maps of coral ecosystems are needed to support many conservation and management objectives, as well as research activities. Examples include ground-truthing aerial and satellite imagery, characterizing essential habitat, assessing changes, and monitoring the progress of restoration efforts. To address some of these needs, the U.S. Geological Survey developed the Along-Track Reef-Imaging System (ATRIS), a boat-based sensor package for mapping shallow-water benthic environments. ATRIS consists of a digital still camera, a video camera, and an acoustic depth sounder affixed to a moveable pole. This design, however, restricts its deployment to clear waters less than 10 m deep. To overcome this limitation, a towed version has been developed, referred to as Deep ATRIS. The system is based on a lightweight, computer-controlled, towed vehicle that is capable of following a programmed diving profile. The vehicle is 1.3 m long with a 63-cm wing span and can carry a wide variety of research instruments, including CTDs, fluorometers, transmissometers, and cameras. Deep ATRIS is currently equipped with a high-speed (20 frames · s-1) digital camera, custom-built light-emitting-diode lights, a compass, a 3-axis orientation sensor, and a nadir-looking altimeter. The vehicle dynamically adjusts its altitude to maintain a fixed height above the seafloor. The camera has a 29° x 22° field-of-view and captures color images that are 1360 x 1024 pixels in size. GPS coordinates are recorded for each image. A gigabit ethernet connection enables the images to be displayed and archived in real time on the surface computer. Deep ATRIS has a maximum tow speed of 2.6 m · s-1 and a theoretical operating tow-depth limit of 27 m. With an improved tow cable, the operating depth can be extended to 90 m. Here, we present results from the initial sea trials in the Gulf of Mexico and Biscayne National Park, Florida, USA, and discuss the utility of Deep ATRIS for mapping coral reef habitats. Several example mosaics illustrate the high-quality imagery that can be obtained with this system. The images also reveal the potential for unobtrusive animal observations; fish and sea turtles are unperturbed by the presence of Deep ATRIS.

  15. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
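    The block-match step in the recovery pipeline is, at its core, an exhaustive sum-of-absolute-differences (SAD) search over a small window. A self-contained sketch of a generic block matcher (my simplification for illustration, not the authors' exact algorithm; block size and search range are arbitrary):

```python
import numpy as np

def block_match(ref, cur, top, left, bsize, search):
    """Exhaustive SAD search: find where the block of `cur` at
    (top, left) came from in `ref`, within +/- `search` pixels."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# demo: a frame shifted by (dy, dx) = (2, -1) relative to the reference
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(-2, 1), axis=(0, 1))
mv = block_match(ref, cur, top=20, left=20, bsize=8, search=4)
```

    Restricting the search to a small window around each block is what yields the reported reduction in motion-estimation time compared with unconstrained search.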

  16. SPINS-IND: Pellet injector for fuelling of magnetically confined fusion systems.

    PubMed

    Gangradey, R; Mishra, J; Mukherjee, S; Panchal, P; Nayak, P; Agarwal, J; Saxena, Y C

    2017-06-01

    Using a Gifford-McMahon cycle cryocooler based refrigeration system, a single barrel hydrogen pellet injection (SPINS-IND) system is indigenously developed at Institute for Plasma Research, India. The injector is based on a pipe gun concept, where a pellet formed in situ in the gun barrel is accelerated to high speed using high pressure light propellant gas. The pellet size is decided by considering the Greenwald density limit and its speed is decided by considering a neutral gas shielding model based scaling law. The pellet shape is cylindrical of dimension (1.6 mm ℓ × 1.8 mm φ). For pellet ejection and acceleration, a fast opening valve of short opening duration is installed at the breech of the barrel. A three-stage differential pumping system is used to restrict the flow of the propellant gas into the plasma vacuum vessel. Diagnostic systems such as light gate and fast imaging camera (240 000 frames/s) are employed to measure the pellet speed and size, respectively. A trigger circuit and a programmable logic controller based integrated control system developed on LabVIEW enables to control the pellet injector remotely. Using helium as a propellant gas, the pellet speed is varied in the range 650 m/s-800 m/s. The reliability of pellet formation and ejection is found to be more than 95%. This paper describes the details of SPINS-IND and its test results.

  17. SPINS-IND: Pellet injector for fuelling of magnetically confined fusion systems

    NASA Astrophysics Data System (ADS)

    Gangradey, R.; Mishra, J.; Mukherjee, S.; Panchal, P.; Nayak, P.; Agarwal, J.; Saxena, Y. C.

    2017-06-01

    Using a Gifford-McMahon cycle cryocooler based refrigeration system, a single barrel hydrogen pellet injection (SPINS-IND) system is indigenously developed at Institute for Plasma Research, India. The injector is based on a pipe gun concept, where a pellet formed in situ in the gun barrel is accelerated to high speed using high pressure light propellant gas. The pellet size is decided by considering the Greenwald density limit and its speed is decided by considering a neutral gas shielding model based scaling law. The pellet shape is cylindrical of dimension (1.6 mm ℓ × 1.8 mm φ). For pellet ejection and acceleration, a fast opening valve of short opening duration is installed at the breech of the barrel. A three-stage differential pumping system is used to restrict the flow of the propellant gas into the plasma vacuum vessel. Diagnostic systems such as light gate and fast imaging camera (240 000 frames/s) are employed to measure the pellet speed and size, respectively. A trigger circuit and a programmable logic controller based integrated control system developed on LabVIEW enables to control the pellet injector remotely. Using helium as a propellant gas, the pellet speed is varied in the range 650 m/s-800 m/s. The reliability of pellet formation and ejection is found to be more than 95%. This paper describes the details of SPINS-IND and its test results.

  18. Full-field transient vibrometry of the human tympanic membrane by local phase correlation and high-speed holography

    PubMed Central

    Dobrev, Ivo; Furlong, Cosme; Cheng, Jeffrey T.; Rosowski, John J.

    2014-01-01

    Understanding the human hearing process would be helped by quantification of the transient mechanical response of the human ear, including the human tympanic membrane (TM or eardrum). We propose a new hybrid high-speed holographic system (HHS) for acquisition and quantification of the full-field nanometer transient (i.e., >10 kHz) displacement of the human TM. We have optimized and implemented a 2+1 frame local correlation (LC) based phase sampling method in combination with a high-speed (i.e., >40 kfps) camera acquisition system. To our knowledge, there is currently no existing system that provides such capabilities for the study of the human TM. The LC sampling method has a displacement difference of <11 nm relative to measurements obtained by a four-phase step algorithm. Comparisons between our high-speed acquisition system and a laser Doppler vibrometer indicate differences of <10 μs. The high temporal (i.e., >40 kHz) and spatial (i.e., >100k data points) resolution of our HHS enables parallel measurements of all points on the surface of the TM, which allows quantification of spatially dependent motion parameters, such as modal frequencies and acoustic delays. Such capabilities could allow inferring local material properties across the surface of the TM. PMID:25191832

  19. Comet Wild 2 Up Close and Personal

    NASA Image and Video Library

    2004-01-02

    On January 2, 2004 NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced "Vilt-2"). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during close encounter. The exposure time was 10 milliseconds. The two frames are actually of 1 single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but "stretched" so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter. http://photojournal.jpl.nasa.gov/catalog/PIA05571

  20. A state observer for using a slow camera as a sensor for fast control applications

    NASA Astrophysics Data System (ADS)

    Gahleitner, Reinhard; Schagerl, Martin

    2013-03-01

    This contribution addresses a problem that often arises in vision-based control when a camera is used as a sensor for fast control applications, or more precisely, when the sample rate of the control loop is higher than the frame rate of the camera. In control applications for mechanical axes, e.g. in robotics or automated production, a camera with some image processing can serve as a sensor to detect positions or angles. The sample time in these applications is typically in the range of a few milliseconds or less, which would demand a camera with a frame rate of up to 1000 fps. The presented solution is a special state observer that can work with a slower, and therefore cheaper, camera to estimate the state variables at the higher sample rate of the control loop. To simplify the image processing for the determination of positions or angles and to make it more robust, LED markers are applied to the plant. Simulation and experimental results show that the concept can be used even if the plant is unstable, as with the inverted pendulum.
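    The core idea of predicting at the control rate and correcting only when a camera frame arrives can be sketched as a Kalman-style observer. The double-integrator model, the 1 kHz/50 fps rates, and the noise levels below are illustrative assumptions, not the paper's design:

```python
import numpy as np

dt = 1e-3                                 # 1 kHz control loop
A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
H = np.array([[1.0, 0.0]])                # camera measures position only
Q = np.diag([1e-8, 1e-6])                 # assumed process noise
R = np.array([[1e-6]])                    # assumed measurement noise

x_true = np.array([0.0, 0.3])             # constant-velocity target
x_est = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(0)

for k in range(1000):
    x_true = A @ x_true
    x_est = A @ x_est                     # predict at every control step
    P = A @ P @ A.T + Q
    if k % 20 == 0:                       # camera frame arrives (50 fps)
        z = H @ x_true + rng.normal(0.0, 1e-3)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(2) - K @ H) @ P

print(abs(x_est[0] - x_true[0]))          # small: tracked between frames
```

    Between frames the observer coasts on the model; each frame pulls the estimate back, so the control loop always has a fresh state estimate despite the slow sensor.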

  1. Experimental investigation on the phenomena around the onset nucleate boiling during the impacting of a droplet on the hot surface

    NASA Astrophysics Data System (ADS)

    Mitrakusuma, Windy H.; Deendarlianto, Kamal, Samsul; Indarto, Nuriyadi, M.

    2016-06-01

    The onset of nucleate boiling of a droplet impacting a hot surface was investigated. Three kinds of surfaces were employed to examine the effect of wettability: normal stainless steel (NSS), stainless steel with a TiO2 coating (UVN), and stainless steel with a TiO2 coating irradiated by ultraviolet light. Droplets 2.4 mm in diameter were released at different Weber numbers. Images were recorded with a high-speed camera at 1000 fps. The boiling regimes are identified as natural convection, nucleate boiling, critical heat flux, transition, and film boiling. The present report focuses on the onset of nucleate boiling in the droplet. Nucleate boiling occurs when bubbles are generated. These bubbles are probably caused by nucleation on impurities within the liquid rather than at nucleation sites on the heated surface, because the bubbles appear in the bulk of the liquid instead of at the liquid-solid interface. In addition, the smaller the contact angle, the sooner boiling begins.

  2. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for the mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
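    In the idealized case where the camera boresight does pass through the pan/tilt intersection, pointing reduces to a two-line closed form; the paper's contribution is the exact solution without these idealizations. A sketch of the ideal case only:

```python
import math

def pan_tilt_to_target(x, y, z):
    """Idealized closed-form pointing: the optical axis is assumed to pass
    through the pan/tilt rotation center, with the target at (x, y, z) in
    the masthead base frame (illustrative geometry, not the paper's model)."""
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt

pan, tilt = pan_tilt_to_target(2.0, 2.0, 1.0)
print(round(math.degrees(pan), 1), round(math.degrees(tilt), 1))  # 45.0 19.5
```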

  3. Evaluation of sequential images for photogrammetric point determination

    NASA Astrophysics Data System (ADS)

    Kowalczyk, M.

    2011-12-01

    Close-range photogrammetry encounters many problems in reconstructing the three-dimensional shape of objects. The relative orientation parameters of the photographs usually play the key role in solving this problem. Automating the process is difficult because of the complexity of the recorded scene and the configuration of camera positions, which usually makes automatic joining of the photos into one set impossible. The application of a camcorder is a solution widely proposed in the literature to support the creation of 3D models. Its main advantages are the large number of recorded images and camera positions: the exterior orientation changes only slightly between two neighboring frames. These features of a film sequence allow models to be created with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents the results of experiments determining the interior orientation parameters of several sets of frames depicting a three-dimensional test field. This section describes the calibration repeatability of film frames taken from a camcorder, which matters for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken to determine a group of check points, in order to verify the camera's suitability for measurement tasks. Finally, some results are presented of experiments comparing the determination of recorded object points in 3D space. In conventional digital photogrammetry with separate photos, the first levels of the image pyramids are connected using feature-based matching. This complicated process creates many contingencies that can produce false detections of image similarity.
    With a digital film camera, authors avoid this risky step and go straight to area-based matching, exploiting the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photos comes from the whole-image distance. This image-distance method can work with more than just the two dimensions of the translation vector: scale and angles are also used to improve the matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other, so the procedure searching for pairs of points works faster and more accurately because the analyzed areas can be reduced. Another proposed solution, based on an image created by accumulating differences between particular frames, gives rougher results but works much faster than standard matching.
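    Area-based matching of the kind described can be sketched as a brute-force normalized cross-correlation search. This is a generic implementation, not the authors' code; it finds where a template patch from one frame lies in the next frame:

```python
import numpy as np

def ncc_match(frame, template):
    """Return the top-left offset of the best normalized-cross-correlation
    match of `template` in `frame`. Brute force, which is acceptable for
    the small search areas that remain after frames are roughly aligned."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_score = (0, 0), -2.0
    H, W = frame.shape
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            w = frame[i:i + th, j:j + tw]
            wz = w - w.mean()
            d = np.linalg.norm(wz) * tn
            score = float((wz * t).sum() / d) if d > 0 else 0.0
            if score > best_score:
                best_score, best = score, (i, j)
    return best

rng = np.random.default_rng(1)
frame = rng.random((40, 40))
patch = frame[12:20, 25:33]          # template cut from the frame itself
print(ncc_match(frame, patch))       # (12, 25)
```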

  4. Practical use of high-speed cameras for research and development within the automotive industry: yesterday and today

    NASA Astrophysics Data System (ADS)

    Steinmetz, Klaus

    1995-05-01

    Within the automotive industry, especially in the development and improvement of safety systems, many highly accelerated motions occur that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam', and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high-speed cameras.

  5. Development of a Compact & Easy-to-Use 3-D Camera for High Speed Turbulent Flow Fields

    DTIC Science & Technology

    2013-12-05

    resolved. Also, in the case of a single camera system, the use of an aperture greatly reduces the amount of collected light. The combination of these...a study on wall-bounded turbulence [Sheng_2006]. Nevertheless, these techniques are limited to small measurement volumes, while maintaining a high...It has also been adapted to kHz rates using high-speed cameras for aeroacoustic studies (see Violato et al. [17, 18]. Tomo-PIV, however, has some

  6. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    NASA Astrophysics Data System (ADS)

    Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui

    2016-11-01

    A multilayer stacked X-ray camera concept is described. This type of technology is called `4H' X-ray cameras, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on a modification of the ePix family of ASICs is discussed. Applications in X-ray free-electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.

  7. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  8. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  9. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
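    The synchronized illumination described above interleaves wavelengths across frames, so each spectral channel is a strided slice of the acquired stream. A minimal sketch, assuming a simple round-robin LED ordering (the actual sequence is user-programmable):

```python
import numpy as np

def demultiplex(frames, n_channels):
    """Split an interleaved multispectral stream into per-wavelength stacks,
    assuming LED k was lit on frame i when i % n_channels == k."""
    return [frames[k::n_channels] for k in range(n_channels)]

frames = np.arange(12).reshape(12, 1, 1).astype(float)  # 12 dummy frames
channels = demultiplex(frames, 3)
print([c.shape[0] for c in channels])  # [4, 4, 4]
```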

  10. Motor vehicle injuries in Qatar: time trends in a rapidly developing Middle Eastern nation.

    PubMed

    Mamtani, Ravinder; Al-Thani, Mohammed H; Al-Thani, Al-Anoud Mohammed; Sheikh, Javaid I; Lowenfels, Albert B

    2012-04-01

    Despite their wealth and modern road systems, traffic injury rates in Middle Eastern countries are generally higher than those in Western countries. The authors examined traffic injuries in Qatar during 2000-2010, a period of rapid population growth, focusing on the impact of speed control cameras installed in 2007 on overall injury rates and mortality. During the period 2000-2006, prior to camera installation, the mean (SD) vehicular injury death rate per 100,000 was 19.9±4.1. From 2007 to 2010, the mean (SD) vehicular death rates were significantly lower: 14.7±1.5 (p=0.028). Non-fatal severe injury rates also declined, but mild injury rates increased, perhaps because of increased traffic congestion and improved notification. It is possible that speed cameras decreased speeding enough to affect the death rate, without affecting overall injury rates. These data suggest that in a rapidly growing Middle Eastern country, photo enforcement (speed) cameras can be an important component of traffic control, but other measures will be required for maximum impact.

  11. Motor vehicle injuries in Qatar: time trends in a rapidly developing Middle Eastern nation

    PubMed Central

    Al-Thani, Mohammed H; Al-Thani, Al-Anoud Mohammed; Sheikh, Javaid I; Lowenfels, Albert B

    2011-01-01

    Despite their wealth and modern road systems, traffic injury rates in Middle Eastern countries are generally higher than those in Western countries. The authors examined traffic injuries in Qatar during 2000–2010, a period of rapid population growth, focusing on the impact of speed control cameras installed in 2007 on overall injury rates and mortality. During the period 2000–2006, prior to camera installation, the mean (SD) vehicular injury death rate per 100 000 was 19.9±4.1. From 2007 to 2010, the mean (SD) vehicular death rates were significantly lower: 14.7±1.5 (p=0.028). Non-fatal severe injury rates also declined, but mild injury rates increased, perhaps because of increased traffic congestion and improved notification. It is possible that speed cameras decreased speeding enough to affect the death rate, without affecting overall injury rates. These data suggest that in a rapidly growing Middle Eastern country, photo enforcement (speed) cameras can be an important component of traffic control, but other measures will be required for maximum impact. PMID:21994881

  12. Behavior of Compact Toroid Injected into C-2U Confinement Vessel

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tadafumi; Roche, T.; Allrey, I.; Sekiguchi, J.; Asai, T.; Conroy, M.; Gota, H.; Granstedt, E.; Hooper, C.; Kinley, J.; Valentine, T.; Waggoner, W.; Binderbauer, M.; Tajima, T.; the TAE Team

    2016-10-01

    The compact toroid (CT) injector system has been developed for particle refueling on the C-2U device. A CT is formed by a magnetized coaxial plasma gun (MCPG), and the typical ejected CT/plasmoid parameters are as follows: average velocity 100 km/s, average electron density 1.9 ×10¹⁵ cm⁻³, electron temperature 30-40 eV, mass 12 μg. To refuel particles into the FRC plasma, the CT must penetrate the transverse magnetic field that surrounds the FRC. The kinetic energy density of the CT should be higher than the energy density of the transverse magnetic field, i.e., ρv²/2 ≥ B²/2μ₀, where ρ, v, and B are the mass density, velocity, and surrounding magnetic field, respectively. Also, the penetrating CT's trajectory is deflected by the transverse magnetic field (Bz ≈ 1 kG). Thus, we have to estimate the CT's energy and track its trajectory inside the magnetic field, for which we adopted a fast-framing camera on C-2U with a framing rate of up to 1.25 MHz for 120 frames. By employing the camera we clearly captured the CT/plasmoid trajectory. Comparisons between the fast-framing camera and other diagnostics, as well as CT injection results on C-2U, will be presented.
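    Plugging the quoted parameters into the penetration condition shows the margin. The hydrogen ion mass used to convert the electron density into a mass density is an assumption of this sketch:

```python
import math

# Minimum CT speed to penetrate a transverse field, from the balance
# rho*v^2/2 >= B^2/(2*mu0), i.e. v_min = B / sqrt(mu0 * rho).
mu0 = 4e-7 * math.pi          # vacuum permeability, T*m/A
n_e = 1.9e21                  # m^-3 (the quoted 1.9e15 cm^-3)
m_H = 1.673e-27               # kg, hydrogen ion mass (assumed species)
rho = n_e * m_H               # CT mass density
B = 0.1                       # T (the quoted Bz of ~1 kG)

v_min = B / math.sqrt(mu0 * rho)
print(f"{v_min / 1e3:.0f} km/s")  # ~50 km/s, below the ~100 km/s CT speed
```

    With these numbers the quoted 100 km/s CT speed exceeds the threshold by roughly a factor of two, consistent with the observed penetration.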

  13. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E. V.; Nebeker, S.

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera and the test results revealed that the camera exceeded presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by Export Control regulations on the export of high technology items that can be used to develop military weapons. The IR camera was on the list that export controls required. The ISTC and Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  14. High-speed visualization of fuel spray impingement in the near-wall region using a DISI injector

    NASA Astrophysics Data System (ADS)

    Kawahara, N.; Kintaka, K.; Tomita, E.

    2017-02-01

    We used a multi-hole injector to spray isooctane under atmospheric conditions and observed droplet impingement behaviors. It is generally known that droplet impact regimes such as splashing, deposition, or bouncing are governed by the Weber number. However, owing to its complexity, little has been reported on microscopic visualization of poly-dispersed spray. During the spray impingement process, a large number of droplets approach, hit, then interact with the wall. It is therefore difficult to focus on a single droplet and observe the impingement process. We solved this difficulty using high-speed microscopic visualization. The spray/wall interaction processes were recorded by a high-speed camera (Shimadzu HPV-X2) with a long-distance microscope. We captured several impinging microscopic droplets. After optimizing the magnification and frame rate, the atomization behaviors, splashing and deposition, were recorded. Then, we processed the images obtained to determine droplet parameters such as the diameter, velocity, and impingement angle. Based on this information, the critical threshold between splashing and deposition was investigated in terms of the normal and parallel components of the Weber number with respect to the wall. The results suggested that, on a dry wall, we should set the normal critical Weber number to 300.
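    The splash criterion can be evaluated directly from the measured droplet parameters. The fluid properties and impact values below are illustrative assumptions, not data from the paper; only the threshold (normal Weber number ≈ 300 on a dry wall) comes from the abstract:

```python
import math

def weber_components(rho, d, speed, angle_deg, sigma):
    """Normal and parallel Weber numbers for a droplet hitting a wall at
    `angle_deg` above the surface; We = rho * u^2 * d / sigma."""
    a = math.radians(angle_deg)
    u_n, u_p = speed * math.sin(a), speed * math.cos(a)
    return rho * u_n**2 * d / sigma, rho * u_p**2 * d / sigma

# isooctane-like properties and a hypothetical impact event
rho, sigma = 690.0, 0.0187          # kg/m^3, N/m (assumed values)
We_n, We_p = weber_components(rho, 20e-6, 30.0, 60.0, sigma)
print(We_n > 300.0)                 # splashing expected on a dry wall
```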

  15. Full-Frame Reference for Test Photo of Moon

    NASA Image and Video Library

    2005-09-10

    This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.

  16. A semi-automated software tool to study treadmill locomotion in the rat: from experiment videos to statistical gait analysis.

    PubMed

    Gravel, P; Tremblay, M; Leblond, H; Rossignol, S; de Guise, J A

    2010-07-15

    A computer-aided method for the tracking of morphological markers in fluoroscopic images of a rat walking on a treadmill is presented and validated. The markers correspond to bone articulations in a hind leg and are used to define the hip, knee, ankle and metatarsophalangeal joints. The method allows a user to identify, using a computer mouse, about 20% of the marker positions in a video and interpolate their trajectories from frame-to-frame. This results in a seven-fold speed improvement in detecting markers. This also eliminates confusion problems due to legs crossing and blurred images. The video images are corrected for geometric distortions from the X-ray camera, wavelet denoised, to preserve the sharpness of minute bone structures, and contrast enhanced. From those images, the marker positions across video frames are extracted, corrected for rat "solid body" motions on the treadmill, and used to compute the positional and angular gait patterns. Robust Bootstrap estimates of those gait patterns and their prediction and confidence bands are finally generated. The gait patterns are invaluable tools to study the locomotion of healthy animals or the complex process of locomotion recovery in animals with injuries. The method could, in principle, be adapted to analyze the locomotion of other animals as long as a fluoroscopic imager and a treadmill are available. Copyright 2010 Elsevier B.V. All rights reserved.
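    The interpolation step, in which roughly 20% of the marker positions are labeled and the rest are filled in, can be sketched per coordinate. Linear interpolation is used here for brevity; the tool's actual interpolation scheme is not specified in this summary:

```python
import numpy as np

def fill_trajectory(labeled_frames, labeled_xy, n_frames):
    """Interpolate a marker trajectory from a sparse set of user-labeled
    frames (hypothetical helper illustrating the idea)."""
    frames = np.arange(n_frames)
    x = np.interp(frames, labeled_frames, [p[0] for p in labeled_xy])
    y = np.interp(frames, labeled_frames, [p[1] for p in labeled_xy])
    return np.column_stack([x, y])

traj = fill_trajectory([0, 5, 10], [(0, 0), (5, 2), (10, 0)], 11)
print(traj[2])  # position at an unlabeled frame
```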

  17. Manual stage acquisition and interactive display of digital slides in histopathology.

    PubMed

    Gherardi, Alessandro; Bevilacqua, Alessandro

    2014-07-01

    More powerful PC architectures, high-resolution cameras working at increasing frame rates, and more and more accurate motorized microscopes have boosted new applications in the field of biomedicine and medical imaging. In histopathology, the use of digital slides (DSs) imaging through dedicated hardware for digital pathology is increasing for several reasons: digital annotation of suspicious lesions, recorded clinical history, and telepathology as a collaborative environment. In this paper, we propose the first method known in the literature for real-time whole slide acquisition and displaying conceived for conventional nonautomated microscopes. Differently from DS scanner, our software enables biologists and histopathologists to build and view the DS in real time while inspecting the sample, as they are accustomed to. In addition, since our approach is compliant with existing common microscope positions, provided with camera and PC, this could contribute to disseminate the whole slide technology in the majority of small labs not endowed with DS hardware facilities. Experiments performed with different histologic specimens (referring to tumor tissues of different body parts as well as to tumor cells), acquired under different setup conditions and devices, prove the effectiveness of our approach both in terms of quality and speed performances.

  18. Computational imaging with a balanced detector.

    PubMed

    Soldevila, F; Clemente, P; Tajahuerce, E; Uribe-Patarroyo, N; Andrés, P; Lancis, J

    2016-06-29

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, several drawbacks remain that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.
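    The balanced, complementary-pattern measurement can be sketched with a fully sampled Hadamard basis. The toy 4×4 scene and the constant ambient term (modeling the parasite signal) are assumptions of this sketch; the point is that the difference measurement cancels the ambient contribution exactly:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 16                                  # 4x4 scene, fully sampled
H = hadamard(N)
img = np.arange(N, dtype=float)         # toy scene, flattened
ambient = 7.3                           # constant parasite signal

meas = []
for row in H:
    p = (row + 1.0) / 2.0               # binary DMD pattern in {0, 1}
    s_plus = p @ img + ambient          # pattern arm of the detector
    s_minus = (1.0 - p) @ img + ambient # complementary arm
    meas.append(s_plus - s_minus)       # = row @ img; ambient cancels

recon = (H.T @ np.array(meas)) / N      # inverse Hadamard transform
print(np.allclose(recon, img))          # True
```

    Replacing the full basis with a random subset of rows and a sparsity-promoting solver gives the compressive-sensing variant mentioned in the abstract.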

  19. Computational imaging with a balanced detector

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-06-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, several drawbacks remain that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.

  20. A portable high-definition electronic endoscope based on embedded system

    NASA Astrophysics Data System (ADS)

    Xu, Guang; Wang, Liqiang; Xu, Jin

    2012-11-01

    This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images with 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD videos at 30 frames per second, and a built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction, and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with a working distance of more than 3 meters. The whole endoscope system can be powered by a lithium battery, with the advantages of small size, low cost, and portability.
