Sample records for speed camera application

  1. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

    This volume presents papers grouped under the topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  2. The application of high-speed photography in z-pinch high-temperature plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei

    2007-01-01

    This invited paper discusses the application of high speed photography to z-pinch high temperature plasma diagnostics in recent years at the Northwest Institute of Nuclear Technology. The developments and applications of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera, and an ultraviolet-visible spectrometer are introduced.

  3. High-speed optical 3D sensing and its applications

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro

    2016-12-01

    This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The focus is on speeds from 100 to 1000 fps, well beyond normal camera frame rates, which are typically 30 fps. In particular, contactless, active, and real-time systems are introduced. Three example applications of this type of sensing technology are also introduced: surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.

  4. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
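
    The bandwidth split described above can be illustrated with back-of-the-envelope arithmetic. A minimal sketch, assuming a randomly-addressable sensor and a fixed pixel-rate budget; all numbers below are illustrative, not taken from the paper:

```python
# Hypothetical budget split for a randomly-addressable sensor: most of a
# fixed pixel-rate budget drives a small high-speed ROI, the rest a slow
# full-frame monitoring stream. All numbers are assumed for illustration.
PIXEL_RATE = 40e6            # pixels/s the data link can sustain (assumed)
FULL_W, FULL_H = 640, 480    # full sensor frame (assumed)
ROI_W, ROI_H = 64, 64        # high-speed region of interest (assumed)
FULL_FRAME_FPS = 10          # desired low-speed monitoring rate (assumed)

full_frame_cost = FULL_W * FULL_H * FULL_FRAME_FPS   # pixels/s for monitoring
roi_fps = (PIXEL_RATE - full_frame_cost) / (ROI_W * ROI_H)

print(f"full-frame stream: {full_frame_cost / 1e6:.3f} Mpx/s")
print(f"remaining budget runs the ROI at {roi_fps:.0f} fps")
```

    With these assumed figures, a modest full-frame stream leaves enough budget to run a 64 x 64 ROI at several thousand frames per second, which is the essence of the trade-off the camera exploits.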

  5. High-Speed Videography Instrumentation And Procedures

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1982-02-01

    High-speed videography has been an electronic analog of low-speed film cameras, with the advantages of instant replay and simplicity of operation. Recent advances have pushed frame rates into the realm of the rotating-prism camera. Some characteristics of videography systems are discussed in conjunction with applications in sports analysis and sports equipment testing.

  6. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip - according to the objects' movement - result in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed and key features of the camera are listed.
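
    The shift-and-accumulate idea behind digital TDI can be sketched in a toy 1D model: if the object advances one line per frame, the same scene line is seen by successive sensor rows in successive frames, and summing those samples multiplies the signal while read noise grows only as the square root of the stage count. A minimal NumPy sketch, not the authors' FPGA implementation; the scene, noise level, and stage count are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

n_stages = 8             # number of rows read and accumulated (TDI stages)
scene = rng.random(200)  # 1D scene, moving one line per frame past the sensor

# Row r of the frame captured at time t sees scene line t + r, plus read noise.
def read_frame(t, noise=0.05):
    rows = scene[t:t + n_stages]
    return rows + rng.normal(0.0, noise, size=rows.shape)

# Digital TDI: scene line s was imaged by row r during frame t = s - r.
# Summing those n_stages samples multiplies the signal by n_stages while
# the read noise only grows as sqrt(n_stages).
def tdi_line(s):
    return sum(read_frame(s - r)[r] for r in range(n_stages))

s = 50
print(tdi_line(s) / n_stages, scene[s])  # averaged TDI output tracks the scene
```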

  7. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  8. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a large number of 'digital cameras' aimed mainly at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs that offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  9. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend upon its capability to handle object surfaces with large reflectance variation, traded off against the required number of patterns to be projected. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera one or more times to capture single or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so that the system can project different types of patterns for different scan-speed applications. This enables the system to capture a high-quality 3D point cloud even for surfaces with large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated in such a way that the position and the number of triggers are automatically determined according to camera exposure settings. In other words, the projection frequency is adaptive to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it does not require any external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506

  10. Flow visualization by mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also serve fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of exploiting this development, and the widespread availability of these cameras, for velocity measurements in industrial or technical applications and in fluid dynamics education at high schools and universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality, and identify bottlenecks by comparing the results obtained with a mobile phone camera against data taken by a high-speed camera suited for scientific experiments.
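
    The core of such a simplistic PIV evaluation is cross-correlating an interrogation window between two frames: the shift that best re-aligns the particle pattern is the mean particle displacement. A minimal sketch on synthetic data; the displacement, window size, and search range are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic particle images: between the two frames a uniform flow moves
# the particle pattern by (3, 5) pixels.
dy_true, dx_true = 3, 5
scene = rng.random((96, 96))
a = scene[32:64, 32:64]                                          # frame A window
b = scene[32 - dy_true:64 - dy_true, 32 - dx_true:64 - dx_true]  # frame B window

# Direct cross-correlation over a search range: score each candidate shift
# by the mean product of the overlapping pixels.
def score(dy, dx):
    i0, i1 = max(0, -dy), 32 - max(0, dy)
    j0, j1 = max(0, -dx), 32 - max(0, dx)
    return np.mean(a[i0:i1, j0:j1] * b[i0 + dy:i1 + dy, j0 + dx:j1 + dx])

shifts = [(dy, dx) for dy in range(-8, 9) for dx in range(-8, 9)]
dy, dx = max(shifts, key=lambda s: score(*s))
print(dy, dx)  # -> 3 5, the imposed displacement
```

    Real PIV systems repeat this per window over the whole image, typically via FFTs and with sub-pixel peak fitting, but the matching principle is the same.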

  11. International Congress on High-Speed Photography and Photonics, 19th, Cambridge, England, Sept. 16-21, 1990, Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garfield, B.R.; Rendell, J.T.

    1991-01-01

    The present conference discusses the application of schlieren photography in industry, laser fiber-optic high speed photography, holographic visualization of hypervelocity explosions, sub-100-picosec X-ray grating cameras, flash soft X-radiography, a novel approach to synchroballistic photography, a programmable image converter framing camera, high speed readout CCDs, an ultrafast optomechanical camera, a femtosec streak tube, a modular streak camera for laser ranging, and human-movement analysis with real-time imaging. Also discussed are high-speed photography of high-resolution moire patterns, a 2D electron-bombarded CCD readout for picosec electrooptical data, laser-generated plasma X-ray diagnostics, 3D shape restoration with virtual grating phase detection, Cu vapor lasers for high speed photography, a two-frequency picosec laser with electrooptical feedback, the conversion of schlieren systems to high speed interferometers, laser-induced cavitation bubbles, stereo holographic cinematography, a gatable photonic detector, and laser generation of Stoneley waves at liquid-solid boundaries.

  12. A compact high-speed pnCCD camera for optical and x-ray applications

    NASA Astrophysics Data System (ADS)

    Ihle, Sebastian; Ordavo, Ivan; Bechteler, Alois; Hartmann, Robert; Holl, Peter; Liebel, Andreas; Meidinger, Norbert; Soltau, Heike; Strüder, Lothar; Weber, Udo

    2012-07-01

    We developed a camera with a 264 × 264 pixel pnCCD with 48 μm pixel size (thickness 450 μm) for X-ray and optical applications. It has a high quantum efficiency and can be operated at frame rates of up to 400 / 1000 Hz (noise ≈ 2.5 e- ENC / ≈ 4.0 e- ENC). High-speed astronomical observations can be performed at low light levels. Results of test measurements will be presented. The camera is well suited for ground-based preparation measurements for future X-ray missions. For single X-ray photons, the spatial position can be determined with significant sub-pixel resolution.

  13. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and with programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  14. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (i.e., piezoelectric sensors or accelerometers) or non-contact sensors (i.e., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer viable measurement capabilities, including higher spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera was proposed. The system setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. The cantilever beam, with an artificial crack, was excited and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.

  15. High speed movies of turbulence in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, J.L.; Zweben, S.J.; Bose, B.

    2004-10-01

    A high speed (250 kHz), 300 frame charge coupled device camera has been used to image turbulence in the Alcator C-Mod Tokamak. The camera system is described and some of its important characteristics are measured, including time response and uniformity over the field-of-view. The diagnostic has been used in two applications. One uses gas-puff imaging to illuminate the turbulence in the edge/scrape-off-layer region, where D{sub 2} gas puffs localize the emission in a plane perpendicular to the magnetic field when viewed by the camera system. The dynamics of the underlying turbulence around and outside the separatrix are detected in this manner. In a second diagnostic application, the light from an injected, ablating, high speed Li pellet is observed radially from the outer midplane, and fast poloidal motion of toroidal striations is seen in the Li{sup +} light well inside the separatrix.

  16. The impacts of speed cameras on road accidents: an application of propensity score matching methods.

    PubMed

    Li, Haojie; Graham, Daniel J; Majumdar, Arnab

    2013-11-01

    This paper aims to evaluate the impacts of speed limit enforcement cameras on reducing road accidents in the UK by accounting for both confounding factors and the selection of proper reference groups. The propensity score matching (PSM) method is employed to do this. A naïve before-and-after approach and the empirical Bayes (EB) method are compared with the PSM method. A total of 771 sites and 4787 sites for the treatment and the potential reference groups, respectively, are observed for a period of 9 years in England. Both the PSM and the EB methods show similar results: there are significant reductions in the number of accidents of all severities at speed camera sites. It is suggested that the propensity score can be used as the criterion for selecting the reference group in before-after control studies. Speed cameras were found to be most effective in reducing accidents up to 200 meters from camera sites, and no evidence of accident migration was found. Copyright © 2013 Elsevier Ltd. All rights reserved.
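
    The matching idea can be shown on a toy example: a confounder drives both treatment assignment (camera installation) and the outcome, so a naïve comparison is biased, while matching treated units to controls with similar propensity scores recovers the treatment effect. A minimal sketch on synthetic data, not the study's; the confounder, effect size, and logistic model are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sites: confounder x (e.g. baseline risk) drives both camera
# installation (treatment t) and the outcome y. True treatment effect = -1.
n = 2000
x = rng.normal(0, 1, n)
t = rng.random(n) < 1 / (1 + np.exp(-(x - 1)))    # riskier sites more often treated
y = 5 + 2 * x - 1.0 * t + rng.normal(0, 1, n)

# 1. Fit a logistic propensity model P(t | x) by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    g = p - t
    w -= 0.05 * np.mean(g * x)
    b -= 0.05 * np.mean(g)
ps = 1 / (1 + np.exp(-(w * x + b)))

# 2. Match each treated site to the untreated site with the closest
#    propensity score (nearest neighbour, with replacement).
treated = np.where(t)[0]
control = np.where(~t)[0]
matches = control[np.argmin(np.abs(ps[treated][:, None] - ps[control][None, :]), axis=1)]

# 3. Mean treated-minus-matched-control outcome estimates the effect on the treated.
att = np.mean(y[treated] - y[matches])
print(f"estimated effect: {att:.2f} (true -1.0)")
```

    A naïve difference of group means here would be badly biased upward, since treated sites have systematically higher x; matching removes most of that bias.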

  17. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    NASA Technical Reports Server (NTRS)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight applications do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  18. Network-linked long-time recording high-speed video camera system

    NASA Astrophysics Data System (ADS)

    Kimura, Seiji; Tsuji, Masataka

    2001-04-01

    This paper describes a network-oriented, long-recording-time high-speed digital video camera system that utilizes an HDD (hard disk drive) as a recording medium. Semiconductor memories (DRAM, etc.) are the most common image data recording media in existing high-speed digital video cameras. They are extensively used because of their advantage of high-speed writing and reading of picture data. The drawback is that their recording time is limited to only several seconds because the data amount is very large. A recording time of several seconds is sufficient for many applications. However, a much longer recording time is required in some applications where an exact prediction of trigger timing is hard to make. In recent years, the recording density of the HDD has been dramatically improved, which has attracted more attention to its value as a long-recording-time medium. We conceived the idea that we would be able to build a compact system capable of long-time recording if the HDD could be used as a memory unit for high-speed digital image recording. However, the data rate of such a system, capable of recording 640 x 480 pixel resolution pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/s, far beyond the writing speed of the commonly used HDD. We therefore developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
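
    The quoted data rate can be checked directly; the HDD write speed below is an assumed period-typical value, not a figure from the paper:

```python
# Restating the abstract's data rate: 640 x 480 pixels, 8-bit, 500 fps.
width, height, fps, bytes_per_px = 640, 480, 500, 1
rate = width * height * fps * bytes_per_px   # bytes/s
print(rate / 1e6)                            # -> 153.6 Mbyte/s

# A drive sustaining ~20 Mbyte/s (assumed) would need roughly this
# compression ratio from the dedicated image compression system:
hdd_rate = 20e6
print(rate / hdd_rate)
```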

  19. Applications of a shadow camera system for energy meteorology

    NASA Astrophysics Data System (ADS)

    Kuhn, Pascal; Wilbert, Stefan; Prahl, Christoph; Garsche, Dominik; Schüler, David; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Blanc, Philippe; Pitz-Paal, Robert

    2018-02-01

    Downward-facing shadow cameras might play a major role in future energy meteorology. Shadow cameras directly image shadows on the ground from an elevated position. They are used to validate other systems (e.g. all-sky imager based nowcasting systems, cloud speed sensors or satellite forecasts) and can potentially provide short term forecasts for solar power plants. Such forecasts are needed for electricity grids with high penetrations of renewable energy and can help to optimize plant operations. In this publication, two key applications of shadow cameras are briefly presented.

  20. Feasibility study of a "4H" X-ray camera based on GaAs:Cr sensor

    NASA Astrophysics Data System (ADS)

    Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui

    2016-11-01

    A multilayer stacked X-ray camera concept is described. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.

  1. Ultrafast Imaging using Spectral Resonance Modulation

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2016-04-01

    CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
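
    The compressive-sensing idea behind such a single-pixel camera can be shown in miniature: each modulator pattern produces one detector reading, and a sparse scene can be recovered from far fewer readings than pixels. A toy sketch, assuming random Gaussian patterns and orthogonal matching pursuit as the recovery algorithm; the paper's SRM hardware and actual reconstruction method may differ, and all sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-pixel compressive camera: each measurement applies one random
# modulator pattern (a row of Phi) to a k-sparse scene and records a single
# detector value.
n, m, k = 64, 32, 3
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(3, 4, size=k)
Phi = rng.normal(0, 1, (m, n)) / np.sqrt(m)   # random sensing patterns
y = Phi @ x                                   # only m = n/2 samples taken

# Orthogonal matching pursuit: repeatedly pick the pattern most correlated
# with the residual, then least-squares re-fit on the chosen support.
support, residual = [], y.copy()
for _ in range(2 * k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef
    if np.linalg.norm(residual) < 1e-10:
        break

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x))  # near zero: scene recovered from m < n samples
```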

  2. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  3. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High speed imaging systems have been used in a large range of fields in science and engineering. Although high speed camera systems have been improved to high performance, most of their applications only obtain high speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas, and molten materials. Recent digital high speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels, with 256 (8-bit) intensity levels for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels by this method.
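
    A common starting point for this kind of inter-sensor displacement adjustment is phase correlation between the two sensors' images. The integer-pixel sketch below (synthetic data, assumed offsets) omits the sub-pixel refinement the authors needed to reach 0.2-pixel alignment:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two sensor images of the same scene, offset by a known shift, standing in
# for the misaligned outputs of two of the three intensified sensors.
scene = rng.random((128, 128))
shift = (2, 7)
a = scene[20:84, 20:84]
b = scene[20 + shift[0]:84 + shift[0], 20 + shift[1]:84 + shift[1]]

# Phase correlation: normalising the cross-power spectrum turns the shift
# into a sharp peak at the displacement.
A, B = np.fft.fft2(a), np.fft.fft2(b)
R = A * np.conj(B)
R /= np.abs(R) + 1e-12
peak = np.unravel_index(np.argmax(np.fft.ifft2(R).real), a.shape)
print(peak)  # location of the correlation peak = inter-sensor displacement
```

    Sub-pixel accuracy is typically obtained afterwards by fitting a parabola or centroid around the correlation peak.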

  4. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.

  5. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, where an object is in partial or full view in one camera while fully visible in another. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the allowable complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  6. Overt vs. covert speed cameras in combination with delayed vs. immediate feedback to the offender.

    PubMed

    Marciano, Hadas; Setter, Pe'erly; Norman, Joel

    2015-06-01

    Speeding is a major problem in road safety because it increases both the probability of accidents and the severity of injuries if an accident occurs. Speed cameras are one of the most common speed enforcement tools. Most of the speed cameras around the world are overt, but there is evidence that this can cause a "kangaroo effect" in driving patterns. One suggested alternative to prevent this kangaroo effect is the use of covert cameras. Another issue relevant to the effect of enforcement countermeasures on speeding is the timing of the fine. There is general agreement on the importance of the immediacy of punishment; however, in the context of speed limit enforcement, implementing such immediate punishment is difficult. An immediate feedback that mediates the delay between the speed violation and getting a ticket is one possible solution. This study examines combinations of concealment and timing of the fine in operating speed cameras in order to evaluate the most effective combination for enforcing speed limits. Using a driving simulator, the driving performance of the following four experimental groups was tested: (1) overt cameras with delayed feedback, (2) overt cameras with immediate feedback, (3) covert cameras with delayed feedback, and (4) covert cameras with immediate feedback. Each of the 58 participants drove in the same scenario on three different days. The results showed that both median speed and speed variance were higher with overt than with covert cameras. Moreover, implementing a covert camera system along with immediate feedback was more conducive to drivers maintaining steady speeds at the permitted levels from the very beginning. Finally, both 'overt camera' groups exhibited a kangaroo effect throughout the entire experiment. It can be concluded that an implementation strategy consisting of covert speed cameras combined with immediate feedback to the offender is potentially an optimal way to motivate drivers to maintain speeds at the speed limit. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Low Noise Camera for Suborbital Science Applications

    NASA Technical Reports Server (NTRS)

    Hyde, David; Robertson, Bryan; Holloway, Todd

    2015-01-01

    Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment, as they are difficult to ruggedize and repackage into instruments. COTS implementation may also be unsuitable because mission science objectives are tied to specific measurement requirements that often demand performance beyond what the commercial market requires. Custom camera development for each application is cost prohibitive for the International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operating temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to midrange payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.

  8. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    NASA Technical Reports Server (NTRS)

    Gradl, Paul

    2016-01-01

    Paired images were collected using a projected pattern instead of the standard painted speckle pattern on the subject's abdomen. The high-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh rate of the projector. To ensure that the kick and movement data were real, a background test with no baby movement was conducted (to correct for breathing and body motion).

  9. New Modular Camera No Ordinary Joe

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Although dubbed 'Little Joe' for its small-format characteristics, a new wavefront sensor camera has proved that it is far from coming up short when paired with high-speed, low-noise applications. SciMeasure Analytical Systems, Inc., a provider of cameras and imaging accessories for use in biomedical research and industrial inspection and quality control, is the eye behind Little Joe's shutter, manufacturing and selling the modular, multi-purpose camera worldwide to advance fields such as astronomy, neurobiology, and cardiology.

  10. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on a modification of the ePix family of ASICs is discussed. Applications in X-ray free-electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.

  11. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    DOE PAGES

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya; ...

    2016-11-29

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a `4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on a modification of the ePix family of ASICs is discussed. Applications in X-ray free-electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.

  12. Recent Developments In High Speed Lens Design At The NPRL

    NASA Astrophysics Data System (ADS)

    Mcdowell, M. W.; Klee, H. W.

    1987-09-01

    Although the lens provides the link between the high speed camera and the outside world, there has over the years been little evidence of co-operation between the optical design and high speed photography communities. It is still only too common for a manufacturer to develop a camera of improved performance and resolution and then to combine it with a standard camera lens. These lenses were often designed for a completely different recording medium and, more often than not, their use results in avoidable degradation of the overall system performance. There is a tendency to assume that a specialized lens would be too expensive and that pushing the aperture automatically implies a more complex optical system. In the present paper some recent South African developments in the design of large aperture lenses are described. The application of a new design principle, based on Bernhard Schmidt's work earlier this century, shows that ultra-fast lenses need not be overly complex and that a basic four-element lens configuration can be adapted to a wide variety of applications.

  13. International Congress on High Speed Photography and Photonics, 17th, Pretoria, Republic of South Africa, Sept. 1-5, 1986, Proceedings. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    McDowell, M. W.; Hollingworth, D.

    1986-01-01

    The present conference discusses topics in mining applications of high speed photography, ballistic, shock wave and detonation studies employing high speed photography, laser and X-ray diagnostics, biomechanical photography, millisec-microsec-nanosec-picosec-femtosec photographic methods, holographic, schlieren, and interferometric techniques, and videography. Attention is given to such issues as the pulse-shaping of ultrashort optical pulses, the performance of soft X-ray streak cameras, multiple-frame image tube operation, moire-enlargement motion-raster photography, two-dimensional imaging with tomographic techniques, photochron TV streak cameras, and streak techniques in detonics.

  14. Effects of automated speed enforcement in Montgomery County, Maryland, on vehicle speeds, public opinion, and crashes.

    PubMed

    Hu, Wen; McCartt, Anne T

    2016-09-01

    In May 2007, Montgomery County, Maryland, implemented an automated speed enforcement program, with cameras allowed on residential streets with speed limits of 35 mph or lower and in school zones. In 2009, the state speed camera law increased the enforcement threshold from 11 to 12 mph over the speed limit and restricted school zone enforcement hours. In 2012, the county began using a corridor approach, in which cameras were periodically moved along the length of a roadway segment. The long-term effects of the speed camera program on travel speeds, public attitudes, and crashes were evaluated. Changes in travel speeds at camera sites from 6 months before the program began to 7½ years after were compared with changes in speeds at control sites in the nearby Virginia counties of Fairfax and Arlington. A telephone survey of Montgomery County drivers was conducted in Fall 2014 to examine attitudes and experiences related to automated speed enforcement. Using data on crashes during 2004-2013, logistic regression models examined the program's effects on the likelihood that a crash involved an incapacitating or fatal injury on camera-eligible roads and on potential spillover roads in Montgomery County, using crashes in Fairfax County on similar roads as controls. About 7½ years after the program began, speed cameras were associated with a 10% reduction in mean speeds and a 62% reduction in the likelihood that a vehicle was traveling more than 10 mph above the speed limit at camera sites. When interviewed in Fall 2014, 95% of drivers were aware of the camera program, 62% favored it, and most had received a camera ticket or knew someone else who had. The overall effect of the camera program in its modified form, including both the law change and the corridor approach, was a 39% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury. 
Speed cameras alone were associated with a 19% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury, the law change was associated with a nonsignificant 8% increase, and the corridor approach provided an additional 30% reduction over and above the cameras. This study adds to the evidence that speed cameras can reduce speeding, which can lead to reductions in speeding-related crashes and crashes involving serious injuries or fatalities.

  15. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing steps, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
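
    The winner-take-all fusion idea from this abstract can be sketched in a few lines. This is a simplified NumPy illustration, not the paper's GPU implementation: it assumes two matching-cost volumes (one per stereo pair of the trinocular rig) and fuses them by summing costs before the per-pixel winner-take-all decision, which is one possible fusion rule:

    ```python
    import numpy as np

    def wta_disparity(cost_volume):
        """Winner-take-all: pick the lowest-cost disparity per pixel.
        cost_volume has shape (H, W, D): one matching cost per disparity."""
        return np.argmin(cost_volume, axis=2)

    def trinocular_wta(cost_lc, cost_cr):
        """Fuse left-center and center-right cost volumes by summing the
        costs before the winner-take-all decision."""
        return np.argmin(cost_lc + cost_cr, axis=2)
    ```

    Summing costs lets an ambiguous match in one pair be resolved by the other pair, which is the main accuracy benefit claimed for trinocular over binocular matching.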

  16. Digital holographic interferometry for characterizing deformable mirrors in aero-optics

    NASA Astrophysics Data System (ADS)

    Trolinger, James D.; Hess, Cecil F.; Razavi, Payam; Furlong, Cosme

    2016-08-01

    Measuring and understanding the transient behavior of a surface with high spatial and temporal resolution are required in many areas of science. This paper describes the development and application of a high-speed, high-dynamic range, digital holographic interferometer for high-speed surface contouring with fractional wavelength precision and high-spatial resolution. The specific application under investigation here is to characterize deformable mirrors (DM) employed in aero-optics. The developed instrument was shown capable of contouring a deformable mirror with extremely high-resolution at frequencies exceeding 40 kHz. We demonstrated two different procedures for characterizing the mechanical response of a surface to a wide variety of input forces, one that employs a high-speed digital camera and a second that employs a low-speed, low-cost digital camera. The latter is achieved by cycling the DM actuators with a step input, producing a transient that typically lasts up to a millisecond before reaching equilibrium. Recordings are made at increasing times after the DM initiation from zero to equilibrium to analyze the transient. Because the wave functions are stored and reconstructable, they can be compared with each other to produce contours including absolute, difference, and velocity. High-speed digital cameras recorded the wave functions during a single transient at rates exceeding 40 kHz. We concluded that either method is fully capable of characterizing a typical DM to the extent required by aero-optical engineers.
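
    The contouring step described in this abstract rests on converting a phase difference between two reconstructed wavefronts into a surface displacement. A minimal sketch, assuming a double-pass reflection geometry (so a 4*pi phase change corresponds to one wavelength of surface motion); this is the standard relation, not code from the paper:

    ```python
    import numpy as np

    def height_change(phase1, phase2, wavelength):
        """Surface displacement map from the phase difference of two
        reconstructed wavefronts (double-pass reflection assumed)."""
        dphi = np.angle(np.exp(1j * (phase2 - phase1)))  # wrap to (-pi, pi]
        return wavelength * dphi / (4 * np.pi)
    ```

    Because the stored wave functions can be compared pairwise, the same relation yields absolute, difference, and velocity contours by choosing which pair of recordings to subtract.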

  17. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    NASA Astrophysics Data System (ADS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produce a huge amount of data as a result of the number of frames per second. These data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to a personal computer.

  18. Compact opto-electronic engine for high-speed compressive sensing

    NASA Astrophysics Data System (ADS)

    Tidman, James; Weston, Tyler; Hewitt, Donna; Herman, Matthew A.; McMackin, Lenore

    2013-09-01

    The measurement efficiency of compressive sensing (CS) enables the computational reconstruction of images from far fewer measurements than is usually considered necessary by the Nyquist-Shannon sampling theorem. A vast literature on CS mathematics and applications has developed since its theoretical principles were established about a decade ago. Applications range from quantum information to optical microscopy to seismic and hyperspectral imaging. For shortwave infrared imaging, InView has developed cameras based on the CS single-pixel camera architecture. This architecture comprises an objective lens that images the scene onto a Texas Instruments DLP® Digital Micromirror Device (DMD), which, using its individually controllable mirrors, modulates the image with a selected basis set. The intensity of the modulated image is then recorded by a single detector. While the design of a CS camera is conceptually straightforward, its commercial implementation requires significant development effort in optics, electronics, hardware, and software, particularly if high efficiency and high-speed operation are required. In this paper, we describe the development of a high-speed CS engine as implemented in a lab-ready workstation. In this engine, configurable measurement patterns are loaded into the DMD at speeds up to 31.5 kHz. The engine supports custom reconstruction algorithms that can be quickly implemented. Our work includes the optical path design, field-programmable gate arrays for DMD pattern generation, and circuit boards for front-end data acquisition, ADC, and system control, all packaged in a compact workstation.
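
    The single-pixel measurement model behind this abstract is y = A x, where each row of A is one DMD mirror pattern and each entry of y is one detector reading; reconstruction then solves for a sparse x. As a generic illustration (not InView's proprietary algorithm), iterative soft-thresholding (ISTA) is one standard sparse solver for this model:

    ```python
    import numpy as np

    def ista(A, y, lam=0.02, iters=300):
        """Iterative soft-thresholding: recover a sparse x from y = A @ x.
        lam is the l1 penalty weight; 1/L is the gradient step size."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + A.T @ (y - A @ x) / L      # gradient step on the data term
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
        return x
    ```

    In a real single-pixel camera, x would be the coefficients of the image in a sparsifying basis (e.g. wavelets) rather than raw pixels.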

  19. Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.

    PubMed

    Song, Kai-Tai; Tai, Jen-Chao

    2006-10-01

    Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
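
    The first automatic step of the calibration described in this abstract, finding where the parallel lane markings meet in the image, can be sketched with homogeneous coordinates. Recovering the focal length, tilt angle, and pan angle from this vanishing point and the lane width follows closed-form relations in the paper and is omitted here:

    ```python
    import numpy as np

    def line_through(p, q):
        """Homogeneous line through two image points (x, y)."""
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

    def vanishing_point(line1, line2):
        """Intersection of the projections of two parallel lane markings."""
        v = np.cross(line1, line2)
        return v[:2] / v[2]
    ```

    Because parallel world lines project to image lines meeting at a single vanishing point, no manually selected calibration features are needed, which is the key convenience the paper claims.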

  20. Determination of feature generation methods for PTZ camera object tracking

    NASA Astrophysics Data System (ADS)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000), and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
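
    Of the methods compared in this abstract, background subtraction is the simplest to sketch. The following is a single-Gaussian running background model, a deliberately simplified stand-in for the Mixture-of-Gaussians (MoG) method the paper evaluates (MoG keeps several Gaussians per pixel; the thresholds and learning rate here are illustrative):

    ```python
    import numpy as np

    def update_background(mean, var, frame, lr=0.05, k=2.5):
        """One update of a per-pixel single-Gaussian background model.
        Returns the adapted mean/variance and a foreground mask."""
        fg = np.abs(frame - mean) > k * np.sqrt(var)   # classify before adapting
        mean = (1 - lr) * mean + lr * frame            # adapt the background mean
        var = (1 - lr) * var + lr * (frame - mean) ** 2
        return mean, np.maximum(var, 1e-6), fg
    ```

    For a moving PTZ camera, as the paper notes in its second test, the background model must additionally compensate for camera motion before this per-pixel update applies.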

  1. Embedded processor extensions for image processing

    NASA Astrophysics Data System (ADS)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  2. Cameras for semiconductor process control

    NASA Technical Reports Server (NTRS)

    Porter, W. A.; Parker, D. L.

    1977-01-01

    The application of X-ray topography to semiconductor process control is described, considering the novel features of the high speed camera and the difficulties associated with this technique. The most significant results on the effects of material defects on device performance are presented, including results obtained using wafers processed entirely within this institute. Defects were identified using the X-ray camera and correlations made with probe data. Temperature-dependent effects of material defects are also included. Recent applications and improvements of X-ray topographs of silicon-on-sapphire and gallium arsenide are presented with a description of a real-time TV system prototype and of the most recent vacuum chuck design. We also discuss our efforts to promote the use of the camera by various semiconductor manufacturers.

  3. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of the vision-based technique, which uses digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and introduces no added mass to the structure. In this study, a high-speed camera system is developed to perform the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without any pre-designed target panel having to be installed on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish vibration measurement of large-scale structures.
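
    The core of the Lucas-Kanade tracking named in this abstract is a Gauss-Newton step on the linearized brightness-constancy equations. The sketch below is the pure-translation case only; the paper's modified inverse compositional algorithm generalizes this (pre-computing template-side gradients to cut per-frame cost), which is not reproduced here:

    ```python
    import numpy as np

    def lk_shift(template, window):
        """One Gauss-Newton step of pure-translation Lucas-Kanade alignment:
        solve the linearized brightness-constancy equations for (dx, dy)."""
        win = window.astype(float)
        gy, gx = np.gradient(win)                       # gradients along rows, cols
        err = (template.astype(float) - win).ravel()    # intensity residual
        A = np.stack([gx.ravel(), gy.ravel()], axis=1)  # Jacobian for translation
        dx, dy = np.linalg.lstsq(A, err, rcond=None)[0]
        return dx, dy
    ```

    In a full tracker this step is iterated, warping the window by the current estimate each time, until (dx, dy) converges.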

  4. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high-accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets were acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1, and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.
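
    The rolling shutter effect exploited in this abstract comes from each image row being read out slightly later than the previous one, so a moving camera displaces features on later rows. A first-order sketch of this timing model (not Pix4D's implementation; parameter names are illustrative):

    ```python
    def row_capture_time(t_frame_start, row, n_rows, t_readout):
        """Rolling shutter: capture time of a given row, assuming rows are
        read out at a constant rate over the frame's readout interval."""
        return t_frame_start + t_readout * row / n_rows

    def skew_from_speed(speed_px_per_s, row, n_rows, t_readout):
        """Apparent displacement (in pixels) of a feature on a given row for
        a camera translating at constant image-space speed."""
        return speed_px_per_s * t_readout * row / n_rows
    ```

    Inverting the second relation is what lets the paper estimate drone speed from the measured skew alone.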

  5. High speed photography, videography, and photonics V; Proceedings of the Meeting, San Diego, CA, Aug. 17-19, 1987

    NASA Technical Reports Server (NTRS)

    Johnson, Howard C. (Editor)

    1988-01-01

    Recent advances in high-speed optical and electrooptic devices are discussed in reviews and reports. Topics examined include data quantification and related technologies, high-speed photographic applications and instruments, flash and cine radiography, and novel ultrafast methods. Also considered are optical streak technology, high-speed videographic and photographic equipment, and X-ray streak cameras. Extensive diagrams, drawings, graphs, sample images, and tables of numerical data are provided.

  6. High speed photography and photonics applications: An underutilized technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paisley, D.L.

    1996-10-01

    Snapshot: Paisley describes the development of high-speed photography, including the role of streak cameras, fiber optics, and lasers. Progress in this field has created a powerful tool for viewing such ultrafast processes as hypersonic events and ballistics. © 1996 Optical Society of America.

  7. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurement using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
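
    One plausible form of the color crosstalk correction mentioned in this abstract is linear unmixing with a 2x2 mixing matrix: each recorded channel contains a small leakage from the other optical path, and inverting the matrix recovers the two views. The matrix values below are illustrative; the paper's method would estimate the actual crosstalk coefficients:

    ```python
    import numpy as np

    # Hypothetical crosstalk matrix: recorded [R; B] = M @ [view1; view2].
    M = np.array([[0.95, 0.08],
                  [0.06, 0.92]])
    M_inv = np.linalg.inv(M)

    def correct_crosstalk(red, blue):
        """Recover the two optical-path images from the recorded R/B channels."""
        stacked = np.stack([red.ravel(), blue.ravel()])
        view1, view2 = M_inv @ stacked
        return view1.reshape(red.shape), view2.reshape(blue.shape)
    ```

    The two unmixed sub-images then feed a regular stereo-DIC pipeline exactly as two physical camera images would.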

  8. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    NASA Astrophysics Data System (ADS)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields. However, the technological level of measuring devices has not been sufficient for some applications. In recent years, good-quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. This development in measuring technology has opened infrared thermography to new fields and to a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for the diagnostics of bituminous road pavements. A measuring vehicle equipped with a thermal camera, a digital camera, and a GPS sensor was designed for pavement diagnostics. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.

  9. High speed photography, videography, and photonics IV; Proceedings of the Meeting, San Diego, CA, Aug. 19, 20, 1986

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor)

    1986-01-01

    Various papers on high-speed photography, videography, and photonics are presented. The general topics addressed include: photooptical and video instrumentation, streak camera data acquisition systems, photooptical instrumentation in wind tunnels, applications of holography and interferometry in wind tunnel research programs, and data analysis for photooptical and video instrumentation.

  10. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  11. A computational approach to real-time image processing for serial time-encoded amplified microscopy

    NASA Astrophysics Data System (ADS)

    Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi

    2016-03-01

    High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to capture images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system that includes a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera; the direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We employed a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), serving as virtual biological cells, flowing through a microfluidic channel.
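
    The image-reconstruction step sketched in this abstract amounts to reslicing the serial ADC waveform into 2D frames and flagging frames that deviate from an empty-channel background. A minimal NumPy sketch of that logic (the FPGA performs this in hardware; the function names and threshold are illustrative):

    ```python
    import numpy as np

    def reconstruct_frames(waveform, samples_per_line, lines_per_frame):
        """Reshape the serial STEAM waveform into a stack of 2D grayscale frames."""
        spf = samples_per_line * lines_per_frame
        n_frames = len(waveform) // spf
        return waveform[:n_frames * spf].reshape(n_frames, lines_per_frame,
                                                 samples_per_line)

    def detect_object(frame, background, thresh=10):
        """Flag a frame whose deviation from the background exceeds a threshold,
        i.e. a candidate particle passing through the microfluidic channel."""
        return np.abs(frame.astype(int) - background.astype(int)).max() > thresh
    ```

    Only frames flagged by the cheap detector need the more expensive GPU-side identification, which is the division of labor the prototype exploits.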

  12. The Interaction between Speed Camera Enforcement and Speed-Related Mass Media Publicity in Victoria, Australia

    PubMed Central

    Cameron, M. H.; Newstead, S. V.; Diamantopoulou, K.; Oxley, P.

    2003-01-01

    The objective was to measure the presence of any interaction between the effect of mobile covert speed camera enforcement and the effect of intensive mass media road safety publicity with speed-related themes. During 1999, the Victoria Police varied the levels of speed camera activity substantially in four Melbourne police districts according to a systematic plan. Camera hours were increased or reduced by 50% or 100% in respective districts for a month at a time, during months when speed-related publicity was present and during months when it was absent. Monthly frequencies of casualty crashes, and their severe injury outcome, in each district during 1996–2000 were analysed to test the effects of the enforcement, publicity and their interaction. Reductions in crash frequency were associated monotonically with increasing levels of speed camera ticketing, and there was a statistically significant 41% reduction in fatal crash outcome associated with very high camera activity. High publicity awareness was associated with 12% reduction in crash frequency. The interaction between the enforcement and publicity was not statistically significant. PMID:12941230

  13. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  14. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

    High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are currently replacing conventional cameras to a certain extent for static experiments. Recently, there has been a lot of interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in solid as well as fluid impact problems are presented.

  15. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed, details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  16. High-Speed Schlieren Movies of Decelerators at Supersonic Speeds

    NASA Technical Reports Server (NTRS)

    1960-01-01

    Tests were conducted on several types of porous parachutes, a paraglider, and a simulated retrorocket. Mach numbers ranged from 1.8-3.0, porosity from 20-80 percent, and camera speeds from 1680-3000 frames per second (fps) in trials with porous parachutes. Trials of reefed parachutes were conducted at Mach number 2.0 and reefing of 12-33 percent at camera speeds of 600 fps. A flexible parachute with an inflatable ring in the periphery of the canopy was tested at Reynolds number 750,000 per foot, Mach number 2.85, porosity of 28 percent, and camera speed of 3600 fps. A vortex-ring parachute was tested at Mach number 2.2 and camera speed of 3000 fps. The paraglider, with a sweepback of 45 degrees at an angle of attack of 45 degrees, was tested at Mach number 2.65, drag coefficient of 0.200, and lift coefficient of 0.278 at a camera speed of 600 fps. A cold air jet exhausting upstream from the center of a bluff body was used to simulate a retrorocket. The free-stream Mach number was 2.0, free-stream dynamic pressure was 620 lb/sq ft, jet-exit static pressure ratio was 10.9, and camera speed was 600 fps.

  17. Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD

    NASA Astrophysics Data System (ADS)

    Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

    2006-02-01

    We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs. This camera had about ten times the sensitivity of standard high-speed cameras and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper summarizes our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

  18. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where the processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm is inappropriate to perform directly during the real-time process. To cope with this issue, here we present a novel high-speed, real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates has been reduced by one order of magnitude and the accuracy of the out-of-plane coordinate has been tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
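    The precomputed-mapping idea can be sketched in a few lines of NumPy. Note this is only an illustration of the LUT principle: the single-coefficient radial distortion model and nearest-neighbor resampling below are simplifying assumptions, not the paper's calibration model.

    ```python
    import numpy as np

    def build_undistort_lut(h, w, k1, cx=None, cy=None):
        """Precompute, once, the source-pixel index for every corrected pixel,
        assuming a one-coefficient radial distortion model (illustrative)."""
        cx = w / 2 if cx is None else cx
        cy = h / 2 if cy is None else cy
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        x, y = xs - cx, ys - cy
        r2 = (x**2 + y**2) / (max(h, w) ** 2)   # normalized squared radius
        scale = 1.0 + k1 * r2                    # distorted/ideal radius ratio
        src_x = np.clip(np.round(cx + x * scale), 0, w - 1).astype(np.intp)
        src_y = np.clip(np.round(cy + y * scale), 0, h - 1).astype(np.intp)
        return src_y, src_x

    def undistort(frame, lut):
        """Per-frame correction is a single gather via fancy indexing; no
        model evaluation happens in the real-time path."""
        src_y, src_x = lut
        return frame[src_y, src_x]
    ```

    The expensive trigonometry and polynomial evaluation run once in `build_undistort_lut`; each incoming fringe image then costs one memory gather, which is what makes the approach compatible with real-time rates.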

  19. ePix100 camera: Use and applications at LCLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carini, G. A., E-mail: carini@slac.stanford.edu; Alonso-Mori, R.; Blaj, G.

    2016-07-27

    The ePix100 x-ray camera is a new system designed and built at SLAC for experiments at the Linac Coherent Light Source (LCLS). The camera is the first member of a family of detectors built around a single hardware and software platform, supporting a variety of front-end chips. With a readout speed of 120 Hz, matching the LCLS repetition rate, a noise lower than 80 e- rms, and pixels of 50 µm × 50 µm, this camera offers a viable alternative to fast-readout, direct-conversion scientific CCDs in imaging mode. The detector, designed for applications such as X-ray Photon Correlation Spectroscopy (XPCS) and wavelength-dispersive X-ray Emission Spectroscopy (XES) in the energy range from 2 to 10 keV and above, comprises up to 0.5 Mpixels in a very compact form factor. In this paper, we report the performance of the camera during its first use at LCLS.

  20. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size has a significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information of individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm  ×  1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%. The one-camera system has an error greater than 25%. The visualization of a 3D bubble rising demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in the single camera measurement.
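    Space carving, as used above, keeps only the voxels whose projections fall inside every camera's silhouette. A minimal NumPy sketch of the carving step, under the strong simplifying assumption of three orthographic, axis-aligned views (the paper's four calibrated perspective cameras require a full projection model):

    ```python
    import numpy as np

    def space_carve(silhouettes, grid_n=32):
        """Carve a voxel cube from orthographic silhouettes.

        silhouettes: dict mapping viewing axis (0, 1, or 2) to a 2-D boolean
        mask of shape (grid_n, grid_n). A voxel survives only if its
        projection lies inside every silhouette."""
        occupied = np.ones((grid_n,) * 3, dtype=bool)
        for axis, mask in silhouettes.items():
            # Broadcast the 2-D mask along the viewing axis and intersect.
            occupied &= np.expand_dims(mask, axis=axis)
        return occupied
    ```

    Bubble volume then follows by counting surviving voxels and multiplying by the voxel volume; adding views can only remove voxels, which is why the four-camera system tightens the volume estimate relative to one- or two-camera setups.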

  1. An Application Of High-Speed Photography To The Real Ignition Course Of Composite Propellants

    NASA Astrophysics Data System (ADS)

    Fusheng, Zhang; Gongshan, Cheng; Yong, Zhang; Fengchun, Li; Fanpei, Lei

    1989-06-01

    The key contribution of this paper is the investigation of the actual solid rocket motor behavior and the ignition delay time of AP/HTPB composite propellant ignited by high-energy pyrotechnics containing condensed particles. In the experiments, a high-speed camera, a pressure transducer, a photodiode, and a synchronization control circuit designed by the authors were used to observe and record the entire course and details of the ignition synchronously, so that the pressure signal, the photodiode signal, and the high-speed photography frames correspond one to one.

  2. Interferometric Dynamic Measurement: Techniques Based on High-Speed Imaging or a Single Photodetector

    PubMed Central

    Fu, Yu; Pedrini, Giancarlo

    2014-01-01

    In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at every instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capture rate, while photodetector-based technology can only measure at a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion mainly focuses on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the effort made by researchers to improve the measurement capabilities of interferometry-based techniques to cover the requirements of industrial applications. PMID:24963503

  3. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; its sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  4. Dynamic photoelasticity by TDI imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    2001-06-01

    High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras to a certain extent in static experiments. Recently, there has been a lot of interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording, along with system limitations, are explained in the paper.

  5. The application of holography as a real-time three-dimensional motion picture camera

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L.

    1973-01-01

    A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.

  6. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view during the second pass through the four maneuvers were coded, along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning, for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies that augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
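    Recovering a 3D point from two or more synchronized views is classically done by linear (DLT) triangulation. The sketch below shows that standard building block only; the authors' specific error-handling for fast particles and distant tie points is not reproduced, and the projection matrices in any real use would come from camera calibration.

    ```python
    import numpy as np

    def triangulate(P_list, uv_list):
        """Linear (DLT) triangulation: given each camera's 3x4 projection
        matrix P and the bomb's pixel coordinates (u, v) in that camera,
        solve for the 3-D point minimizing the algebraic error."""
        rows = []
        for P, (u, v) in zip(P_list, uv_list):
            # Each view contributes two linear constraints on the
            # homogeneous point X: u*(P3.X) = P1.X and v*(P3.X) = P2.X.
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        A = np.vstack(rows)
        _, _, vt = np.linalg.svd(A)      # null vector of A = homogeneous X
        X = vt[-1]
        return X[:3] / X[3]              # dehomogenize
    ```

    With three cameras the system is overdetermined, which is what allows the 2015 campaign's setup to both reduce the triangulation uncertainty and estimate it.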

  8. A compressed sensing X-ray camera with a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  9. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    DTIC Science & Technology

    2017-10-01

    ARL-TR-8185, October 2017. US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 - October 2017.

  10. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  11. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades

    NASA Astrophysics Data System (ADS)

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  12. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades.

    PubMed

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  13. The use of high-speed imaging in education

    NASA Astrophysics Data System (ADS)

    Kleine, H.; McNamara, G.; Rayner, J.

    2017-02-01

    Recent improvements in camera technology and the associated improved access to high-speed camera equipment have made it possible to use high-speed imaging not only in a research environment but also specifically for educational purposes. This includes high-speed sequences that are created both with and for a target audience of students in high schools and universities. The primary goal is to engage students in scientific exploration by providing them with a tool that allows them to see and measure otherwise inaccessible phenomena. High-speed imaging has the potential to stimulate students' curiosity, as the results are often surprising or may contradict initial assumptions. "Live" demonstrations in class or student-run experiments are well suited to having a profound influence on student learning. Another aspect is the production of high-speed images for demonstration purposes. While some of the approaches known from the application of high-speed imaging in a research environment can simply be transferred, additional techniques must often be developed to make the results more easily accessible for the targeted audience. This paper describes a range of student-centered activities that can be undertaken which demonstrate how student engagement and learning can be enhanced through the use of high-speed imaging using readily available technologies.

  14. Application of a digital high-speed camera and image processing system for investigations of short-term hypersonic fluids

    NASA Astrophysics Data System (ADS)

    Renken, Hartmut; Oelze, Holger W.; Rath, Hans J.

    1998-04-01

    This presentation describes the design and application of a digital high-speed image data capturing system with a subsequent image processing system, applied to the Bremer Hochschul-Hyperschallkanal (BHHK). It is also the result of the cooperation between the aerodynamics and image processing departments at the ZARM institute at the Drop Tower of Bremen. Similar systems are used by the combustion working group at ZARM and by other external project partners. The BHHK, the camera and image storage system, and the personal-computer-based image processing software are described next. Some examples of images taken at the BHHK are shown to illustrate the application. The new and very user-friendly 32-bit Windows system is capable of capturing all camera data at a maximum pixel clock of 43 MHz and of processing complete sequences of images in one step using a single convenient program.

  15. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    PubMed

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid at the individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that, for injury accidents, applying the Power model at the individual vehicle level made essentially no difference compared with the aggregated level. However, for fatalities the difference was greater, especially for roads with new cameras, where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and to individual vehicle speed changes when speed cameras were used. This applied both to injury accidents and fatalities. The effects were also larger for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigations on the use of the Power and/or Exponential model at the individual vehicle level would require more data at the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
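
The aggregate- vs. individual-level distinction the abstract describes can be sketched numerically. The speeds and the exponent below are illustrative assumptions (in the spirit of Nilsson's Power model), not values from the study:

```python
# Sketch of the aggregate- vs. individual-level application of the
# Power model. The exponent is an assumed illustrative value, not the
# study's coefficient.
def power_model(v0, v1, exponent):
    """Relative change in accident count for a speed change v0 -> v1."""
    return (v1 / v0) ** exponent

# Hypothetical individual vehicle speeds (km/h) before/after a camera;
# the fastest drivers reduce their speed the most, as in the abstract.
before = [70, 80, 90, 100, 110]
after = [69, 78, 86, 92, 96]

exp_fatal = 4.0  # assumed Power-model exponent for fatalities

# Aggregated: apply the model once, to the change in mean speed
mean_rr = power_model(sum(before) / len(before),
                      sum(after) / len(after), exp_fatal)

# Individual: apply the model per vehicle, then average the risk ratios
ind_rr = sum(power_model(v0, v1, exp_fatal)
             for v0, v1 in zip(before, after)) / len(before)

print(round(mean_rr, 3), round(ind_rr, 3))
```

With these toy numbers the two approaches give different risk estimates, which is the effect the study quantifies with real speed distributions.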

  16. High-speed and ultrahigh-speed cinematographic recording techniques

    NASA Astrophysics Data System (ADS)

    Miquel, J. C.

    1980-12-01

    A survey is presented of various high-speed and ultrahigh-speed cinematographic recording systems (covering a range of speeds from 100 to 14 million pps). Attention is given to the functional and operational characteristics of cameras and to details of high-speed cinematography techniques (including image processing and illumination). A list of cameras (many of them French) available in 1980 is presented.

  17. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, with the recording period switched sequentially between them. This increased the recording capacity to 288 images, a factor-of-two increase over the conventional ultrahigh-speed camera. A drawback was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  18. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  19. The use of uncalibrated roadside CCTV cameras to estimate mean traffic speed

    DOT National Transportation Integrated Search

    2001-12-01

    In this report, we present a novel approach for estimating traffic speed using a sequence of images from an uncalibrated camera. We assert that exact calibration is not necessary to estimate speed. Instead, to estimate speed, we use: (1) geometric r...

  20. Strategic options towards an affordable high-performance infrared camera

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining the low cost associated with CMOS sensors' success has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel uncooled InGaAs system with high sensitivity, low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact package. This camera paves the way towards mass-market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the price points essential for consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.

  1. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…

  2. Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter

    NASA Astrophysics Data System (ADS)

    Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun

    2018-04-01

    Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of camera-based vibration measurement is the massive volume of data the camera generates. To reduce the data collected, a camera with an electronic rolling shutter (ERS) is applied here to measure the frequency of a one-dimensional vibration whose frequency is much higher than the frame rate of the camera. Every row in an image captured by the ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam, and an air compressor to verify the validity of the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
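
The row-wise sampling idea can be sketched as follows; the frame rate, row count, and vibration frequency below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Minimal sketch of rolling-shutter sampling: each of the `rows` rows in
# one frame is exposed at a slightly later time, so a single frame
# yields `rows` displacement samples at the row rate, not the frame rate.
fps, rows = 30, 1080
t_row = 1 / (fps * rows)      # time between successive row exposures
f_vib = 420.0                 # vibration frequency, far above the 30 Hz frame rate

t = np.arange(rows) * t_row
displacement = np.sin(2 * np.pi * f_vib * t)   # row-wise displacement signal

# The effective sampling rate is the row rate (32.4 kHz here), so the
# 420 Hz vibration is recoverable from one frame's spectrum.
spectrum = np.abs(np.fft.rfft(displacement))
freqs = np.fft.rfftfreq(rows, d=t_row)
f_est = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
print(f_est)
```

The paper's sliding-window analysis extracts the row-wise displacements from real images; this sketch only shows why the row rate, not the frame rate, sets the measurable frequency range.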

  3. Reducing road traffic injuries: effectiveness of speed cameras in an urban setting.

    PubMed

    Pérez, Katherine; Marí-Dell'Olmo, Marc; Tobias, Aurelio; Borrell, Carme

    2007-09-01

    We assessed the effectiveness of speed cameras on Barcelona's beltway in reducing the numbers of road collisions and injuries and the number of vehicles involved in collisions. We designed a time-series study with a comparison group to assess the effects of the speed cameras. The "intervention group" was the beltway, and the comparison group consisted of arterial roads on which no fixed speed cameras had been installed. The outcome measures were number of road collisions, number of people injured, and number of vehicles involved in collisions. We fit the data to Poisson regression models that were adjusted according to trends and seasonality. The relative risk (RR) of a road collision occurring on the beltway after (vs before) installation of speed cameras was 0.73 (95% confidence interval [CI]=0.63, 0.85). This protective effect was greater during weekend periods. No differences were observed for arterial roads (RR=0.99; 95% CI=0.90, 1.10). Attributable fraction estimates for the 2 years of the study intervention showed 364 collisions prevented, 507 fewer people injured, and 789 fewer vehicles involved in collisions. Speed cameras installed in an urban setting are effective in reducing the numbers of road collisions and, consequently, the numbers of injured people and vehicles involved in collisions.
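
The reported relative risk and attributable counts follow standard rate-ratio arithmetic. A simplified sketch with hypothetical counts (the study fit Poisson regression models adjusted for trend and seasonality, which this back-of-envelope version omits):

```python
import math

# Crude rate ratio with a 95% CI from the log-scale standard error.
# Counts below are hypothetical, not the Barcelona data.
def rate_ratio(events_after, events_before, time_after=1.0, time_before=1.0):
    rr = (events_after / time_after) / (events_before / time_before)
    se = math.sqrt(1 / events_after + 1 / events_before)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical collision counts over equal-length before/after periods
rr, lo, hi = rate_ratio(events_after=730, events_before=1000)

# Collisions prevented = expected under "no cameras" minus observed
prevented = 1000 - 730
print(round(rr, 2), round(lo, 2), round(hi, 2), prevented)
```

The study's RR of 0.73 came from a Poisson model with a comparison group; the sketch only shows how a rate ratio, its CI, and an attributable count relate arithmetically.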

  4. C-RED one: ultra-high speed wavefront sensing in the infrared made possible

    NASA Astrophysics Data System (ADS)

    Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian

    2016-07-01

    First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. We show the performance of the camera and its main features, and compare them to other high-performance wavefront sensing cameras such as OCAM2 in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.

  5. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    A high-speed imaging facility is important and necessary for the formation of a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed in the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind a lens with long focal length, mainly consists of a lens-coupled gated image intensifier, CCD cameras and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval time between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each 1024×1024 in size, can be captured simultaneously by the developed camera. Besides, this camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.

  6. Large format geiger-mode avalanche photodiode LADAR camera

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison

    2013-05-01

    Recently Spectrolab has successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWAP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in² per pixel after data averaging. In order to increase the forward speed to suit more platforms and survey a large area more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate is improved by 10 times. With a programmable bin size from 0.3 ps to 0.5 ns and a 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of Spectrolab's 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.

  7. Development of an imaging system for single droplet characterization using a droplet generator.

    PubMed

    Minov, S Vulgarakis; Cointault, F; Vangeyte, J; Pieters, J G; Hijazi, B; Nuyttens, D

    2012-01-01

    The spray droplets generated by agricultural nozzles play an important role in the application accuracy and efficiency of plant protection products. The limitations of non-imaging techniques and the recent improvements in digital image acquisition and processing have increased the interest in using high-speed imaging techniques in pesticide spray characterisation. The goal of this study was to develop an imaging technique to evaluate the characteristics of a single spray droplet using a piezoelectric single droplet generator and a high-speed imaging technique. Tests were done with different camera settings, lenses, diffusers and light sources. The experiments have shown the necessity of a good image acquisition and processing system. Image analysis results contributed to selecting the optimal set-up for measuring droplet size and velocity, which consisted of a high-speed camera with a 6 μs exposure time, a microscope lens at a working distance of 43 cm resulting in a field of view of 1.0 cm x 0.8 cm, and a Xenon light source without diffuser used as a backlight. For measuring macro-spray characteristics such as the droplet trajectory, the spray angle and the spray shape, a Macro Video Zoom lens at a working distance of 14.3 cm with a bigger field of view of 7.5 cm x 9.5 cm, in combination with a halogen spotlight with a diffuser and the high-speed camera, can be used.
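
For scale, droplet velocity follows directly from two consecutive frames once the pixel scale and frame interval are known. The frame rate and centroid positions below are assumed illustrative values (only the field-of-view width matches the abstract):

```python
# Back-of-envelope droplet velocity from two consecutive high-speed
# frames. Pixel scale uses the reported 1.0 cm field-of-view width;
# image width, frame rate, and centroids are illustrative assumptions.
fov_width_cm = 1.0
image_width_px = 1000
scale = fov_width_cm / image_width_px     # cm per pixel

fps = 10000                               # assumed camera frame rate
dt = 1.0 / fps                            # time between frames (s)

# Droplet centroid (pixels) in two consecutive frames
(x1, y1), (x2, y2) = (420, 100), (423, 160)
dist_px = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
velocity_cm_s = dist_px * scale / dt      # ~6 m/s with these numbers
print(velocity_cm_s)
```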

  8. Characterization of dynamic droplet impaction and deposit formation on leaf surfaces

    USDA-ARS?s Scientific Manuscript database

    Elucidation of droplet dynamic impaction and deposition formation on leaf surfaces would assist to optimize application strategies, improve biological control efficiency, and minimize pesticide waste. A custom-designed system consisting of two high-speed digital cameras and a uniform-size droplet ge...

  9. A compressed sensing X-ray camera with a multilayer architecture

    DOE PAGES

    Wang, Zhehui; Iaroshenko, O.; Li, S.; ...

    2018-01-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
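
Case (a) sparsity can be illustrated with a toy sketch: when true hits occupy a tiny fraction of the pixels, a random readout subset still estimates the hit statistics well. All numbers below are illustrative assumptions, not the paper's data:

```python
import random

# Toy illustration of sparse random sampling (ROPS-style, case (a)).
random.seed(0)
n_pixels = 1024 * 1024
true_hits = set(random.sample(range(n_pixels), 2000))   # ~0.2% of pixels hit

sample_fraction = 0.05                                  # read out 5% of pixels
sampled = random.sample(range(n_pixels), int(n_pixels * sample_fraction))
hits_seen = sum(1 for p in sampled if p in true_hits)

# Scale the sampled count back up by the sampling fraction: with sparse
# hits, the estimate lands close to the true total of 2000.
est_total_hits = hits_seen / sample_fraction
print(hits_seen, est_total_hits)
```

The actual camera pairs this kind of sparse on-chip sampling with computational reconstruction; the sketch only shows why a random 5% readout loses little information when hits are rare.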

  10. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  11. A compressed sensing X-ray camera with a multilayer architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  12. Variable-Speed Instrumented Centrifuges

    NASA Technical Reports Server (NTRS)

    Chapman, David K.; Brown, Allan H.

    1991-01-01

    Report describes conceptual pair of centrifuges, speed of which varied to produce range of artificial gravities in zero-gravity environment. Image and data recording and controlled temperature and gravity provided for 12 experiments. Microprocessor-controlled centrifuges include video cameras to record stop-motion images of experiments. Potential applications include studies of effect of gravity on growth and on production of hormones in corn seedlings, experiments with magnetic flotation to separate cells, and electrophoresis to separate large fragments of deoxyribonucleic acid.

  13. Ultra-high-speed variable focus optics for novel applications in advanced imaging

    NASA Astrophysics Data System (ADS)

    Kang, S.; Dotsenko, E.; Amrhein, D.; Theriault, C.; Arnold, C. B.

    2018-02-01

    With the advancement of ultra-fast manufacturing technologies, high speed imaging with high 3D resolution has become increasingly important. Here we show the use of an ultra-high-speed variable focus optical element, the TAG Lens, to enable new ways to acquire 3D information from an object. The TAG Lens uses sound to adjust the index of refraction profile in a liquid and thereby can achieve focal scanning rates greater than 100 kHz. When combined with a high-speed pulsed LED and a high-speed camera, we can exploit this phenomenon to achieve high-resolution imaging through large depths. By combining the image acquisition with digital image processing, we can extract relevant parameters such as tilt and angle information from objects in the image. Due to the high speeds at which images can be collected and processed, we believe this technique can be used as an efficient method of industrial inspection and metrology for high throughput applications.

  14. Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen

    2018-02-01

    In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications, such as an adaptive optics system (AOS), for performing real-time wavefront measurement. Further, a Camera Link frame grabber with an embedded FPGA is adopted to speed up the sensor's reaction to variation, taking advantage of its very high data transmission bandwidth. Instead of waiting for a whole frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks, letting the image data transmission synchronize with the wavefront reconstruction. On the other hand, we design a mechanism to control the deformable mirror in the same FPGA and verify the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design can achieve a 266 Hz cyclic speed, limited by the camera frame rate, while leaving 40% of the logic slices for additional design flexibility.
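
Per subaperture, the Shack-Hartmann computation that such a pipeline parallelizes reduces to an intensity-weighted centroid whose shift from a flat-wavefront reference encodes the local slope. A minimal sketch (the subaperture size, toy spot, and reference are illustrative assumptions, not the paper's FPGA implementation):

```python
import numpy as np

# Center-of-gravity centroid of one Shack-Hartmann subaperture image.
def centroid(sub):
    """Intensity-weighted center of gravity (row, col) of a subaperture."""
    ys, xs = np.indices(sub.shape)
    total = sub.sum()
    return (ys * sub).sum() / total, (xs * sub).sum() / total

# One 8x8 subaperture whose spot is displaced from the center
sub = np.zeros((8, 8))
sub[5, 1:4] = [0.5, 1.0, 0.5]       # small toy "spot" centered on column 2
cy, cx = centroid(sub)

# Shift from the flat-wavefront reference is proportional to local tilt
ref = (3.5, 3.5)
slope_y, slope_x = cy - ref[0], cx - ref[1]
print(cy, cx)
```

Because each subaperture's centroid is independent, the per-row partial sums can be accumulated as pixel data streams in, which is what lets the FPGA overlap transmission with reconstruction.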

  15. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  16. Control system for several rotating mirror camera synchronization operation

    NASA Astrophysics Data System (ADS)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, precise measurement part and time delay part), the shutter control unit, the motor driving unit and the high voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at different speeds or at the same speed.

  17. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The normal AEC and AGC algorithm is not suitable for the aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is easier for human eyes to view and analyze. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system with its fast adjusting speed, high adaptability and high reliability in severe, complex environments.

  18. [Are speed cameras able to reduce traffic noise disturbances? An intervention study in Luebeck].

    PubMed

    Schnoor, M; Waldmann, A; Pritzkuleit, R; Tchorz, J; Gigla, B; Katalinic, A

    2014-12-01

    Disturbance by traffic noise can result in health problems in the long run. However, the subjective perception of noise plays an important role in their development. The aim of this study was to determine whether speed cameras are able to reduce the subjective traffic noise disturbance of residents of high-traffic roads in Luebeck. In August 2012 a speed camera was installed on each of 2 high-traffic roads in Luebeck (IG). Residents living within 1.5 km before and after the installed speed cameras received a postal questionnaire to evaluate their subjective noise perception before (t0), 8 weeks (t1) and 12 months (t2) after the installation of the speed camera. As controls (CG) we asked residents of another high-traffic road in Luebeck without speed cameras and residents of 2 roads with several consecutive speed cameras installed a few years ago. Furthermore, objective measurements of the traffic noise level were conducted. Response rates declined from 35.9% (t0) to 27.2% (t2). The proportion of women in the CG (61.4-63.7%) was significantly higher than in the IG (53.7-58.1%, p<0.05), and responders were significantly younger (46.5±20.5-50±22.0 vs. 59.1±17.0-60.5±16.9 years, p<0.05). A reduction of the perceived noise disturbance of 0.2 points, measured on a scale from 0 (no disturbance) to 10 (heavy disturbance), could be observed in both the IG and the CG. When asked directly, 15.2% of the IG and 19.3% of the CG reported a traffic noise reduction at t2. The objective measurement shows a mean reduction of 0.6 dB at t1. The change in noise level of 0.6 dB, which could only be perceived by direct comparison, is in line with the subjective noise perception. As a sole method to reduce traffic noise (and for health promotion), a speed camera is insufficient. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Fast noninvasive eye-tracking and eye-gaze determination for biomedical and remote monitoring applications

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.

    2004-04-01

    Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture is used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low contrast conditions.

  20. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.

  1. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved on Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second, which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
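
The contracting-grid idea is straightforward to sketch in software: evaluate the likelihood on a coarse grid, re-center on the best point, shrink the grid, and repeat. The grid size, contraction factor, and toy Gaussian log-likelihood below are illustrative assumptions, not the paper's parameters:

```python
# Minimal 2D contracting-grid maximum-likelihood search. Each iteration
# evaluates f on a fixed-size grid, re-centers on the best point, and
# halves the grid span, so the cost per event is fixed and pipelineable.
def contracting_grid_max(f, center, span, grid=5, iters=8, shrink=0.5):
    cx, cy = center
    for _ in range(iters):
        best = max(
            ((cx + span * (i / (grid - 1) - 0.5),
              cy + span * (j / (grid - 1) - 0.5))
             for i in range(grid) for j in range(grid)),
            key=lambda p: f(*p),
        )
        cx, cy = best
        span *= shrink
    return cx, cy

# Toy log-likelihood: a Gaussian surface peaked at (1.2, -0.7)
ll = lambda x, y: -((x - 1.2) ** 2 + (y + 0.7) ** 2)
x, y = contracting_grid_max(ll, center=(0.0, 0.0), span=4.0, iters=12)
print(round(x, 3), round(y, 3))
```

The fixed evaluation count per iteration (here 25 likelihood calls) is what makes the algorithm amenable to the FPGA/GPU pipelines the abstract describes.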

  2. The duration perception of loading applications in smartphone: Effects of different loading types.

    PubMed

    Zhao, Wenguo; Ge, Yan; Qu, Weina; Zhang, Kan; Sun, Xianghong

    2017-11-01

    The loading time of a smartphone application is an important issue that affects the satisfaction of phone users. This study evaluated the effects of a black loading screen (BLS) and an animation loading screen (ALS) during application loading on users' duration perception and satisfaction. A total of 43 volunteers were enrolled. They were asked to complete several tasks by clicking the icons of applications such as camera or message. The duration of the loading time for each application was manipulated. The participants were asked to estimate the duration and to evaluate the loading speed and their satisfaction. The results showed that the estimated duration increased, and satisfaction with the loading period declined, as the loading time increased. Compared with the BLS, the ALS prolonged the estimated duration and lowered the evaluations of speed and satisfaction. We also discussed the tendencies and key inflection points of the curves relating the estimated duration, speed evaluation and satisfaction to the loading time. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology have furthered its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, a factor often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed, real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. In addition, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude, and the accuracy of the out-of-plane coordinate tripled, once the distortion is eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
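    The pre-computed pixel-mapping idea can be illustrated as follows. This is a hedged sketch, not the authors' implementation: it assumes a one-term radial distortion model and nearest-neighbour lookup, whereas the paper's mapping comes from a full camera calibration.

```python
def build_undistort_lut(w, h, k1, cx, cy):
    """Precompute, once, which distorted-image pixel each corrected
    pixel should be read from (nearest-neighbour, one-term radial
    model). Doing this offline means the per-frame cost is a table
    lookup rather than a distortion-model evaluation."""
    lut = []
    for v in range(h):
        for u in range(w):
            x, y = u - cx, v - cy
            r2 = x * x + y * y
            scale = 1.0 + k1 * r2          # simple radial model
            ud = int(round(cx + x * scale))
            vd = int(round(cy + y * scale))
            ud = min(max(ud, 0), w - 1)    # clamp to the image
            vd = min(max(vd, 0), h - 1)
            lut.append(vd * w + ud)
    return lut

def undistort(img_flat, lut):
    """Apply the precomputed mapping to one flattened image."""
    return [img_flat[i] for i in lut]
```

    Because the table is built once, the per-frame cost is a single indexed read per pixel, which is what makes the correction compatible with real-time operation.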

  4. Turbulent Mixing and Combustion for High-Speed Air-Breathing Propulsion Application

    DTIC Science & Technology

    2007-08-12

    deficit (the velocity of the wake relative to the free-stream velocity), decays rapidly with downstream distance, so that the streamwise velocity is...switched laser with double-pulse option) and a new imaging system (high-resolution: 4008×2672 pix², low-noise (cooled) Cooke PCO-4000 CCD camera). The...was designed in-house for high-speed low-noise image acquisition. The KFS CCD image sensor was designed by Mark Wadsworth of JPL and has a resolution

  5. CMOS Image Sensors for High Speed Applications.

    PubMed

    El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions enabled by fabrication in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4–5 μm) due to limitations in the optics, CMOS technology scaling can allow an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications, allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented, and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

  6. Commercially available high-speed system for recording and monitoring vocal fold vibrations.

    PubMed

    Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya

    2009-12-01

    We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.

  7. New-style defect inspection system of film

    NASA Astrophysics Data System (ADS)

    Liang, Yan; Liu, Wenyao; Liu, Ming; Lee, Ronggang

    2002-09-01

    An inspection system has been developed for on-line detection of film defects, based on a combination of photoelectric imaging and digital image processing. The system runs at speeds of up to 60 m/min. The moving film is illuminated by an LED array emitting uniform infrared light (peak wavelength λp = 940 nm), and infrared images are captured with a high-quality, high-speed CCD camera. The application software, built with Visual C++ 6.0 under Windows, processes the images in real time using algorithms such as median filtering, edge detection and projection. The system comprises four modules, which are described in detail in the paper. On-line experimental results show that the inspection system recognizes defects precisely at high speed and runs reliably in practical application.
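    As an illustration of such a processing chain (median filtering to suppress sensor noise, then projection to localize defects), the following Python sketch flags abnormally dark columns in a frame. The thresholding logic is hypothetical and far simpler than a production inspection algorithm.

```python
def median3(img):
    """3x3 median filter to suppress impulsive sensor noise
    (borders are left unfiltered for brevity)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            win.sort()
            out[y][x] = win[4]          # 9-element median
    return out

def defect_columns(img, threshold):
    """Project each column onto its mean value; columns much darker
    than the frame mean are flagged as candidate defect locations."""
    h, w = len(img), len(img[0])
    col_mean = [sum(img[y][x] for y in range(h)) / h for x in range(w)]
    frame_mean = sum(col_mean) / w
    return [x for x in range(w) if frame_mean - col_mean[x] > threshold]
```

    The projection step reduces a 2-D search to a 1-D threshold test, which is one common way such systems keep up with high film transport speeds.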

  8. Capturing migration phenology of terrestrial wildlife using camera traps

    USGS Publications Warehouse

    Tape, Ken D.; Gustine, David D.

    2014-01-01

    Remote photography, using camera traps, can be an effective and noninvasive tool for capturing the migration phenology of terrestrial wildlife. We deployed 14 digital cameras along a 104-kilometer longitudinal transect to record the spring migrations of caribou (Rangifer tarandus) and ptarmigan (Lagopus spp.) in the Alaskan Arctic. The cameras recorded images at 15-minute intervals, producing approximately 40,000 images, including 6685 caribou observations and 5329 ptarmigan observations. The northward caribou migration was evident because the median caribou observation (i.e., herd median) occurred later with increasing latitude; average caribou migration speed also increased with latitude (r2 = .91). Except at the northernmost latitude, a northward ptarmigan migration was similarly evident (r2 = .93). Future applications of this method could be used to examine the conditions proximate to animal movement, such as habitat or snow cover, that may influence migration phenology.
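    The analysis described, taking the median observation date at each camera and regressing distance along the transect against it, can be sketched as follows (illustrative Python; the variable names are hypothetical and the study's actual statistics may differ).

```python
def median_day(days):
    """Median observation day-of-year at one camera (the 'herd median')."""
    s = sorted(days)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def migration_speed(km_along_transect, median_days):
    """Least-squares slope of distance against median passage day,
    i.e. kilometres of northward progress per day."""
    n = len(km_along_transect)
    mx = sum(median_days) / n
    my = sum(km_along_transect) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(median_days, km_along_transect))
    sxx = sum((x - mx) ** 2 for x in median_days)
    return sxy / sxx
```

    For cameras spaced along a 104-km transect, the fitted slope is the average migration speed in km per day, and the regression's r² quantifies how cleanly the passage progresses with latitude.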

  9. High-speed light field camera and frequency division multiplexing for fast multi-plane velocity measurements.

    PubMed

    Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen

    2015-09-21

    Non-intrusive fast 3D measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements with high data rates are possible. While scanning is one standard technique to add the third dimension, the volumetric data is not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired at a measurement rate of 0.5 kHz with a single high-speed camera.
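    The frequency-multiplexing step can be illustrated with a lock-in style demodulation: each measurement plane is tagged with its own modulation frequency, and mixing the recorded pixel signal with references at that frequency recovers that plane's contribution while rejecting the other. This is a simplified sketch, not the velocimeter's actual signal processing.

```python
import math

def lockin_amplitude(signal, f, fs):
    """Lock-in style demodulation: mix the sampled signal with sine and
    cosine references at frequency f, average (low-pass), and return
    the amplitude of that frequency channel."""
    n = len(signal)
    i_sum = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    q_sum = sum(s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    return 2 * math.hypot(i_sum, q_sum) / n

# One pixel's time series carrying two planes tagged with different
# modulation frequencies (amplitudes a1, a2 are the per-plane signals).
fs, n = 1000.0, 1000
f1, f2 = 50.0, 120.0
a1, a2 = 0.8, 0.3
sig = [a1 * math.sin(2 * math.pi * f1 * k / fs)
       + a2 * math.sin(2 * math.pi * f2 * k / fs) for k in range(n)]
```

    Demodulating `sig` at f1 recovers a1 and at f2 recovers a2, with crosstalk vanishing when the record spans an integer number of cycles of both tones.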

  10. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. The chapter also presents a basic understanding of how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are a very attractive option if one needs to acquire images at video rate as well as with longer integration times for dimmer samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.

  11. A high resolution and high speed 3D imaging system and its application on ATR

    NASA Astrophysics Data System (ADS)

    Lu, Thomas T.; Chao, Tien-Hsin

    2006-04-01

    The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct its 3D model. The stereo vision is achieved by employing a prism-and-mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about the potential targets and to separate the targets from the background and clutter noise. The added dimension of a 3D model provides additional features: the surface profile and range information of the target. The system is capable of removing false shadows from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to accommodate large objects and to perform area 3D modeling onboard a UAV.
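    For reference, range recovery in such a split-view stereo setup follows the standard pinhole triangulation relation Z = f·B/d, sketched below (illustrative only; the paper's calibration and matching are more involved).

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole triangulation: range Z = f * B / d, with focal length f
    in pixels, effective baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return f_px * baseline_m / disparity_px

# A point imaged 10 px apart in the two half-views of a rig with a
# 1000 px focal length and a 10 cm effective baseline lies 10 m away.
z = depth_from_disparity(1000.0, 0.10, 10.0)
```

    The prism-and-mirror arrangement fixes the effective baseline B at manufacture, which is one reason such single-camera rigs are easy to synchronize and calibrate.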

  12. Minimum Requirements for Taxicab Security Cameras*

    PubMed Central

    Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene

    2015-01-01

    Problem: The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Methods: Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Results: Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. Practical Applications: These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve facial identification capability. PMID:26823992

  13. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

    Conventionally, high-dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. To enable HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method that can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image is then achieved by fusing the two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
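    The fusion step can be sketched as follows, assuming two registered captures with a known exposure ratio (illustrative Python, not the authors' algorithm). For context, a 26.02 dB gain is exactly what 20·log10 of an exposure ratio of 20 would give, though the paper does not state its ratio.

```python
import math

def fuse_hdr(short, long_, ratio, sat=255):
    """Fuse two registered exposures of the same scan line: keep the
    long-exposure value where it is unsaturated (better SNR), else
    substitute the short-exposure value scaled by the exposure ratio."""
    return [float(l) if l < sat else float(s) * ratio
            for s, l in zip(short, long_)]

def dr_gain_db(ratio):
    """Dynamic range added by an exposure ratio, in decibels."""
    return 20 * math.log10(ratio)
```

    The fused line spans from the long exposure's noise floor up to `ratio` times the sensor's saturation level, which is where the dB gain comes from.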

  14. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard-frame-rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to run applications such as real-time optical flow estimation.
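    As a toy illustration of why high frame rates help motion estimation, a minimal 1-D block-matching displacement estimator is shown below: at high frame rates the inter-frame displacement stays small, so a short search range suffices. This is illustrative Python only; real optical flow estimators are 2-D and subpixel, and the paper's on-chip method is not reproduced here.

```python
def shift_1d(a, b, max_shift):
    """Estimate the displacement between two scan lines by block
    matching: return the shift minimizing the sum of absolute
    differences (SAD) over the overlapping interior samples."""
    best, best_s = float("inf"), 0
    for s in range(-max_shift, max_shift + 1):
        sad = sum(abs(a[i] - b[i + s]) for i in range(max_shift, len(a) - max_shift))
        if sad < best:
            best, best_s = sad, s
    return best_s
```

    Halving the inter-frame interval halves the expected displacement, so `max_shift` (and hence the per-pixel search cost) shrinks with frame rate; this is the kind of trade the on-chip processing exploits.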

  15. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera first captures a video of the objects vibrating under sound excitation. Subimages collected from a small region of the captured video are then reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate its potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
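    The core of the method, building orthonormal image bases from early frames and projecting later frames onto them, can be sketched as below. To keep the sketch dependency-free it uses Gram-Schmidt orthonormalization of flattened subimages rather than a full SVD; the paper itself derives the bases from the SVD, which additionally orders them by energy.

```python
def gram_schmidt(frames):
    """Build orthonormal image bases (OIBs) from flattened subimages.
    (The paper derives its bases from an SVD; Gram-Schmidt spans the
    same subspace and keeps this sketch dependency-free.)"""
    basis = []
    for f in frames:
        v = f[:]
        for b in basis:
            c = sum(x * y for x, y in zip(v, b))
            v = [x - c * y for x, y in zip(v, b)]   # remove component along b
        norm = sum(x * x for x in v) ** 0.5
        if norm > 1e-12:                            # skip linearly dependent frames
            basis.append([x / norm for x in v])
    return basis

def project(frame, basis_vec):
    """One sample of the vibration signal: the projection of a
    flattened subimage onto a chosen orthonormal image basis."""
    return sum(x * y for x, y in zip(frame, basis_vec))
```

    Projecting each incoming subimage onto one basis vector reduces a whole frame to a single scalar, which is why the per-frame cost is low enough for long high-speed recordings.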

  16. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture such phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype based on a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution severalfold, or even by hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023
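    The trade at the heart of per-pixel coded exposure can be illustrated with a 1-D toy model: within each tile, pixels sample the scene at staggered subframes of one camera frame, so a single low-rate readout carries several temporal slices at reduced spatial sampling. This sketch is illustrative only and does not reproduce the paper's DMD optics or median-quicksort reconstruction.

```python
def coded_capture(scene, tile=4):
    """scene[t][x] is the intensity at subframe t, pixel x. Within each
    tile of `tile` pixels, pixel x is exposed only during subframe
    x % tile, so one low-rate camera frame encodes `tile` time samples."""
    n = len(scene[0])
    return [scene[x % tile][x] for x in range(n)]

def decode(frame, tile=4):
    """Re-sort one coded frame into `tile` temporal slices, each with
    1/tile of the spatial samples (spatial resolution is traded for
    temporal resolution)."""
    n = len(frame)
    return [[frame[x] for x in range(n) if x % tile == t] for t in range(tile)]
```

    With a tile of 4, a 25 fps readout carries 100 temporal samples per second, matching the order of the gain the paper reports, at the cost of coarser spatial sampling per slice.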

  17. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture such phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype based on a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution severalfold, or even by hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera.

  18. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system with high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to −40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC connected to the camera via Gigabit Ethernet.

  19. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  20. Trends in high-speed camera development in the Union of Soviet Socialist Republics /USSR/ and People's Republic of China /PRC/

    NASA Astrophysics Data System (ADS)

    Hyzer, W. G.

    1981-10-01

    Significant advances in high-speed camera technology are being made in the Union of Soviet Socialist Republics (USSR) and People's Republic of China (PRC), which were revealed to the author during recent visits to both of these countries. Past and present developments in high-speed cameras are described in this paper based on personal observations by the author and on private communications with other technical observers. Detailed specifications on individual instruments are presented in those specific cases where such information has been revealed and could be verified.

  1. Modeling of digital information optical encryption system with spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

    State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should make it possible to build a high-speed optical encryption system. Results of modeling a digital-information optical encryption system with spatially incoherent illumination are presented. The input information is displayed with the first SLM and the encryption element with the second SLM. Factors taken into account are the resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), a low bit error rate and high cryptographic strength.

  2. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz, or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information with techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.

  3. The application of high-speed cinematography for the quantitative analysis of equine locomotion.

    PubMed

    Fredricson, I; Drevemo, S; Dalin, G; Hjertën, G; Björne, K

    1980-04-01

    Locomotor disorders constitute a serious problem in horse racing which will only be rectified by a better understanding of the causative factors associated with disturbances of gait. This study describes a system for the quantitative analysis of the locomotion of horses at speed. The method is based on high-speed cinematography with a semi-automatic system for analysing the films. The recordings are made with a 16 mm high-speed camera run at 500 frames per second (fps), and the films are analysed with special film-reading equipment and a minicomputer. The time and linear gait variables are presented in tabular form, and the angles and trajectories of the joints and body segments are presented graphically.

  4. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  5. Soft X-ray streak camera for laser fusion applications

    NASA Astrophysics Data System (ADS)

    Stradling, G. L.

    1981-04-01

    The development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development is reviewed as well as laser fusion and laser fusion diagnostics. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.

  6. Visualization of high speed liquid jet impaction on a moving surface.

    PubMed

    Guo, Yuchen; Green, Sheldon

    2015-04-17

    Two apparatuses for examining liquid jet impingement on a high-speed moving surface are described: an air cannon device (for examining surface speeds between 0 and 25 m/sec) and a spinning disk device (for examining surface speeds between 15 and 100 m/sec). The air cannon linear traverse is a pneumatic energy-powered system that is designed to accelerate a metal rail surface mounted on top of a wooden projectile. A pressurized cylinder fitted with a solenoid valve rapidly releases pressurized air into the barrel, forcing the projectile down the cannon barrel. The projectile travels beneath a spray nozzle, which impinges a liquid jet onto its metal upper surface, and the projectile then hits a stopping mechanism. A camera records the jet impingement, and a pressure transducer records the spray nozzle backpressure. The spinning disk set-up consists of a steel disk that reaches speeds of 500 to 3,000 rpm via a variable frequency drive (VFD) motor. A spray system similar to that of the air cannon generates a liquid jet that impinges onto the spinning disc, and cameras placed at several optical access points record the jet impingement. Video recordings of jet impingement processes are recorded and examined to determine whether the outcome of impingement is splash, splatter, or deposition. The apparatuses are the first that involve the high speed impingement of low-Reynolds-number liquid jets on high speed moving surfaces. In addition to its rail industry applications, the described technique may be used for technical and industrial purposes such as steelmaking and may be relevant to high-speed 3D printing.

  7. Visualization of High Speed Liquid Jet Impaction on a Moving Surface

    PubMed Central

    Guo, Yuchen; Green, Sheldon

    2015-01-01

    Two apparatuses for examining liquid jet impingement on a high-speed moving surface are described: an air cannon device (for examining surface speeds between 0 and 25 m/sec) and a spinning disk device (for examining surface speeds between 15 and 100 m/sec). The air cannon linear traverse is a pneumatic energy-powered system that is designed to accelerate a metal rail surface mounted on top of a wooden projectile. A pressurized cylinder fitted with a solenoid valve rapidly releases pressurized air into the barrel, forcing the projectile down the cannon barrel. The projectile travels beneath a spray nozzle, which impinges a liquid jet onto its metal upper surface, and the projectile then hits a stopping mechanism. A camera records the jet impingement, and a pressure transducer records the spray nozzle backpressure. The spinning disk set-up consists of a steel disk that reaches speeds of 500 to 3,000 rpm via a variable frequency drive (VFD) motor. A spray system similar to that of the air cannon generates a liquid jet that impinges onto the spinning disc, and cameras placed at several optical access points record the jet impingement. Video recordings of jet impingement processes are recorded and examined to determine whether the outcome of impingement is splash, splatter, or deposition. The apparatuses are the first that involve the high speed impingement of low-Reynolds-number liquid jets on high speed moving surfaces. In addition to its rail industry applications, the described technique may be used for technical and industrial purposes such as steelmaking and may be relevant to high-speed 3D printing. PMID:25938331

  8. Soft x-ray streak camera for laser fusion applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stradling, G.L.

    This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction of laser fusion and laser fusion diagnostics is presented. The need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.

  9. Speed cameras for the prevention of road traffic injuries and deaths.

    PubMed

    Wilson, Cecilia; Willis, Charlene; Hendrikz, Joan K; Le Brocque, Robyne; Bellamy, Nicholas

    2010-11-10

    It is estimated that by 2020, road traffic crashes will have moved from ninth to third in the world ranking of burden of disease, as measured in disability adjusted life years. The prevention of road traffic injuries is of global public health importance. Measures aimed at reducing traffic speed are considered essential to preventing road injuries; the use of speed cameras is one such measure. To assess whether the use of speed cameras reduces the incidence of speeding, road traffic crashes, injuries and deaths, we searched the following electronic databases, covering all available years up to March 2010: the Cochrane Library, MEDLINE (WebSPIRS), EMBASE (WebSPIRS), TRANSPORT, IRRD (International Road Research Documentation), TRANSDOC (European Conference of Ministers of Transport databases), Web of Science (Science and Social Science Citation Index), PsycINFO, CINAHL, EconLit, WHO database, Sociological Abstracts, Dissertation Abstracts, and Index to Theses. Randomised controlled trials, interrupted time series and controlled before-after studies that assessed the impact of speed cameras on speeding, road crashes, crashes causing injury and fatalities were eligible for inclusion. We independently screened studies for inclusion, extracted data, assessed methodological quality, reported study authors' outcomes and, where possible, calculated standardised results based on the information available in each study. Due to considerable heterogeneity between and within included studies, a meta-analysis was not appropriate. Thirty-five studies met the inclusion criteria. Compared with controls, the relative reduction in average speed ranged from 1% to 15% and the reduction in the proportion of vehicles speeding ranged from 14% to 65%. In the vicinity of camera sites, the pre/post reductions ranged from 8% to 49% for all crashes and 11% to 44% for fatal and serious injury crashes. Compared with controls, the relative improvement in pre/post injury crash proportions ranged from 8% to 50%. Despite the methodological limitations and the variability in the degree of signal-to-noise effect, the consistency of reported reductions in speed and crash outcomes across all studies shows that speed cameras are a worthwhile intervention for reducing the number of road traffic injuries and deaths. However, whilst the evidence base clearly demonstrates a positive direction of the effect, the overall magnitude of this effect is currently not deducible due to heterogeneity and a lack of methodological rigour. More studies of a scientifically rigorous and homogeneous nature are necessary to determine the magnitude of the effect.
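
The "relative reduction compared with controls" reported above can be standardized in several ways; one common sketch compares the pre/post ratio at camera sites against the pre/post ratio at control sites. The counts below are hypothetical, not taken from the review:

```python
def relative_reduction(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Relative pre/post reduction at treated sites, net of the control trend.

    Returns the fraction by which the treated pre->post ratio fell
    relative to the control pre->post ratio.
    """
    return 1.0 - (post_treat / pre_treat) / (post_ctrl / pre_ctrl)

# Hypothetical crash counts: camera sites fell from 100 to 60 while
# control sites stayed flat at 100.
print(relative_reduction(100, 60, 100, 100))  # 0.4, i.e. a 40% relative reduction
```

Normalizing against controls in this way is what separates the camera effect from background trends such as falling traffic volumes.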

  10. Speed cameras for the prevention of road traffic injuries and deaths.

    PubMed

    Wilson, Cecilia; Willis, Charlene; Hendrikz, Joan K; Le Brocque, Robyne; Bellamy, Nicholas

    2010-10-06

    It is estimated that by 2020, road traffic crashes will have moved from ninth to third in the world ranking of burden of disease, as measured in disability adjusted life years. The prevention of road traffic injuries is of global public health importance. Measures aimed at reducing traffic speed are considered essential to preventing road injuries; the use of speed cameras is one such measure. To assess whether the use of speed cameras reduces the incidence of speeding, road traffic crashes, injuries and deaths, we searched the following electronic databases, covering all available years up to March 2010: the Cochrane Library, MEDLINE (WebSPIRS), EMBASE (WebSPIRS), TRANSPORT, IRRD (International Road Research Documentation), TRANSDOC (European Conference of Ministers of Transport databases), Web of Science (Science and Social Science Citation Index), PsycINFO, CINAHL, EconLit, WHO database, Sociological Abstracts, Dissertation Abstracts, and Index to Theses. Randomised controlled trials, interrupted time series and controlled before-after studies that assessed the impact of speed cameras on speeding, road crashes, crashes causing injury and fatalities were eligible for inclusion. We independently screened studies for inclusion, extracted data, assessed methodological quality, reported study authors' outcomes and, where possible, calculated standardised results based on the information available in each study. Due to considerable heterogeneity between and within included studies, a meta-analysis was not appropriate. Thirty-five studies met the inclusion criteria. Compared with controls, the relative reduction in average speed ranged from 1% to 15% and the reduction in the proportion of vehicles speeding ranged from 14% to 65%. In the vicinity of camera sites, the pre/post reductions ranged from 8% to 49% for all crashes and 11% to 44% for fatal and serious injury crashes. Compared with controls, the relative improvement in pre/post injury crash proportions ranged from 8% to 50%. Despite the methodological limitations and the variability in the degree of signal-to-noise effect, the consistency of reported reductions in speed and crash outcomes across all studies shows that speed cameras are a worthwhile intervention for reducing the number of road traffic injuries and deaths. However, whilst the evidence base clearly demonstrates a positive direction of the effect, the overall magnitude of this effect is currently not deducible due to heterogeneity and a lack of methodological rigour. More studies of a scientifically rigorous and homogeneous nature are necessary to determine the magnitude of the effect.

  11. High-speed flow visualization in hypersonic, transonic, and shock tube flows

    NASA Astrophysics Data System (ADS)

    Kleine, H.; Olivier, H.

    2017-02-01

    High-speed flow visualisation has played an important role in the investigations conducted at the Stoßwellenlabor of the RWTH Aachen University for many decades. In addition to applying the techniques of high-speed imaging, this laboratory has been actively developing new or enhanced visualisation techniques and approaches such as various schlieren methods or time-resolved Mach-Zehnder interferometry. The investigated high-speed flows are inherently highly transient, with flow Mach numbers ranging from about M = 0.7 to M = 8. The availability of modern high-speed cameras has allowed us to expand the investigations into problems where reduced reproducibility had so far limited the amount of information that could be extracted from a limited number of flow visualisation records. Following a brief historical overview, some examples of recent studies are given, which represent the breadth of applications in which high-speed imaging has been an essential diagnostic tool to uncover the physics of high-speed flows. Applications include the stability of hypersonic corner flows, the establishment of shock wave systems in transonic airfoil flow, and the complexities of the interactions of shock waves with obstacles of various shapes.

  12. Aerodynamic Performance and Particle Image Velocimetery of Piezo Actuated Biomimetic Manduca Sexta Engineered Wings Towards the Design and Application of a Flapping Wing Flight Vehicle

    DTIC Science & Technology

    2013-12-01

    This thesis addresses expanding and evolving mission areas, especially in the arena of bio-inspired Flapping Wing Micro Air Vehicles (FWMAV); excerpted topics include a displacement sensor, bio vs. engineered wing modes, and high-speed camera specifications.

  13. Guidebook to School Publications Photography.

    ERIC Educational Resources Information Center

    Glowacki, Joseph W.

    This guidebook for school publications photographers discusses both the self-image of the publications photographer and various aspects of photography, including components of the camera, shutter speed and action pictures, light meters, handling cameras, lenses, developing film, pushing film beyond the emulsion-speed rating recommended by the…

  14. Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Astrophysics Data System (ADS)

    Csorba, Illes P.

    Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second-generation image intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, and development of a quartz envelope heater.

  15. High-Speed Camera and High-Vision Camera Observations of TLEs from Jet Aircraft in Winter Japan and in Summer US

    NASA Astrophysics Data System (ADS)

    Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The time evolution and spatial distributions of transient luminous events (TLEs) are the key parameters for identifying the relationship between TLEs and their parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scales of TLEs are typically less than a few milliseconds, new imaging techniques that enable us to capture images with a high time resolution of < 1 ms are needed. By courtesy of the "Cosmic Shore" project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010, in winter Japan. The high-speed II-CMOS camera can capture images at 8,300 frames per second (fps), which corresponds to a time resolution of 120 us. The high-vision three-CCD camera can capture high-quality, true-color images of TLEs with a 1920x1080 pixel size at a frame rate of 30 fps. During the two observation flights, we succeeded in detecting 28 sprite events and 3 elve events in total. Following this success, we conducted a combined aircraft and ground-based campaign of TLE observations at the High Plains in the summer US, installing the same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 to July 10, 2011, we operated aircraft observations on 8 nights, capturing TLE images for over a hundred events with the high-vision camera and acquiring over 40 simultaneous high-speed images. At the presentation, we will outline the two aircraft campaigns, describe the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and present initial results of the high-speed image data analysis of TLEs in the summer US.
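
The quoted time resolutions follow directly from the frame rates; a one-line check:

```python
def time_resolution_us(frames_per_second):
    """Inter-frame interval in microseconds."""
    return 1e6 / frames_per_second

print(round(time_resolution_us(8300)))  # 120 -- the high-speed II-CMOS camera
print(round(time_resolution_us(30)))    # 33333 -- the 30 fps three-CCD camera
```

The three orders of magnitude between the two cameras is why only the high-speed channel resolves sub-millisecond TLE development.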

  16. A solid state lightning propagation speed sensor

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Rust, W. David

    1989-01-01

    A device to measure the propagation speeds of cloud-to-ground lightning has been developed. The lightning propagation speed (LPS) device consists of eight solid state silicon photodetectors mounted behind precision horizontal slits in the focal plane of a 50-mm lens on a 35-mm camera. Although the LPS device produces results similar to those obtained from a streaking camera, the LPS device has the advantages of smaller size, lower cost, mobile use, and easier data collection and analysis. The maximum accuracy for the LPS is 0.2 microsec, compared with about 0.8 microsec for the streaking camera. It is found that the return stroke propagation speed for triggered lightning differs from that for natural lightning if measurements are taken over channel segments less than 500 m. It is suggested that there are no significant differences between the propagation speeds of positive and negative flashes. Also, differences between natural and triggered dart leaders are discussed.
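
The LPS geometry maps each focal-plane slit onto a horizontal slice of the lightning channel, so speed follows from slit spacing, focal length, range, and the delay between detectors. A sketch by similar triangles; the slit spacing, range, and delay below are hypothetical illustration values, not the instrument's:

```python
def segment_length(slit_spacing_m, focal_length_m, range_m):
    """Channel length spanned between two slits, by similar triangles."""
    return range_m * slit_spacing_m / focal_length_m

def propagation_speed(slit_spacing_m, focal_length_m, range_m, delay_s):
    """Return-stroke speed from the delay between two photodetectors."""
    return segment_length(slit_spacing_m, focal_length_m, range_m) / delay_s

# Hypothetical example: 5 mm slit spacing behind the 50-mm lens, flash
# 5 km away -> a 500 m channel segment; a 5-microsecond delay between
# detectors then gives 1e8 m/s.
print(round(propagation_speed(0.005, 0.050, 5000.0, 5e-6)))
```

Note how the 500 m segment in this made-up example matches the scale at which the abstract says triggered and natural strokes start to differ.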

  17. Inspecting rapidly moving surfaces for small defects using CNN cameras

    NASA Astrophysics Data System (ADS)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm in real time. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz on line cameras, far beyond what available line-camera systems can deliver. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.
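
The figure of merit, the ratio of lateral feature size to surface speed, fixes the minimum acquisition rate: a feature must be sampled at least once while it passes the sensor. A sketch (the oversampling factor is an assumption, not from the article):

```python
def required_rate_hz(surface_speed_m_s, feature_size_m, samples_per_feature=1):
    """Minimum frame/line rate so each feature is sampled as it passes."""
    return samples_per_feature * surface_speed_m_s / feature_size_m

# A 100 um defect on a wire moving at 10 m/s passes a fixed line in 10 us,
# so even one sample per feature already demands 100 kHz acquisition.
print(round(required_rate_hz(10.0, 100e-6)))  # 100000
```

Resolving such a defect with several samples pushes the requirement into the hundreds of kilohertz, which is the regime the on-chip CNN processing makes reachable.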

  18. The application of the high-speed photography in the experiments of boiling liquid expanding vapor explosions

    NASA Astrophysics Data System (ADS)

    Chen, Sining; Sun, Jinhua; Chen, Dongliang

    2007-01-01

    The liquefied-petroleum gas tank in some failure situations may release its contents, and then a series of hazards with different degrees of severity may occur. The most dangerous accident is the boiling liquid expanding vapor explosion (BLEVE). In this paper, a small-scale experiment was established to experimentally investigate the possible processes that could lead to a BLEVE. As there is some danger in using LPG in the experiments, water was used as the test fluid. The change of pressure and temperature was measured during the experiment. The ejection of the vapor and the subsequent two-phase flow were recorded by a high-speed video camera. It was observed that two pressure peaks result after the pressure is released. The vapor was first ejected at a high speed; there was a sudden pressure drop, which left the liquid superheated. The superheated liquid then boiled violently, causing the liquid contents to swell and the vapor pressure in the tank to increase rapidly. The second pressure peak was possibly due to the swell of this two-phase flow, which likely impacted the wall of the tank violently at high speed. The whole evolution of the two-phase flow was recorded through photos captured by the high-speed video camera, and the "two step" BLEVE process was confirmed.

  19. An evaluation of Winnipeg's photo enforcement safety program: results of time series analyses and an intersection camera experiment.

    PubMed

    Vanlaar, Ward; Robertson, Robyn; Marcoux, Kyla

    2014-01-01

    The objective of this study was to evaluate the impact of Winnipeg's photo enforcement safety program on speeding, i.e., "speed on green", and red-light running behavior at intersections as well as on crashes resulting from these behaviors. ARIMA time series analyses regarding crashes related to red-light running (right-angle crashes and rear-end crashes) and crashes related to speeding (injury crashes and property damage only crashes) occurring at intersections were conducted using monthly crash counts from 1994 to 2008. A quasi-experimental intersection camera experiment was also conducted using roadside data on speeding and red-light running behavior at intersections. These data were analyzed using logistic regression analysis. The time series analyses showed that for crashes related to red-light running, there had been a 46% decrease in right-angle crashes at camera intersections, but that there had also been an initial 42% increase in rear-end crashes. For crashes related to speeding, analyses revealed that the installation of cameras was not associated with increases or decreases in crashes. Results of the intersection camera experiment show that there were significantly fewer red light running violations at intersections after installation of cameras and that photo enforcement had a protective effect on speeding behavior at intersections. However, the data also suggest photo enforcement may be less effective in preventing serious speeding violations at intersections. Overall, Winnipeg's photo enforcement safety program had a positive net effect on traffic safety. Results from both the ARIMA time series and the quasi-experimental design corroborate one another. However, the protective effect of photo enforcement is not equally pronounced across different conditions so further monitoring is required to improve the delivery of this measure. Results from this study as well as limitations are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
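
The roadside before/after comparison in the intersection experiment reduces to comparing violation proportions; a minimal sketch using an odds ratio (a logistic regression on a single before/after indicator yields the same quantity). The counts below are hypothetical, not Winnipeg's data:

```python
def odds_ratio(viol_pre, total_pre, viol_post, total_post):
    """Post-vs-pre odds ratio for violations; values < 1 mean fewer violations."""
    odds_pre = viol_pre / (total_pre - viol_pre)
    odds_post = viol_post / (total_post - viol_post)
    return odds_post / odds_pre

# Hypothetical counts: 200 red-light runners observed in 10,000 vehicles
# before camera installation, 100 in 10,000 after.
print(round(odds_ratio(200, 10000, 100, 10000), 3))  # 0.495
```

An odds ratio near 0.5 in this made-up example would correspond to roughly a halving of violations, the direction of effect the study reports for red-light running.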

  20. Avionic Pictorial Tunnel-/Pathway-/Highway-In-The-Sky Workshops

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V. (Compiler)

    2003-01-01

    In 1994-96, Langley Research Center held a series of interactive workshops investigating highway-in-the-sky concepts, which enable precise flight path control. These workshops brought together government and industry display designers and pilots to discuss and fly various concepts in an iterative manner. The primary emphasis of the first workshops was the utility and usability of pathways and the pros and cons of various features available. The final workshops were focused on the specific applications to the eXternal Visibility System (XVS) of the NASA High-speed Research Program, which was concerned with replacement of the forward windows in a High-speed Civil Transport with electronic displays and high resolution video cameras to enable a "No-Droop" configuration. The primary concerns in the XVS application were the prevention of display clutter and obscuration of hazards, as the camera image was the primary means of traffic separation in clear visibility conditions. These concerns were not so prominent in the first workshops, which assumed a Synthetic Vision System application in which hazard locations are known and obscuration is handled easily. The resulting consensus concept has been used since in simulation and flight test activities of many Government programs, and other concepts have been influenced by the workshop discussions.

  1. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.

  2. Using a high-speed movie camera to evaluate slice dropping in clinical image interpretation with stack mode viewers.

    PubMed

    Yakami, Masahiro; Yamamoto, Akira; Yanagisawa, Morio; Sekiguchi, Hiroyuki; Kubo, Takeshi; Togashi, Kaori

    2013-06-01

    The purpose of this study is to verify objectively the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communication in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4% of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate.
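
The half-refresh-rate threshold suggests a naive worst-case model: if a viewer can present at most refresh/2 distinct slices per second, any faster paging drops the remainder. A sketch; this simple model is an assumption for illustration, not the study's analysis:

```python
def dropped_fraction(paging_rate_hz, refresh_rate_hz):
    """Fraction of slices never shown if at most refresh/2 slices/s appear."""
    shown = min(paging_rate_hz, refresh_rate_hz / 2.0)
    return 1.0 - shown / paging_rate_hz

print(dropped_fraction(20, 60))            # 0.0 -- paging below half the refresh rate
print(round(dropped_fraction(63, 60), 3))  # 0.524 -- same order as the worst case above
```

The model also makes the paper's recommendation concrete: on a 60 Hz display, keeping the paging rate under 30 slices per second avoids dropping entirely.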

  3. A deep-sea, high-speed, stereoscopic imaging system for in situ measurement of natural seep bubble and droplet characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Binbin; Socolofsky, Scott A.

    2015-10-01

    Development, testing, and application of a deep-sea, high-speed, stereoscopic imaging system are presented. The new system is designed for field-ready deployment, focusing on measurement of the characteristics of natural seep bubbles and droplets with high-speed and high-resolution image capture. The stereo view configuration allows precise evaluation of the physical scale of the moving particles in image pairs. Two laboratory validation experiments (a continuous bubble chain and an airstone bubble plume) were carried out to test the calibration procedure, performance of image processing and bubble matching algorithms, three-dimensional viewing, and estimation of bubble size distribution and volumetric flow rate. The results showed that the stereo view was able to improve the individual bubble size measurement over the single-camera view by up to 90% in the two validation cases, with the single-camera being biased toward overestimation of the flow rate. We also present the first application of this imaging system in a study of natural gas seeps in the Gulf of Mexico. The high-speed images reveal the rigidity of the transparent bubble interface, indicating the presence of clathrate hydrate skins on the natural gas bubbles near the source (lowest measurement 1.3 m above the vent). We estimated the dominant bubble size at the seep site Sleeping Dragon in Mississippi Canyon block 118 to be in the range of 2-4 mm and the volumetric flow rate to be 0.2-0.3 L/min during our measurements from 17 to 21 July 2014.
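
Given a bubble release rate and a dominant diameter, the volumetric flow rate follows from the spherical bubble volume. A sketch using one diameter inside the reported 2-4 mm range; the release rate is a hypothetical value chosen for illustration:

```python
import math

def flow_rate_l_min(bubbles_per_second, diameter_m):
    """Volumetric flow rate in L/min for spheres of a single diameter."""
    volume_m3 = (math.pi / 6.0) * diameter_m ** 3
    return bubbles_per_second * volume_m3 * 1000.0 * 60.0

# A hypothetical 300 bubbles/s of 3 mm diameter gives about 0.25 L/min,
# inside the 0.2-0.3 L/min range reported for Sleeping Dragon.
print(round(flow_rate_l_min(300, 0.003), 3))  # 0.254
```

In practice the system integrates over the measured size distribution rather than a single diameter, but the single-diameter form shows why the stereo correction of bubble size matters: flow rate scales with diameter cubed.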

  4. On The Export Control Of High Speed Imaging For Nuclear Weapons Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Scott Avery; Altherr, Michael Robert

    Since the Manhattan Project, the use of high-speed photography and its cousins, flash radiography1 and schlieren photography, has been a technological proliferation concern. Indeed, like the supercomputer, the development of high-speed photography as we now know it essentially grew out of the nuclear weapons program at Los Alamos2,3,4. Naturally, during the course of the last 75 years the technology associated with computers and cameras has been export controlled by the United States and others to prevent both proliferation among non-P5 nations and technological parity among potential adversaries among P5 nations. Here we revisit these issues as they relate to high-speed photographic technologies and make recommendations about how future restrictions, if any, should be guided.

  5. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Position System) time stamped, allowing comparisons of events between the cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
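
The visual triangulation reduces to intersecting two bearing rays from known camera positions. A geometric sketch of that step; the camera coordinates and azimuths below are illustrative, not the RAMMER network's actual geometry:

```python
import math

def triangulate(p1, az1, p2, az2):
    """Intersect two bearing rays; azimuths in radians, clockwise from north.

    Each camera at position p sees the flash along direction
    (sin(az), cos(az)); solve p1 + t1*d1 = p2 + t2*d2 for the crossing.
    """
    d1 = (math.sin(az1), math.cos(az1))
    d2 = (math.sin(az2), math.cos(az2))
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) + d2[0] * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Illustrative geometry: two cameras 13 km apart both sight a return
# stroke at (6.5 km, 5 km); the ray intersection recovers that point.
p1, p2 = (0.0, 0.0), (13000.0, 0.0)
az1 = math.atan2(6500.0, 5000.0)       # bearing from camera 1
az2 = math.atan2(-6500.0, 5000.0)      # bearing from camera 2
print(triangulate(p1, az1, p2, az2))   # ~(6500.0, 5000.0)
```

With more than two cameras the redundant intersections can be averaged, which is one reason the network was laid out so every flash is seen from several angles.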

  6. Measuring full-field displacement spectral components using photographs taken with a DSLR camera via an analogue Fourier integral

    NASA Astrophysics Data System (ADS)

    Javh, Jaka; Slavič, Janko; Boltežar, Miha

    2018-02-01

    Instantaneous full-field displacement fields can be measured using cameras. In fact, using high-speed cameras full-field spectral information up to a couple of kHz can be measured. The trouble is that high-speed cameras capable of measuring high-resolution fields-of-view at high frame rates prove to be very expensive (from tens to hundreds of thousands of euro per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLR, mirrorless and others. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking of the lights modulates the intensity changes of the filmed scene and the camera-image acquisition makes the integration over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
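
The principle, blinking the lights at the frequency of interest so the exposure itself performs the Fourier integral, can be checked numerically. A sketch simulating one pixel's modulated intensity; the amplitude, phase, and frequency are illustrative values, not from the paper:

```python
import math

def exposure_fourier_coefficient(amplitude, phase, f_hz, exposure_s, dt=1e-5):
    """Numerically integrate signal * blink over one long exposure.

    The camera sums intensity modulated by a light blinking as cos(2*pi*f*t);
    normalizing by the exposure time recovers the cosine Fourier coefficient.
    """
    n = int(round(exposure_s / dt))
    acc = 0.0
    for i in range(n):
        t = i * dt
        signal = amplitude * math.cos(2.0 * math.pi * f_hz * t + phase)
        blink = math.cos(2.0 * math.pi * f_hz * t)
        acc += signal * blink * dt
    return 2.0 * acc / exposure_s  # -> amplitude * cos(phase)

# A 100 Hz vibration of amplitude 2.0 and phase 0.5 rad, integrated over a
# 1 s exposure, yields 2.0 * cos(0.5):
print(round(exposure_fourier_coefficient(2.0, 0.5, 100.0, 1.0), 3))  # 1.755
```

Repeating the exposure with the blink shifted by a quarter period gives the sine coefficient, so two slow-camera frames recover amplitude and phase at a frequency far above the frame rate.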

  7. Modification of a Kowa RC-2 fundus camera for self-photography without the use of mydriatics.

    PubMed

    Philpott, D E; Bailey, P F; Harrison, G; Turnbill, C

    1979-01-01

    Research on retinal circulation during space flight required the development of a simple technique to provide self-monitoring of blood vessel changes in the fundus without the use of mydriatics. A Kowa RC-2 fundus camera was modified for self-photography by the use of a bite plate for positioning and cross hairs for focusing the subject's retina relative to the film plane. Dilation of the pupils without the use of mydriatics was accomplished by dark adaptation of the subject. Pictures were obtained without pupil constriction by the use of a high-speed strobe light. This method also has applications for clinical medicine.

  8. Field camera measurements of gradient and shim impulse responses using frequency sweeps.

    PubMed

    Vannesjo, S Johanna; Dietrich, Benjamin E; Pavan, Matteo; Brunner, David O; Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P

    2014-08-01

    Applications of dynamic shimming require high field fidelity, and characterizing the shim field dynamics is therefore necessary. Modeling the system as linear and time-invariant, the purpose of this work was to measure the impulse response function with optimal sensitivity. Frequency-swept pulses as inputs are analyzed theoretically, showing that the sweep speed is a key factor for the measurement sensitivity. By adjusting the sweep speed it is possible to achieve any prescribed noise profile in the measured system response. Impulse response functions were obtained for the third-order shim system of a 7 Tesla whole-body MR scanner. Measurements of the shim fields were done with a dynamic field camera, yielding also cross-term responses. The measured shim impulse response functions revealed system characteristics such as response bandwidth, eddy currents and specific resonances, possibly of mechanical origin. Field predictions based on the shim characterization were shown to agree well with directly measured fields, also in the cross-terms. Frequency sweeps provide a flexible tool for shim or gradient system characterization. This may prove useful for applications involving dynamic shimming by yielding accurate estimates of the shim fields and a basis for setting shim pre-emphasis. Copyright © 2013 Wiley Periodicals, Inc.
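    The core idea of the abstract, recovering an LTI system's impulse response by deconvolving a frequency-swept input from the measured output, can be sketched numerically. This is a toy, not the paper's actual processing chain: the shim system is replaced by a short FIR filter, an idealized complex chirp (whose DFT magnitude is flat for even lengths, so every frequency bin is well conditioned) stands in for the swept pulse, and all names are illustrative:

    ```python
    import cmath
    import math

    def dft(x):
        n = len(x)
        return [sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
                for m in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[m] * cmath.exp(2j * math.pi * m * k / n) for m in range(n)) / n
                for k in range(n)]

    n = 64
    # Frequency-swept (chirp) input; for even n this quadratic-phase
    # sequence has constant DFT magnitude, so no bin of X is near zero.
    x = [cmath.exp(1j * math.pi * k * k / n) for k in range(n)]

    h_true = [0.5, 0.3, 0.2] + [0.0] * (n - 3)   # toy "system": short FIR
    # "Measured" output: circular convolution of the input with the system
    y = [sum(h_true[j] * x[(k - j) % n] for j in range(n)) for k in range(n)]

    # Deconvolve: H(f) = Y(f) / X(f), then invert to get the impulse response
    H = [Y / X for Y, X in zip(dft(y), dft(x))]
    h_est = idft(H)
    ```

    In the paper's setting the sweep speed additionally shapes where the input deposits energy, and hence the noise profile of the estimated response; the sketch above only shows the deconvolution step itself.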

  9. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges.) The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two- and three-dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.

  10. Practical use of high-speed cameras for research and development within the automotive industry: yesterday and today

    NASA Astrophysics Data System (ADS)

    Steinmetz, Klaus

    1995-05-01

    Within the automotive industry, especially in the development and improvement of safety systems, there are many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam' and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high-speed cameras.

  11. Development of a Compact & Easy-to-Use 3-D Camera for High Speed Turbulent Flow Fields

    DTIC Science & Technology

    2013-12-05

    resolved. Also, in the case of a single camera system, the use of an aperture greatly reduces the amount of collected light. The combination of these...a study on wall-bounded turbulence [Sheng_2006]. Nevertheless, these techniques are limited to small measurement volumes, while maintaining a high...It has also been adapted to kHz rates using high-speed cameras for aeroacoustic studies (see Violato et al. [17, 18]). Tomo-PIV, however, has some

  12. UCam: universal camera controller and data acquisition system

    NASA Astrophysics Data System (ADS)

    McLay, S. A.; Bezawada, N. N.; Atkinson, D. C.; Ives, D. J.

    2010-07-01

    This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam), which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC), of San Diego, USA. UCam provides an alternative software solution programmed in C/C++ and Python that runs on a real-time Linux operating system to achieve critical speed performance for high time-resolution instrumentation. UCam is a server-based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user-friendly client application interface that has several features including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in Python or as text files. UCam has an application-centric design where custom applications for different types of detectors and readout modes can be developed, downloaded and executed on the ARC controller. The built-in de-multiplexer can be easily reconfigured to read out any number of channels for almost any type of detector. It also provides support for numerous sampling modes such as CDS, FOWLER, NDR and threshold-limited NDR. UCam has been developed over several years for use on many instruments such as the Wide Field Infra Red Camera (WFCAM) at UKIRT in Hawaii and the mid-IR imager/spectrometer UIST, and is also used on instruments at SUBARU, Gemini and Palomar.

  13. Brandaris 128 ultra-high-speed imaging facility: 10 years of operation, updates, and enhanced features

    NASA Astrophysics Data System (ADS)

    Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel

    2012-10-01

    The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.

  14. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  15. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  16. Portable telepathology: methods and tools.

    PubMed

    Alfaro, Luis; Roca, Ma José

    2008-07-15

    Telepathology is becoming easier to implement in most pathology departments. In fact, e-mail image transmission can be performed by almost any pathologist as a simplistic telepathology system. We tried to develop a way to improve communication among pathologists with the idea that the system should be affordable for everybody. We started from the premise that any pathology department would have microscopes and computers with an Internet connection, and selected a few elements to convert them into a telepathology station. The needs were reduced to a camera to collect images, a universal microscope adapter for the camera, a device to connect the camera to the computer, and software for remote image transmission. We found a microscope adapter (MaxView Plus) that allowed us to connect almost any consumer digital camera to any microscope. The video-out signal from the camera was sent to the computer through an Aver Media USB connector. Finally, we selected a group of portable applications that were assembled onto a USB memory device. Portable applications are computer programs that can be carried, generally on USB flash drives but also on any other portable device, and used on any (Windows) computer without installation. Moreover, when the device is unplugged, no personal data is left behind. We selected open-source applications and based the pathology image transmission on VLC Media Player due to its functionality as a streaming server, portability, and ease of use and configuration. Audio transmission was usually done over normal phone lines. We also employed alternative videoconferencing software, SightSpeed, for bi-directional image transmission from microscopes, and conventional cameras allowing visual communication and also image transmission from gross pathology specimens. All these elements allowed us to install and use a telepathology system in a few minutes, fully prepared for real-time image broadcast.

  17. Portable telepathology: methods and tools

    PubMed Central

    Alfaro, Luis; Roca, Ma José

    2008-01-01

    Telepathology is becoming easier to implement in most pathology departments. In fact, e-mail image transmission can be performed by almost any pathologist as a simplistic telepathology system. We tried to develop a way to improve communication among pathologists with the idea that the system should be affordable for everybody. We started from the premise that any pathology department would have microscopes and computers with an Internet connection, and selected a few elements to convert them into a telepathology station. The needs were reduced to a camera to collect images, a universal microscope adapter for the camera, a device to connect the camera to the computer, and software for remote image transmission. We found a microscope adapter (MaxView Plus) that allowed us to connect almost any consumer digital camera to any microscope. The video-out signal from the camera was sent to the computer through an Aver Media USB connector. Finally, we selected a group of portable applications that were assembled onto a USB memory device. Portable applications are computer programs that can be carried, generally on USB flash drives but also on any other portable device, and used on any (Windows) computer without installation. Moreover, when the device is unplugged, no personal data is left behind. We selected open-source applications and based the pathology image transmission on VLC Media Player due to its functionality as a streaming server, portability, and ease of use and configuration. Audio transmission was usually done over normal phone lines. We also employed alternative videoconferencing software, SightSpeed, for bi-directional image transmission from microscopes, and conventional cameras allowing visual communication and also image transmission from gross pathology specimens. All these elements allowed us to install and use a telepathology system in a few minutes, fully prepared for real-time image broadcast. PMID:18673507

  18. Motor vehicle injuries in Qatar: time trends in a rapidly developing Middle Eastern nation.

    PubMed

    Mamtani, Ravinder; Al-Thani, Mohammed H; Al-Thani, Al-Anoud Mohammed; Sheikh, Javaid I; Lowenfels, Albert B

    2012-04-01

    Despite their wealth and modern road systems, traffic injury rates in Middle Eastern countries are generally higher than those in Western countries. The authors examined traffic injuries in Qatar during 2000-2010, a period of rapid population growth, focusing on the impact of speed control cameras installed in 2007 on overall injury rates and mortality. During the period 2000-2006, prior to camera installation, the mean (SD) vehicular injury death rate per 100,000 was 19.9±4.1. From 2007 to 2010, the mean (SD) vehicular death rates were significantly lower: 14.7±1.5 (p=0.028). Non-fatal severe injury rates also declined, but mild injury rates increased, perhaps because of increased traffic congestion and improved notification. It is possible that speed cameras decreased speeding enough to affect the death rate, without affecting overall injury rates. These data suggest that in a rapidly growing Middle Eastern country, photo enforcement (speed) cameras can be an important component of traffic control, but other measures will be required for maximum impact.

  19. Motor vehicle injuries in Qatar: time trends in a rapidly developing Middle Eastern nation

    PubMed Central

    Al-Thani, Mohammed H; Al-Thani, Al-Anoud Mohammed; Sheikh, Javaid I; Lowenfels, Albert B

    2011-01-01

    Despite their wealth and modern road systems, traffic injury rates in Middle Eastern countries are generally higher than those in Western countries. The authors examined traffic injuries in Qatar during 2000–2010, a period of rapid population growth, focusing on the impact of speed control cameras installed in 2007 on overall injury rates and mortality. During the period 2000–2006, prior to camera installation, the mean (SD) vehicular injury death rate per 100 000 was 19.9±4.1. From 2007 to 2010, the mean (SD) vehicular death rates were significantly lower: 14.7±1.5 (p=0.028). Non-fatal severe injury rates also declined, but mild injury rates increased, perhaps because of increased traffic congestion and improved notification. It is possible that speed cameras decreased speeding enough to affect the death rate, without affecting overall injury rates. These data suggest that in a rapidly growing Middle Eastern country, photo enforcement (speed) cameras can be an important component of traffic control, but other measures will be required for maximum impact. PMID:21994881

  20. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular; its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object-detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.

  1. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    PubMed

    Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  2. Low-Cost Mobile Phone Microscopy with a Reversed Mobile Phone Camera Lens

    PubMed Central

    Fletcher, Daniel A.

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples. PMID:24854188

  3. Video image processing greatly enhances contrast, quality, and speed in polarization-based microscopy

    PubMed Central

    1981-01-01

    Video cameras with contrast and black level controls can yield polarized light and differential interference contrast microscope images with unprecedented image quality, resolution, and recording speed. The theoretical basis and practical aspects of video polarization and differential interference contrast microscopy are discussed and several applications in cell biology are illustrated. These include: birefringence of cortical structures and beating cilia in Stentor, birefringence of rotating flagella on a single bacterium, growth and morphogenesis of echinoderm skeletal spicules in culture, ciliary and electrical activity in a balancing organ of a nudibranch snail, and acrosomal reaction in activated sperm. PMID:6788777

  4. Application of high-speed photography to chip refining

    NASA Astrophysics Data System (ADS)

    Stationwala, Mustafa I.; Miller, Charles E.; Atack, Douglas; Karnis, A.

    1991-04-01

    Several high speed photographic methods have been employed to elucidate the mechanistic aspects of producing mechanical pulp in a disc refiner. Material flow patterns of pulp in a refiner were previously recorded by means of a HYCAM camera and continuous lighting system which provided cine pictures at up to 10,000 pps. In the present work an IMACON camera was used to obtain several series of high resolution, high speed photographs, each photograph containing an eight-frame sequence obtained at a framing rate of 100,000 pps. These high-resolution photographs made it possible to identify the nature of the fibrous material trapped on the bars of the stationary disc. Tangential movement of fibre flocs, during the passage of bars on the rotating disc over bars on the stationary disc, was also observed on the stator bars. In addition, using a cinestroboscopic technique a large number of high resolution pictures were taken at three different positions of the rotating disc relative to the stationary disc. These pictures were computer analyzed, statistically, to determine the fractional coverage of the bars of the stationary disc with pulp. Information obtained from these studies provides new insights into the mechanism of the refining process.

  5. A pilot study using a novel pyrotechnically driven prototype applicator for epidermal powder immunization in piglets.

    PubMed

    Engert, Julia; Anamur, Cihad; Engelke, Laura; Fellner, Christian; Lell, Peter; Henke, Stefan; Stadler, Julia; Zöls, Susanne; Ritzmann, Mathias; Winter, Gerhard

    2018-04-20

    Epidermal powder immunization (EPI) is an alternative technique to the classical immunization route using needle and syringe. In this work, we present the results of an in vivo pilot study in piglets using a dried influenza model vaccine which was applied by EPI using a novel pyrotechnically driven applicator. A liquid influenza vaccine (Pandemrix®) was first concentrated by tangential flow filtration and hemagglutinin content was determined by RP-HPLC. The liquid formulation was then transformed into a dry powder by collapse freeze-drying and subsequent cryo-milling. The vaccine powder was attached to a membrane of a novel pyrotechnical applicator using oily adjuvant components. Upon actuation of the applicator, particles were accelerated to high speed as determined by a high-speed camera setup. Piglets were immunized twice using either the novel pyrotechnical applicator or classical intramuscular injection. Blood samples of the animals were collected at various time points and analyzed by enzyme-linked immunosorbent assay. Our pilot study shows that acceleration of a dried vaccine powder to supersonic speed using the pyrotechnical applicator is possible and that the speed and impact of the particles is sufficient to breach the stratum corneum of piglet skin. Importantly, the administration of the dry vaccine powder resulted in measurable anti-H1N1 antibody titres in vivo. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Fiber optic interferometry for industrial process monitoring and control applications

    NASA Astrophysics Data System (ADS)

    Marcus, Michael A.

    2002-02-01

    Over the past few years we have been developing applications for a high-resolution (sub-micron accuracy) fiber optic coupled dual Michelson interferometer-based instrument. It is being utilized in a variety of applications including monitoring liquid layer thickness uniformity on coating hoppers, film base thickness uniformity measurement, digital camera focus assessment, optical cell path length assessment, and imager and wafer surface profile mapping. The instrument includes both coherent and non-coherent light sources, custom application-dependent optical probes and sample interfaces, a Michelson interferometer, custom electronics, a Pentium-based PC with data acquisition cards, and LabWindows/CVI or LabVIEW based application-specific software. This paper describes the development evolution of this instrument platform and applications, highlighting robust instrument design, hardware, software, and user interface development. The talk concludes with a discussion of a new high-speed instrument configuration, which can be utilized for high-speed surface profiling and as an on-line web thickness gauge.

  7. Performances Of The New Streak Camera TSN 506

    NASA Astrophysics Data System (ADS)

    Nodenot, P.; Imhoff, C.; Bouchu, M.; Cavailler, C.; Fleurot, N.; Launspach, J.

    1985-02-01

    The number of streak cameras used in research laboratories has increased continuously during the past years. The growth of this type of equipment is due to the development of various measurement techniques in the nanosecond and picosecond range. Among the many different applications, we would mention detonics chronometry measurement, measurement of the speed of matter by means of Doppler-laser interferometry, and laser and plasma diagnostics associated with laser-matter interaction. The old range of cameras has been remodelled, in order to standardize and rationalize the production of ultrafast cinematography instruments, to produce a single camera known as the TSN 506. The TSN 506 is composed of an electronic control unit built around the image converter tube; it can be fitted with a nanosecond sweep circuit covering the whole range from 1 ms to 200 ns, or with a picosecond circuit providing streak durations from 1 to 100 ns. We describe the main electronic and opto-electronic performance of the TSN 506 operating in these two temporal ranges.

  8. Time-to-impact sensors in robot vision applications based on the near-sensor image processing concept

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2012-03-01

    Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimation of the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need to perform image correlations. Going back to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity, but also surprisingly high performance.

  9. Stratified charge rotary engine - Internal flow studies at the MSU engine research laboratory

    NASA Technical Reports Server (NTRS)

    Hamady, F.; Kosterman, J.; Chouinard, E.; Somerton, C.; Schock, H.; Chun, K.; Hicks, Y.

    1989-01-01

    High-speed visualization and laser Doppler velocimetry (LDV) systems consisting of a 40-watt copper vapor laser, mirrors, cylindrical lenses, a high speed camera, a synchronization timing system, and a particle generator were developed for the study of the fuel spray-air mixing flow characteristics within the combustion chamber of a motored rotary engine. The laser beam is focused down to a sheet approximately 1 mm thick, passing through the combustion chamber and illuminates smoke particles entrained in the intake air. The light scattered off the particles is recorded by a high speed rotating prism camera. Movies are made showing the air flow within the combustion chamber. The results of a movie showing the development of a high-speed (100 Hz) high-pressure (68.94 MPa, 10,000 psi) fuel jet are also discussed. The visualization system is synchronized so that a pulse generated by the camera triggers the laser's thyratron.

  10. High-speed and high-resolution quantitative phase imaging with digital-micromirror device-based illumination (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie; Jin, Di; Yaqoob, Zahid; So, Peter T. C.

    2017-02-01

    Due to the large number of available mirrors, their patterning speed, low cost, and compactness, digital-micromirror devices (DMDs) have been extensively used in biomedical imaging systems. Recently, DMDs have been brought to the quantitative phase microscopy (QPM) field to achieve synthetic-aperture imaging and tomographic imaging. Last year, our group demonstrated the use of a DMD for QPM, where the phase retrieval is based on a recently developed Fourier ptychography algorithm. In our previous system, the illumination angle was varied by coding the aperture plane of the illumination system, which has a low efficiency in utilizing the laser power. In our new DMD-based QPM system, we use Lee holograms, conjugate to the sample plane, to change the illumination angles with much higher power efficiency. Multiple-angle illumination can also be achieved with this method. With this versatile system, we can achieve FPM-based high-resolution phase imaging with 250 nm lateral resolution using the Rayleigh criterion. Due to the use of a powerful laser, the imaging speed would only be limited by the camera acquisition speed. With a fast camera, we expect to achieve close to 100 fps phase imaging speed, which has not been achieved in current FPM imaging systems. By adding a reference beam, we also expect to achieve synthetic-aperture imaging while directly measuring the phase of the sample fields. This would reduce the phase-retrieval processing time to allow for real-time imaging applications in the future.

  11. Self calibrating monocular camera measurement of traffic parameters.

    DOT National Transportation Integrated Search

    2009-12-01

    This proposed project will extend the work of previous projects that have developed algorithms and software to measure traffic speed under adverse conditions using un-calibrated cameras. The present implementation uses the WSDOT CCTV cameras moun...

  12. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity needing less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system and their forecast cost structure is presented.

  13. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    ERIC Educational Resources Information Center

    Lee, Victor R.

    2015-01-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…

  14. Vapour Pressure and Adiabatic Cooling from Champagne: Slow-Motion Visualization of Gas Thermodynamics

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2012-01-01

    The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…

  15. The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop

    ERIC Educational Resources Information Center

    Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye

    2012-01-01

    We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…

  16. 78 FR 76861 - Body-Worn Cameras for Criminal Justice Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-19

    ..., Various). 3. Maximum Video Resolution of the BWC (e.g., 640x480, 1080p). 4. Recording Speed of the BWC (e... Photos. 7. Whether the BWC embeds a Time/Date Stamp in the recorded video. 8. The Field of View of the...-person video viewing. 12. The Audio Format of the BWC (e.g., MP2, AAC). 13. Whether the BWC contains...

  17. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  18. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 µs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
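The displacement-over-delay velocimetry with row-wise digital binning described above can be sketched as follows. This is a minimal illustration, not the authors' code: the pixel pitch is an assumed value, and only the 65 µs inter-frame delay comes from the abstract.

```python
import numpy as np

PIXEL_PITCH_M = 20e-6   # assumed projected pixel size at the measurement plane
DT_S = 65e-6            # inter-frame delay quoted in the abstract

def bin_rows(img, factor=8):
    """Row-wise digital binning: sum groups of adjacent rows in post-processing."""
    h = (img.shape[0] // factor) * factor
    return img[:h].reshape(-1, factor, img.shape[1]).sum(axis=1)

def centroid_x(img):
    """Intensity-weighted centroid of the tagged line along x, in pixels."""
    cols = img.sum(axis=0)
    x = np.arange(img.shape[1])
    return (cols * x).sum() / cols.sum()

def velocity(frame1, frame2, dt=DT_S, pitch=PIXEL_PITCH_M):
    """Velocity from the tagged line's shift between two exposures."""
    dx_px = centroid_x(bin_rows(frame2)) - centroid_x(bin_rows(frame1))
    return dx_px * pitch / dt
```

The single-shot precision the paper reports would then be the standard deviation of `velocity` over several hundred shot pairs, e.g. `np.std(velocities)`.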

  19. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant speed, or increasing or decreasing. Other parameters include pan, tilt, slide, raise or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses are provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.

  20. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and to quantify the aerodynamic trajectories of the debris.

  1. Novel hyperspectral prediction method and apparatus

    NASA Astrophysics Data System (ADS)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science-based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal(TM) software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine(TM) module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
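The signal/noise split described above resembles a classical matched filter, and prediction then reduces to one dot product per pixel spectrum, which is what makes sustained camera-rate output feasible. The sketch below uses the generic matched-filter estimator b = Σ⁻¹s / (sᵀΣ⁻¹s); it is a plausible reading of the approach, not necessarily the exact SBC formulation.

```python
import numpy as np

def matched_filter(signal, noise_cov):
    """Regression vector b = Σ⁻¹ s / (sᵀ Σ⁻¹ s).

    'signal' is the spectroscopically measured analyte spectrum and
    'noise_cov' the statistically estimated spectral noise covariance,
    mirroring the signal/noise split described for SBC-type calibration.
    """
    w = np.linalg.solve(noise_cov, signal)
    return w / (signal @ w)

def predict(spectra, b, b0=0.0):
    """One dot product per pixel spectrum -> suitable for real-time rates.

    spectra: (pixels, bands) array, e.g. one line-scan acquisition.
    """
    return spectra @ b + b0
```

With this normalization, a pixel containing exactly one unit of the pure signal predicts 1.0, which is the unbiasedness property a calibration vector should satisfy.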

  2. A dual-band adaptor for infrared imaging.

    PubMed

    McLean, A G; Ahn, J-W; Maingi, R; Gray, T K; Roquemore, A L

    2012-05-01

    A novel imaging adaptor providing the capability to extend a standard single-band infrared (IR) camera into a two-color or dual-band device has been developed for application to high-speed IR thermography on the National Spherical Tokamak Experiment (NSTX). Temperature measurement with two-band infrared imaging has the advantage of being mostly independent of surface emissivity, which may vary significantly in the liquid lithium divertor installed on NSTX as compared to that of an all-carbon first wall. In order to take advantage of the high-speed capability of the existing IR camera at NSTX (1.6-6.2 kHz frame rate), a commercial visible-range optical splitter was extensively modified to operate in the medium wavelength and long wavelength IR. This two-band IR adapter utilizes a dichroic beamsplitter, which reflects 4-6 μm wavelengths and transmits 7-10 μm wavelength radiation, each with >95% efficiency and projects each IR channel image side-by-side on the camera's detector. Cutoff filters are used in each IR channel, and ZnSe imaging optics and mirrors optimized for broadband IR use are incorporated into the design. In-situ and ex-situ temperature calibration and preliminary data of the NSTX divertor during plasma discharges are presented, with contrasting results for dual-band vs. single-band IR operation.

  3. A portable high-speed camera system for vocal fold examinations.

    PubMed

    Hertegård, Stellan; Larsson, Hans

    2014-11-01

In this article, we present a new portable low-cost system for high-speed examinations of the vocal folds. Analysis of glottal vibratory parameters from the high-speed recordings is compared with videostroboscopic recordings. The high-speed system is built around a Fastec 1 monochrome camera, which is used with newly developed software, High-Speed Studio (HSS). The HSS has options for video/image recording, contains a database, and has a set of analysis options. The Fastec/HSS system has been used clinically since 2011 in more than 2000 patient examinations and recordings. The Fastec 1 camera has sufficient time resolution (≥4000 frames/s) and light sensitivity (ISO 3200) to produce images for detailed analyses of parameters pertinent to vocal fold function. The camera can be used with both rigid and flexible endoscopes. The HSS software includes options for analyses of glottal vibrations, such as kymogram, phase asymmetry, glottal area variation, open and closed phase, and angle of vocal fold abduction. It can also be used for separate analysis of the left and right vocal fold movements, including maximum speed during opening and closing, a parameter possibly related to vocal fold elasticity. A blinded analysis of 32 patients with various voice disorders examined with both the Fastec/HSS system and videostroboscopy showed that the high-speed recordings were significantly better for the analysis of glottal parameters (eg, mucosal wave and vibration asymmetry). The monochrome high-speed system can be used in daily clinical work within normal clinical time limits for patient examinations. A detailed analysis can be made of voice disorders and laryngeal pathology at a relatively low cost.

  4. Modification of a Kowa RC-2 fundus camera for self-photography without the use of mydriatics. [for blood vessel monitoring during space flight

    NASA Technical Reports Server (NTRS)

    Philpott, D. E.; Harrison, G.; Turnbill, C.; Bailey, P. F.

    1979-01-01

Research on retinal circulation during space flight required the development of a simple technique to provide self-monitoring of blood vessel changes in the fundus without the use of mydriatics. A Kowa RC-2 fundus camera was modified for self-photography by the use of a bite plate for positioning and cross hairs for focusing the subject's retina relative to the film plane. Dilation of the pupils without the use of mydriatics was accomplished by dark-adaptation of the subject. Pictures were obtained without pupil constriction by the use of a high-speed strobe light. This method also has applications in clinical medicine.

  5. Geometric facial comparisons in speed-check photographs.

    PubMed

    Buck, Ursula; Naether, Silvio; Kreutz, Kerstin; Thali, Michael

    2011-11-01

In many cases, it is not possible to hold motorists to account for considerably exceeding the speed limit, because they deny being the driver in the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens and from a different angle than the speed-check photo. Taking a comparison photograph with exactly the same camera setup is almost impossible; therefore, only an imprecise comparison of the individual facial features is possible. The geometry and position of each facial feature, for example the distance between the eyes or the positions of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitization, and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. Thus, the influence of the focal length and the distortion of the objective lens are eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even in cases of low-quality images, or when the face of the driver is partly hidden, this method delivers good results. This new method, geometric comparison, is evaluated and validated in a prepared study described in this article.

  6. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
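A bare-bones version of the simple mean-shift tracker evaluated above might look like this. It is a hypothetical sketch: the per-pixel weight map (e.g. a hand-likelihood map derived from range data), window size, and convergence threshold are illustrative assumptions.

```python
import numpy as np

def mean_shift(weights, start, half, n_iter=30, eps=0.1):
    """Track a blob by repeatedly shifting a window to its local centroid.

    weights: 2-D array of per-pixel weights; start: (row, col) initial
    guess; half: half-size of the square search window.
    """
    cy, cx = float(start[0]), float(start[1])
    for _ in range(n_iter):
        y0 = max(int(round(cy)) - half, 0)
        x0 = max(int(round(cx)) - half, 0)
        win = weights[y0:y0 + 2 * half + 1, x0:x0 + 2 * half + 1]
        total = win.sum()
        if total == 0:
            break  # nothing to track inside the window
        ys, xs = np.indices(win.shape)
        ny = y0 + (ys * win).sum() / total
        nx = x0 + (xs * win).sum() / total
        if abs(ny - cy) < eps and abs(nx - cx) < eps:
            break  # converged
        cy, cx = ny, nx
    return cy, cx
```

Run per frame, seeding each frame with the previous frame's result; the per-frame cost is a few window sums, which is why the method suits real-time tracking.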

  7. Swimming speed alteration of Artemia sp. and Brachionus plicatilis as a sub-lethal behavioural end-point for ecotoxicological surveys.

    PubMed

    Garaventa, Francesca; Gambardella, Chiara; Di Fino, Alessio; Pittore, Massimiliano; Faimali, Marco

    2010-03-01

In this study, we investigated the possibility of improving a new behavioural bioassay (the Swimming Speed Alteration test, SSA test) using larvae of marine cyst-forming organisms: the brine shrimp Artemia sp. and the rotifer Brachionus plicatilis. Swimming speed was investigated as a behavioural end-point for application in ecotoxicology studies. A first experiment to analyse the linear swimming speed of the two organisms was performed to verify the applicability of the video-camera tracking system, here referred to as the Swimming Behavioural Recorder (SBR). A second experiment was performed, exposing organisms to different toxic compounds (zinc pyrithione, Macrotrol MT-200, and Eserine). Swimming speed alteration was analyzed together with mortality. The results of the first experiment indicate that the SBR is a suitable tool to detect the linear swimming speed of the two organisms, since the values obtained (3.05 mm s(-1) for Artemia sp. and 0.62 mm s(-1) for B. plicatilis) are in accordance with other studies using the same organisms. Toxicity test results clearly indicate that the swimming speed of Artemia sp. and B. plicatilis is a valid behavioural end-point to detect stress at sub-lethal toxic substance concentrations. Indeed, alterations in swimming speed have been detected at toxic compound concentrations as low as 0.1-5% of their LC(50) values. In conclusion, the SSA test with B. plicatilis and Artemia sp. can be a good integrated behavioural end-point for application in marine ecotoxicology and environmental monitoring programs.
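Computing a linear swimming speed from a tracked trajectory, as a video-tracking system like the SBR does, reduces to summing inter-frame displacements. A minimal sketch, with hypothetical function names and assuming one (x, y) sample per frame in millimetres:

```python
import math

def swimming_speed(track_mm, fps):
    """Mean linear swimming speed (mm/s) from an (x, y) trajectory
    sampled once per video frame."""
    dist = sum(math.dist(a, b) for a, b in zip(track_mm, track_mm[1:]))
    return dist * fps / (len(track_mm) - 1)

def speed_alteration_pct(treated, control):
    """Per-cent swimming speed alteration of a treated group vs. control,
    the kind of sub-lethal end-point the SSA test reports."""
    return 100.0 * (treated - control) / control
```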

  8. Measuring the circular motion of small objects using laser stroboscopic images.

    PubMed

    Wang, Hairong; Fu, Y; Du, R

    2008-01-01

Measuring the circular motion of a small object, including its displacement, speed, and acceleration, is a challenging task. This paper presents a new method for measuring repetitive and/or nonrepetitive, constant-speed and/or variable-speed circular motion using laser stroboscopic images. Under stroboscopic illumination, each image taken by an ordinary camera records multiple outlines of an object in motion; hence, processing the stroboscopic image can extract the motion information. We built an experimental apparatus consisting of a laser as the light source, a stereomicroscope to magnify the image, and a normal complementary metal oxide semiconductor camera to record the image. As the object moves, the stroboscopic illumination generates a speckle pattern on the object that can be recorded by the camera and analyzed by a computer. Experimental results indicate that the stroboscopic imaging is stable under various conditions. Moreover, the characteristics of the motion, including the displacement, velocity, and acceleration, can be calculated based on the width of the speckle marks, the illumination intensity, the duty cycle, and the sampling frequency. Compared with the popular high-speed camera method, the presented method may achieve the same measuring accuracy, but at much reduced cost and complexity.
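One plausible reading of the width/duty-cycle/frequency relationship is that each speckle mark is smeared out during a single strobe on-time, so speed = mark width / on-time. The sketch below encodes that assumption; it is an illustration of the idea, not the paper's exact formula.

```python
def speed_from_speckle(mark_width_mm, strobe_freq_hz, duty_cycle):
    """Object speed (mm/s) from the smear length of one speckle mark.

    Assumes each mark is traced out during one strobe on-time
    t_on = duty_cycle / f, so v = width / t_on. This is a hypothetical
    reconstruction of the relationship the paper describes.
    """
    t_on = duty_cycle / strobe_freq_hz
    return mark_width_mm / t_on
```

Differencing the speeds of successive marks at the strobe period would then give an acceleration estimate.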

  9. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
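NCC template matching restricted to a local search window, as described above, can be sketched as follows. This is a minimal pixel-accurate illustration; the paper's implementation adds subpixel refinement, which is omitted here.

```python
import numpy as np

def ncc(patch, template):
    """Zero-normalized cross-correlation of two equal-sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom else 0.0

def local_search(frame, template, prev, radius=5):
    """Search only a small neighbourhood around the previous match.

    Restricting the search to +/- radius pixels around the last position
    is what makes frame-rate displacement extraction feasible.
    """
    th, tw = template.shape
    best, best_pos = -2.0, prev
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = prev[0] + dy, prev[1] + dx
            if 0 <= y <= frame.shape[0] - th and 0 <= x <= frame.shape[1] - tw:
                score = ncc(frame[y:y + th, x:x + tw], template)
                if score > best:
                    best, best_pos = score, (y, x)
    return best_pos, best
```

The tracked position per frame, minus the initial position, gives the vibration displacement signal; differentiating it yields velocity and acceleration.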

  10. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated realizing low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capturing of a full HD depth image with depth accuracy of mm scale, the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
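Continuous-wave TOF systems of this kind typically recover depth from four phase-stepped correlation samples. The sketch below shows the generic four-sample estimator under that assumption; the device's actual demodulation scheme is not detailed in the abstract.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    """Depth from four correlation samples of a CW time-of-flight pixel.

    a_k are samples taken with the demodulation (shutter) waveform
    delayed by k * 90 degrees; f_mod is the 20 MHz modulation used by
    the system above. The round trip gives depth = c * phi / (4 pi f).
    """
    phi = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phi / (4 * math.pi * f_mod)
```

At 20 MHz the unambiguous range is c / (2 f_mod) ≈ 7.5 m, which is why the modulation frequency is a key design parameter.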

  11. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10^-5 s is used with a pixelated phase-mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface it is possible to make the reference and object beams, with orthogonal polarization states, coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibrations of speakers and turbulence of air flow were successfully measured at 7,000 frames/sec.
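With a pixelated phase mask, each 2×2 superpixel samples the interferogram at four phase shifts, and a four-bucket arctangent recovers the wrapped phase in a single frame. A minimal sketch; the superpixel layout shown is illustrative, as real masks vary by sensor.

```python
import numpy as np

def phase_from_superpixels(img):
    """Wrapped phase map from a pixelated phase-mask interferogram.

    Assumes each 2x2 superpixel samples the interference at 0, 90, 180
    and 270 degrees, laid out here as [[0, 90], [270, 180]] (an
    illustrative assumption). Four-bucket formula:
    phi = atan2(I270 - I90, I0 - I180).
    """
    i0   = img[0::2, 0::2].astype(float)
    i90  = img[0::2, 1::2].astype(float)
    i270 = img[1::2, 0::2].astype(float)
    i180 = img[1::2, 1::2].astype(float)
    return np.arctan2(i270 - i90, i0 - i180)
```

Because all four samples come from one exposure, the phase map is immune to vibration between phase steps, which is what enables the 7,000 frames/s dynamic measurements above.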

  12. Speed of sound and photoacoustic imaging with an optical camera based ultrasound detection system

    NASA Astrophysics Data System (ADS)

    Nuster, Robert; Paltauf, Guenther

    2017-07-01

CCD camera based optical ultrasound detection is a promising alternative approach for high resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) in the image reconstruction algorithm. Hence, this work presents the idea and a first implementation of adding speed-of-sound imaging to a previously developed camera-based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  13. Photography Foundations: The Student Photojournalist.

    ERIC Educational Resources Information Center

    Glowacki, Joseph W.

    Designed to aid student publications photographers in taking effective photographs, this publication provides discussions relating to the following areas: a publications photographer's self-image, the camera, camera handling, using the adjustable camera, the light meter, depth of field, shutter speeds and action pictures, lenses for publications…

  14. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  15. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  16. Application of polarization in high speed, high contrast inspection

    NASA Astrophysics Data System (ADS)

    Novak, Matthew J.

    2017-08-01

    Industrial optical inspection often requires high speed and high throughput of materials. Engineers use a variety of techniques to handle these inspection needs. Some examples include line scan cameras, high speed multi-spectral and laser-based systems. High-volume manufacturing presents different challenges for inspection engineers. For example, manufacturers produce some components in quantities of millions per month, per week or even per day. Quality control of so many parts requires creativity to achieve the measurement needs. At times, traditional vision systems lack the contrast to provide the data required. In this paper, we show how dynamic polarization imaging captures high contrast images. These images are useful for engineers to perform inspection tasks in some cases where optical contrast is low. We will cover basic theory of polarization. We show how to exploit polarization as a contrast enhancement technique. We also show results of modeling for a polarization inspection application. Specifically, we explore polarization techniques for inspection of adhesives on glass.

  17. Thermographic measurements of high-speed metal cutting

    NASA Astrophysics Data System (ADS)

    Mueller, Bernhard; Renz, Ulrich

    2002-03-01

    Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To obtain images without motion blur, the integration times were reduced to a few microseconds. Since high tool wear influences the measured temperatures, a set-up was realized that enables small cutting lengths. Only single images were recorded, because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera that was used. To expose the camera when the rotating tool is in the middle of the camera image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns was realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary; it was obtained with a special close-up lens allowing a resolution of approximately 45 microns. The experimental set-up will be described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.

  18. Video-rate or high-precision: a flexible range imaging camera

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, and hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixels), high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat-signal frequency selection.
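    The per-pixel phase-to-range recovery described in this record can be sketched in a few lines. The following is a minimal, hypothetical illustration only: the modulation frequency, sample count and signal/sign conventions are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

C = 3.0e8        # speed of light, m/s
F_MOD = 30.0e6   # hypothetical modulation frequency, Hz

def range_image(samples):
    """Estimate per-pixel range from N >= 4 intensity samples spanning one
    beat cycle. `samples` has shape (N, H, W)."""
    n = samples.shape[0]
    # First-harmonic DFT coefficient per pixel; its angle is the beat phase.
    basis = np.exp(-2j * np.pi * np.arange(n) / n)
    coeff = np.tensordot(basis, samples, axes=(0, 0))
    phase = np.mod(np.angle(coeff), 2.0 * np.pi)
    # Round-trip phase-to-distance conversion for modulated light.
    return C * phase / (4.0 * np.pi * F_MOD)
```

    Using more samples per beat cycle (n > 4) simply adds terms to the same first-harmonic estimate, which is one way to see why it can improve linearity over four-sample quadrature detection.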

  19. Optimising Camera Traps for Monitoring Small Mammals

    PubMed Central

    Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790

  20. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
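    The centroiding and intensity-to-peak-height correlation described in this record can be illustrated with a simplified sketch. The thresholding, 4-connected flood fill and rank-based pairing below are illustrative assumptions, not the authors' real-time algorithms.

```python
import numpy as np
from collections import deque

def centroid_spots(frame, thresh):
    """Find connected bright regions in a camera frame and return a list of
    ((centroid_y, centroid_x), total_intensity) tuples, one per ion spot."""
    mask = frame > thresh
    seen = np.zeros_like(mask, dtype=bool)
    spots = []
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        # BFS flood fill over 4-connected above-threshold pixels.
        queue, pix = deque([(i, j)]), []
        seen[i, j] = True
        while queue:
            y, x = queue.popleft()
            pix.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        w = np.array([frame[p] for p in pix], dtype=float)
        ys = np.array([p[0] for p in pix], dtype=float)
        xs = np.array([p[1] for p in pix], dtype=float)
        spots.append(((ys @ w / w.sum(), xs @ w / w.sum()), w.sum()))
    return spots

def assign_hits(spots, tof_peaks):
    """Pair camera spots with (time, height) TOF peaks by intensity rank:
    the brightest spot is matched with the tallest peak, and so on."""
    s = sorted(spots, key=lambda sp: -sp[1])
    p = sorted(tof_peaks, key=lambda pk: -pk[1])
    return [(sp[0], pk[0]) for sp, pk in zip(s, p)]
```

    In a real pipeline the pairing would also have to handle unequal spot/peak counts and near-degenerate intensities; the rank-based rule here only conveys the core idea of the multi-hit correlation.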

  1. High-speed Imaging of Global Surface Temperature Distributions on Hypersonic Ballistic-Range Projectiles

    NASA Technical Reports Server (NTRS)

    Wilder, Michael C.; Reda, Daniel C.

    2004-01-01

    The NASA-Ames ballistic range provides a unique capability for aerothermodynamic testing of configurations in hypersonic, real-gas, free-flight environments. The facility can closely simulate conditions at any point along practically any trajectory of interest experienced by a spacecraft entering an atmosphere. Sub-scale models of blunt atmospheric entry vehicles are accelerated by a two-stage light-gas gun to speeds as high as 20 times the speed of sound and fly ballistic trajectories through a 24 m long vacuum-rated test section. The test-section pressure (effective altitude), the launch velocity of the model (flight Mach number), and the test-section working gas (planetary atmosphere) are independently variable. The model travels at hypersonic speeds through a quiescent test gas, creating a strong bow-shock wave and real-gas effects that closely match conditions achieved during actual atmospheric entry. The challenge with ballistic range experiments is to obtain quantitative surface measurements from a model traveling at hypersonic speeds. The models are relatively small (less than 3.8 cm in diameter), which limits the spatial resolution possible with surface-mounted sensors. Furthermore, since the model is in flight, surface-mounted sensors require some form of on-board telemetry, which must survive the massive acceleration loads experienced during launch (up to 500,000 gravities). Finally, the model and any on-board instrumentation are destroyed at the terminal wall of the range. For these reasons, optical measurement techniques are the most practical means of acquiring data. High-speed thermal imaging has been employed in the Ames ballistic range to measure global surface temperature distributions and to visualize the onset of transition to turbulent flow on the forward regions of hypersonic blunt bodies. Both visible-wavelength and infrared high-speed cameras are in use. The visible-wavelength cameras are intensified CCD imagers capable of integration times as short as 2 ns. The infrared camera uses an indium antimonide (InSb) sensor in the 3 to 5 micron band and is capable of integration times as short as 500 ns. The projectiles are imaged nearly head-on using expendable mirrors offset slightly from the flight path. The paper will discuss the application of high-speed digital imaging systems in the NASA-Ames hypersonic ballistic range and the challenges encountered when applying these systems. Example images of the thermal radiation from the blunt noses of projectiles flying at nearly 14 times the speed of sound will be given.

  2. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a random-access, real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between the camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host-computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card.
These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  3. Cavitation induced by high speed impact of a solid surface on a liquid jet

    NASA Astrophysics Data System (ADS)

    Farhat, Mohamed; Tinguely, Marc; Rouvinez, Mathieu

    2009-11-01

    A solid surface may suffer severe erosion if it impacts a liquid jet at high speed, yet the physics behind the erosion process remains unclear. In the present study, we have investigated the impact of a gun bullet on a laminar water jet with the help of a high-speed camera. The bullet has a flat front and an 11 mm diameter, which is half of the jet diameter. The impact speed was varied between 200 and 500 m/s. Immediately after the impact, a systematic shock wave and high-speed jetting were observed. As the compression waves reflect off the jet boundary, a spectacular number of vapour cavities are generated within the jet. Depending on the bullet velocity, these cavities may grow and collapse violently on the bullet surface, with a risk of cavitation erosion. We strongly believe that this transient cavitation is the main cause of the erosion observed in many industrial applications such as Pelton turbines.

  4. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far-infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec, and consists of a focal-plane dewar, a compact camera-head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal-plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  5. Monitoring system for phreatic eruptions and thermal behavior on Poás volcano hyperacidic lake, with permanent IR and HD cameras

    NASA Astrophysics Data System (ADS)

    Ramirez, C. J.; Mora-Amador, R. A., Sr.; Alpizar Segura, Y.; González, G.

    2015-12-01

    Volcano monitoring has been an expanding field in recent decades, and one of the rising techniques involving new technology is digital video surveillance, together with the automated software that comes with it. Given the budget and some on-site facilities, it is now possible to set up a real-time network of high-definition video cameras, some of them with special features such as infrared, thermal or ultraviolet imaging, which can ease (or complicate) the analysis of volcanic phenomena such as lava eruptions, phreatic eruptions, plume speed, lava flows and the closing or opening of vents, to mention just a few of the many applications of these cameras. We present the methodology of installing at Poás volcano a real-time system for processing and storing HD and thermal images and video; the process of installing and operating the HD and IR cameras, towers, solar panels and radios needed to transmit the data from a volcano located in the tropics; and which volcanic areas are our targets, and why. We also describe the hardware and software we consider necessary to carry out the project. Finally, we show some early examples of upwelling areas on the Poás volcano hyperacidic lake and their relation to phreatic eruptions from the lake, data on the increasing temperature of an old dome wall and the sudden wall explosions, and the use of IR video for measuring plume speed and contour for use in combination with DOAS or FTIR measurements.

  6. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and is strongly robust against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
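    Once the four-mirror adapter has produced two calibrated virtual cameras, 3D reconstruction reduces to ordinary stereo triangulation. A minimal linear (DLT) sketch with hypothetical projection matrices and ideal, noise-free image coordinates might look like this; real DIC systems refine this with subpixel correlation and calibration models not shown here.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices of the two virtual cameras;
    x1, x2 are the (u, v) image coordinates of the same surface point."""
    # Each view contributes two homogeneous linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The point is the null vector of A (smallest singular vector).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

    With noisy correlation data the same construction is solved in a least-squares sense, and the two projection matrices come from the system's camera calibration and mirror geometry.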

  7. Driving behaviour responses to a moose encounter, automatic speed camera, wildlife warning sign and radio message determined in a factorial simulator study.

    PubMed

    Jägerbrand, Annika K; Antonson, Hans

    2016-01-01

    In a driving simulator study, driving behaviour responses (speed and deceleration) to encountering a moose, automatic speed camera, wildlife warning sign and radio message, with or without a wildlife fence and in dense forest or open landscape, were analysed. The study consisted of a factorial experiment that examined responses to factors singly and in combination over 9-km road stretches driven eight times by 25 participants (10 men, 15 women). The aims were to: determine the most effective animal-vehicle collision (AVC) countermeasures in reducing vehicle speed and test whether these are more effective in combination for reducing vehicle speed; identify the most effective countermeasures on encountering moose; and determine whether the driving responses to AVC countermeasures are affected by the presence of wildlife fences and landscape characteristics. The AVC countermeasures that proved most effective in reducing vehicle speed were a wildlife warning sign and radio message, while automatic speed cameras had a speed-increasing effect. There were no statistically significant interactions between different countermeasures and moose encounters. However, there was a tendency for a stronger speed-reducing effect from the radio message warning and from a combination of a radio message and wildlife warning sign in velocity profiles covering longer driving distances than the statistical tests. Encountering a moose during the drive had the overall strongest speed-reducing effect and gave the strongest deceleration, indicating that moose decoys or moose artwork might be useful as speed-reducing countermeasures. Furthermore, drivers reduced speed earlier on encountering a moose in open landscape and had lower velocity when driving past it. The presence of a wildlife fence on encountering the moose resulted in smaller deceleration. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Invited Article: Quantitative imaging of explosions with high-speed cameras

    DOE PAGES

    McNesby, Kevin L.; Homan, Barrie E.; Benjamin, Richard A.; ...

    2016-05-31

    The techniques presented in this paper allow for mapping of temperature, pressure, chemical species, and energy deposition during and following detonations of explosives, using high-speed cameras as the main diagnostic tool. Additionally, this work provides measurements, in the explosive near- to far-field (0-500 charge diameters), of surface temperatures, peak air-shock pressures, some chemical-species signatures, shock energy deposition, and air-shock formation.

  9. Release and velocity of micronized dexamethasone implants with an intravitreal drug delivery system: kinematic analysis with a high-speed camera.

    PubMed

    Meyer, Carsten H; Klein, Adrian; Alten, Florian; Liu, Zengping; Stanzel, Boris V; Helb, Hans M; Brinkmann, Christian K

    2012-01-01

    Ozurdex, a novel dexamethasone (DEX) implant, is released by a drug delivery system into the vitreous cavity. We analyzed the mechanical release aperture of the novel applicator, obtained real-time recordings using a high-speed camera system and performed kinematic analysis of the DEX application. Experimental study. The application of intravitreal DEX implants (6 mm length, 0.46 mm diameter; 700 μg DEX mass, 0.0012 g total implant mass) was recorded by a high-speed camera (500 frames per second) in water-filled (Group A: n = 7) or vitreous-filled (Group B: n = 7) tanks. Kinematic analysis calculated the initial muzzle velocity of the injected drug delivery system implant as well as its impact on the retinal surface at approximately 15 mm in both groups. A series of drug delivery system implant positions was obtained and graphically plotted over time. High-speed real-time recordings revealed that the entire movement of the DEX implant lasted between 28 milliseconds and 55 milliseconds in Group A and between 1 millisecond and 7 milliseconds in Group B. The implants moved with a mean muzzle velocity of 820 ± 350 mm/s (±SD; range, 326-1,349 mm/s) in Group A and 817 ± 307 mm/s (±SD; range, 373-1,185 mm/s) in Group B. In both groups, the implant gradually decelerated because of drag force. With greater distances, the velocity of the DEX implant decreased exponentially to a complete stop at 13.9 mm to 24.7 mm in Group A and at 6.4 mm to 8.0 mm in Group B. Five DEX implants in Group A reached a total distance of more than 15 mm; their calculated mean velocity at a retinal impact of 15 mm was 408 ± 145 mm/s (±SD; range, 322-667 mm/s), and the corresponding normalized energy was 0.55 ± 0.44 J/m (±SD). In Group B, none of the DEX implants reached a total distance of 6 mm or more. An accidental application at an angle of 30 degrees, with a consequently reduced distance of approximately 6 mm, may result in a mean velocity of 844 mm/s and a mean normalized energy of 0.15 J/m (SD ± 0.47) in a water-filled eye. The muzzle velocity of DEX implants is approximately 0.8 m/s and decreases exponentially over distance. Deceleration over time is faster in vitreous than in water. The calculated retinal impact energy does not reach reported damage levels for direct foreign bodies or other projectiles.
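    The exponential decrease of velocity with distance reported in this record can be recovered from tracked implant positions by a simple fit. This sketch assumes clean synthetic data and illustrative parameter values; it is not the authors' kinematic analysis.

```python
import numpy as np

def fit_decay(dist, vel):
    """Fit v(d) = v0 * exp(-d / L) to tracked (distance, velocity) data
    by linearizing: ln v = ln v0 - d / L. Returns (v0, L)."""
    slope, intercept = np.polyfit(dist, np.log(vel), 1)
    return np.exp(intercept), -1.0 / slope
```

    The decay length L quantifies the drag: a smaller L (as in vitreous compared with water) means the implant stops over a shorter distance.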

  10. Real-Time View Correction for Mobile Devices.

    PubMed

    Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc

    2017-11-01

    We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.

  11. Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus

    NASA Astrophysics Data System (ADS)

    Baylou, P.; Amor, B. El Hadj; Bousseau, G.

    1983-10-01

    After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the images of mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon was performed in order to determine the modifications concerning object shapes, thresholding levels and decision parameters as a function of the robot speed.

  12. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by distances of, on average, 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5 - 5.6 G in the stationary sensors, and a lens model AF-S ED 24 mm - 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected during four days of manual RAMMER operation in the campaigns of 2012 and 2013. On Feb. 18th the data set comprises 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes were registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can introduce imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB channels (red, green, blue), compared them with the data provided by the black-and-white cameras for the same event, and examined the influence of these parameters on the luminous intensity of the flashes. In two peculiar cases, the data obtained at one site showed a stroke, some continuing current during the interval between the strokes and then a subsequent stroke; the other site, however, showed that the subsequent stroke was in fact an M-component, since the continuing current had not vanished after its parent stroke. These events would receive a dubious classification if based only on visual analysis with high-speed cameras, and they are analyzed in this work.

  13. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high-speed cameras having resolution times of less than one-tenth microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal-plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having so short a resolution time is possible.

  14. Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the measurement precision were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
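    Row-wise digital binning and the standard-deviation precision metric described in this record are straightforward to reproduce in post-processing. A minimal sketch follows; the bin factor, pixel pitch and inter-frame delay values are only examples, not the paper's exact processing chain.

```python
import numpy as np

def bin_rows(img, k=8):
    """Sum k adjacent rows: the post-processing analogue of on-sensor
    binning, trading spatial resolution for signal-to-noise ratio."""
    h = (img.shape[0] // k) * k          # drop any incomplete final block
    return img[:h].reshape(h // k, k, img.shape[1]).sum(axis=1)

def velocity(x1, x2, dt_s, px_m):
    """Single-shot velocity from the tagged-line displacement (in pixels)
    between two exposures separated by dt_s seconds; px_m is meters/pixel."""
    return (x2 - x1) * px_m / dt_s

def precision(velocities):
    """Precision defined as the sample standard deviation of a set of
    single-shot velocity measurements."""
    return float(np.std(velocities, ddof=1))
```

    Binning before locating the tagged line raises the effective signal level per sample, which is why it helped most on the un-intensified, low-SNR systems.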

  15. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capturing of an object is necessary for a wide variety of applications including industry and science, virtual reality and movie, medicine and sports. For the most part of applications a reliability and an accuracy of the data obtained as well as convenience for a user are the main characteristics defining the quality of the motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high speed of acquisition, potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture accurate and robust object features detecting and tracking through the video sequence are the key elements along with a level of automation of capturing process. So for providing high accuracy of obtained spatial data the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurements and supports high speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms as for detecting, identifying and tracking of similar targets, so for marker-less object motion capture is developed and tested. The results of algorithms' evaluation show high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.

  16. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
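
    The exact MPCM bitstream is not specified in this record, but the core modulo-PCM idea can be sketched: transmit only the low-order bits of each sample and let the decoder unwrap them against a prediction from the previous reconstructed sample. The 12-bit samples, the 5-bit modulus, and the smooth test scanline below are illustrative assumptions, not the paper's codec.

```python
import numpy as np

def mpcm_encode(samples, k):
    """Keep only the k least-significant bits of each PCM sample."""
    return samples & ((1 << k) - 1)

def mpcm_decode(coded, k, first):
    """Unwrap: pick the value congruent to the code mod 2**k that lies
    closest to the previous reconstructed sample (the predictor)."""
    m = 1 << k
    out = np.empty_like(coded)
    pred = first
    for i, c in enumerate(coded):
        base = pred - (pred % m) + c        # candidate in the predictor's bin
        cand = np.array([base - m, base, base + m])
        pred = cand[np.abs(cand - pred).argmin()]
        out[i] = pred
    return out

# A smooth 12-bit scanline: neighbouring samples differ by far less than
# 2**(k-1), so sending 5 bits per sample instead of 12 is still lossless.
row = (2048 + 800 * np.sin(np.linspace(0.0, 3.0, 256))).astype(np.int64)
k = 5
rec = mpcm_decode(mpcm_encode(row, k), k, first=int(row[0]))
print("lossless:", np.array_equal(rec, row), f"- {12 / k:.1f}x fewer bits")
```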

  17. Fast PSP measurements of wall-pressure fluctuation in low-speed flows: improvements using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Peng, Di; Wang, Shaofei; Liu, Yingzheng

    2016-04-01

    Fast pressure-sensitive paint (PSP) is very useful in flow diagnostics due to its fast response and high spatial resolution, but its application to low-speed flows is usually challenging due to limitations in the paint's pressure sensitivity and the capability of high-speed imagers. The poor signal-to-noise ratio in low-speed cases makes it very difficult to extract useful information from the PSP data. In this study, unsteady PSP measurements were made on a flat plate behind a cylinder in a low-speed wind tunnel (flow speed from 10 to 17 m/s). Pressure fluctuations (ΔP) on the plate caused by vortex-plate interaction were recorded continuously by fast PSP (using a high-speed camera) and by a microphone array. The power spectra of the pressure fluctuations and the phase-averaged ΔP obtained from PSP and the microphones were compared, showing good agreement in general. Proper orthogonal decomposition (POD) was used to reduce noise in the PSP data and extract the dominant pressure features. The PSP results reconstructed from selected POD modes were then compared to the pressure data obtained simultaneously with the microphone sensors. Based on the comparison of both the instantaneous ΔP and the root-mean-square of ΔP, it was confirmed that POD analysis could effectively remove noise while preserving the instantaneous pressure information with good fidelity, especially for flows with strong periodicity. This technique extends the application range of fast PSP and can be a powerful tool for fundamental fluid mechanics research at low speeds.
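
    The POD denoising step in this record can be sketched with the snapshot-SVD formulation. The flow field, noise level, and mode count below are synthetic stand-ins; only the idea of reconstructing from the dominant modes and discarding the noise-dominated remainder follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a fast-PSP image sequence: one periodic
# (vortex-shedding-like) pressure pattern on a 32x32 patch, buried in
# camera shot noise.  Field, noise level and mode count are invented.
nx, nt = 32, 400
x = np.linspace(0.0, 2.0 * np.pi, nx)
pattern = np.outer(np.sin(x), np.cos(x)).ravel()            # spatial mode
t = np.arange(nt)
clean = np.outer(np.sin(2.0 * np.pi * t / 25.0), pattern)   # nt x (nx*nx)
noisy = clean + rng.normal(0.0, 0.5, clean.shape)

# POD = SVD of the mean-subtracted snapshot matrix; keep the dominant
# mode(s) and discard the noise-dominated remainder.
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
r = 1                                                        # modes kept
denoised = mean + (U[:, :r] * s[:r]) @ Vt[:r]

err_raw = np.sqrt(np.mean((noisy - clean) ** 2))
err_pod = np.sqrt(np.mean((denoised - clean) ** 2))
print(f"rms error: raw {err_raw:.3f} -> POD ({r} mode) {err_pod:.3f}")
```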

  18. High frequency modal identification on noisy high-speed camera data

    NASA Astrophysics Data System (ADS)

    Javh, Jaka; Slavič, Janko; Boltežar, Miha

    2018-01-01

    Vibration measurements using optical full-field systems based on high-speed footage are typically heavily burdened by noise, as the displacement amplitudes of the vibrating structures are often very small (in the range of micrometers, depending on the structure). The modal information is troublesome to measure, as the structure's response is close to, or below, the noise level of the camera-based measurement system. This paper demonstrates modal parameter identification for such noisy measurements. It is shown that by using the Least-Squares Complex-Frequency method combined with the Least-Squares Frequency-Domain method, identification at high frequencies is still possible. By additionally incorporating a more precise sensor to identify the eigenvalues, hybrid accelerometer/high-speed-camera mode-shape identification is possible even below the noise floor. An accelerometer measurement is used to identify the eigenvalues, while the camera measurement is used to produce the full-field mode shapes close to 10 kHz. The identified modal parameters improve the quality of the measured modal data and serve as a reduced model of the structure's dynamics.
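
    The hybrid idea can be sketched in miniature, under assumptions not in the paper (a single real mode, an invented pixel noise level): the low-noise accelerometer fixes the pole, after which each noisy camera pixel needs only a linear least-squares fit of its residue, which averages the camera noise down across frequency lines.

```python
import numpy as np

rng = np.random.default_rng(2)

fn, zeta = 8000.0, 0.01                        # 8 kHz mode, 1% damping
wn = 2.0 * np.pi * fn
pole = -zeta * wn + 1j * wn * np.sqrt(1.0 - zeta**2)    # from accelerometer

w = 2.0 * np.pi * np.linspace(7000.0, 9000.0, 400)      # frequency lines
basis = 1.0 / (1j * w - pole) + 1.0 / (1j * w - np.conj(pole))

true_shape = np.sin(np.linspace(0.0, np.pi, 50))        # shape over 50 px
H = np.outer(true_shape, basis)                         # per-pixel FRFs
H_noisy = H + rng.normal(0.0, 5e-4, H.shape)            # camera noise floor

# With the pole fixed, H_pixel(w) = phi_pixel * basis(w): a one-parameter
# linear least-squares problem, solved here for all pixels at once.
phi = ((H_noisy @ np.conj(basis)) / (basis @ np.conj(basis))).real

cos = abs(phi @ true_shape) / (np.linalg.norm(phi) * np.linalg.norm(true_shape))
print(f"mode-shape correlation with ground truth: {cos:.4f}")
```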

  19. Single-camera displacement field correlation method for centrosymmetric 3D dynamic deformation measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin

    2018-05-01

    Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.

  20. Design of noise barrier inspection system for high-speed railway

    NASA Astrophysics Data System (ADS)

    Liu, Bingqian; Shao, Shuangyun; Feng, Qibo; Ma, Le; Cholryong, Kim

    2016-10-01

    Damage to noise barriers greatly reduces the transportation safety of a high-speed railway. In this paper, an online noise-barrier inspection system based on laser vision is proposed for high-speed railway safety. The inspection system, mainly consisting of a fast camera and a line laser, is installed in the first carriage of the high-speed CIT (Composited Inspection Train). A laser line is projected onto the surface of the noise barriers, and images of the light line are captured by the camera while the train runs at high speed. The distance between the inspection system and the noise barrier can then be obtained from the laser triangulation principle. The results of field tests show that the proposed system meets the requirements of high speed and high accuracy for measuring the contour distortion of the noise barriers.
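
    The triangulation step can be illustrated with the simplest parallel-axis textbook model (the paper's actual calibration model is not given, and the focal length, baseline, and pixel pitch below are invented): with the camera offset from the line laser by a baseline b and both optical axes parallel, a surface point at range z images the stripe at offset u = f*b/z from the principal point.

```python
import numpy as np

f_mm, b_mm, pitch_mm = 16.0, 200.0, 0.01   # assumed lens, baseline, pitch

def stripe_offset_to_range(u_px):
    """Invert u = f*b/z; u is in pixels from the principal point."""
    return f_mm * b_mm / (u_px * pitch_mm)

# Synthetic noise-barrier profile: a flat panel at 3 m with a 50 mm dent.
true_z = np.full(640, 3000.0)               # range per image column (mm)
true_z[300:340] += 50.0                     # the damage to be detected
u_px = f_mm * b_mm / true_z / pitch_mm      # forward model: stripe position
z_est = stripe_offset_to_range(u_px)
dent = z_est.max() - np.median(z_est)
print(f"recovered contour deviation: {dent:.1f} mm")
```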

  1. Imaging photomultiplier array with integrated amplifiers and high-speed USB interface

    NASA Astrophysics Data System (ADS)

    Blacksell, M.; Wach, J.; Anderson, D.; Howard, J.; Collis, S. M.; Blackwell, B. D.; Andruczyk, D.; James, B. W.

    2008-10-01

    Multianode photomultiplier tube (PMT) arrays are finding application as convenient high-speed light-sensitive devices for plasma imaging. This paper describes the development of a USB-based "plug-n-play" 16-channel PMT camera with 16-bit simultaneous acquisition of 16 signal channels at rates up to 2 MS/s per channel. The preamplifiers and digital hardware are packaged in a compact housing which incorporates magnetic shielding, on-board generation of the high-voltage PMT bias, an optical filter mount and slits, and an F-mount lens adaptor. Triggering, timing, and acquisition are handled by four field-programmable gate arrays (FPGAs) under instruction from a master FPGA controlled by a computer with a LABVIEW interface. We present technical design details and specifications and illustrate performance with high-speed images obtained on the H-1 heliac at the ANU.

  2. Imaging photomultiplier array with integrated amplifiers and high-speed USB interface.

    PubMed

    Blacksell, M; Wach, J; Anderson, D; Howard, J; Collis, S M; Blackwell, B D; Andruczyk, D; James, B W

    2008-10-01

    Multianode photomultiplier tube (PMT) arrays are finding application as convenient high-speed light-sensitive devices for plasma imaging. This paper describes the development of a USB-based "plug-n-play" 16-channel PMT camera with 16-bit simultaneous acquisition of 16 signal channels at rates up to 2 MS/s per channel. The preamplifiers and digital hardware are packaged in a compact housing which incorporates magnetic shielding, on-board generation of the high-voltage PMT bias, an optical filter mount and slits, and an F-mount lens adaptor. Triggering, timing, and acquisition are handled by four field-programmable gate arrays (FPGAs) under instruction from a master FPGA controlled by a computer with a LABVIEW interface. We present technical design details and specifications and illustrate performance with high-speed images obtained on the H-1 heliac at the ANU.

  3. High-speed real-time image compression based on all-optical discrete cosine transformation

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2017-02-01

    In this paper, we present a high-speed single-pixel imaging (SPI) system based on all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) which is three orders of magnitude higher than the switching rate of a digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which implicates broad applications in industrial quality control and biomedical imaging.
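
    The transform-and-truncate arithmetic behind the DCT compression can be sketched in software (the paper performs it optically via photonic time stretch; only the DCT and the roughly 10:1 coefficient-truncation ratio follow the abstract, and the test image is invented).

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the cosine basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def compress_dct(img, ratio=10):
    """Keep only the largest-magnitude 1/ratio of the 2-D DCT coefficients,
    mimicking the ~10:1 compression quoted for the SPI system."""
    C1, C2 = dct_matrix(img.shape[0]), dct_matrix(img.shape[1])
    coef = C1 @ img @ C2.T                   # forward 2-D DCT
    keep = coef.size // ratio
    thresh = np.sort(np.abs(coef).ravel())[-keep]
    coef[np.abs(coef) < thresh] = 0.0        # discard small coefficients
    return C1.T @ coef @ C2                  # inverse 2-D DCT

# Smooth cell-like blob: most of its energy sits in a few DCT terms.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 30) ** 2 + (y - 34) ** 2) / 200.0)
rec = compress_dct(img, ratio=10)
rel_err = np.linalg.norm(rec - img) / np.linalg.norm(img)
print(f"10:1 DCT compression, relative error {rel_err:.4f}")
```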

  4. Studies on dynamic behavior of rotating mirrors

    NASA Astrophysics Data System (ADS)

    Li, Jingzhen; Sun, Fengshan; Gong, Xiangdong; Huang, Hongbin; Tian, Jie

    2005-02-01

    A rotating mirror is the kernel unit of a Miller-type high-speed camera, serving both as an imaging element in the optical path and as the element that implements ultrahigh-speed photography. According to Schardin's principle, the information capacity of an ultrahigh-speed rotating-mirror camera depends on the primary wavelength of the lighting used by the camera and on the limiting linear velocity at the edge of the rotating mirror; the latter is related to the mirror's material (including its technological specifications), cross-section shape and lateral structure. In this paper the dynamic behavior of high-strength aluminium alloy rotating mirrors is studied, which preliminarily shows that an aluminium alloy rotating mirror can be used as a replacement for a steel or titanium alloy rotating mirror in framing photographic systems, and could also substitute for a beryllium rotating mirror in streak photographic systems.

  5. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they act as background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
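
    Once the stereo system has produced a 3-D position and velocity, the fall-point prediction reduces to elementary ballistics. The numbers below are invented, and a real shuttlecock is strongly drag-dominated, so this is only the simplest usable model, not the robot's actual predictor.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def landing_point(p, v):
    """Solve z(t) = z0 + vz*t - g*t^2/2 = 0 for the positive root, then
    advance x and y linearly over that flight time (drag neglected)."""
    z0, vz = p[2], v[2]
    t = (vz + np.sqrt(vz**2 + 2.0 * g * z0)) / g
    return np.array([p[0] + v[0] * t, p[1] + v[1] * t]), t

p = np.array([0.0, 0.0, 3.0])    # m, as measured by stereo triangulation
v = np.array([4.0, 1.0, 2.0])    # m/s
xy, t = landing_point(p, v)
print(f"lands at ({xy[0]:.2f}, {xy[1]:.2f}) m after {t:.2f} s")
```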

  6. Overall impact of speed-related initiatives and factors on crash outcomes.

    PubMed

    D'Elia, A; Newstead, S; Cameron, M

    2007-01-01

    From December 2000 until July 2002 a package of speed-related initiatives and factors took place in Victoria, Australia. The broad aim of this study was to evaluate the overall impact of the package on crash outcomes. Monthly crash counts and injury severity proportions were assessed using Poisson and logistic regression models respectively. The model measured the overall effect of the package after adjusting as far as possible for non-speed road safety initiatives and socio-economic factors. The speed-related package was associated with statistically significant estimated reductions in casualty crashes and suggested reductions in injury severity with trends towards increased reductions over time. From December 2000 until July 2002, three new speed enforcement initiatives were implemented in Victoria, Australia. These initiatives were introduced in stages and involved the following key components: More covert operations of mobile speed cameras, including flash-less operations; 50% increase in speed camera operating hours; and lowering of cameras' speed detection threshold. In addition, during the period 2001 to 2002, the 50 km/h General Urban Speed Limit (GUSL) was introduced (January 2001), there was an increase in speed-related advertising including the "Wipe Off 5" campaign, media announcements were made related to the above enforcement initiatives and there was a speeding penalty restructure. The above elements combine to make up a package of speed-related initiatives and factors. The package represents a broad, long term program by Victorian government agencies to reduce speed based on three linked strategies: more intensive Police enforcement of speed limits to deter potential offenders, i.e. 
the three new speed enforcement initiatives just described - supported by higher penalties; a reduction in the speed limit on local streets throughout Victoria from 60 km/h to 50 km/h; and provision of information using the mass media (television, radio and billboard) to reinforce the benefits of reducing low level speeding - the central message of "Wipe Off 5". These strategies were implemented across the entire state of Victoria with the intention of covering as many road users as possible. This study aimed to evaluate the overall effectiveness of the speed-related package. The study objectives were: to document the increased speed camera activity in each speed limit zone and in Melbourne compared with the rest of Victoria; to evaluate the overall effect on crash outcomes of the package; to account as far as possible for the effect on crash outcomes of non-speed road safety initiatives and socio-economic factors, which would otherwise influence the speed-related package evaluation; and to examine speed trends in Melbourne and on Victorian rural highways, especially the proportions of vehicles travelling at excessive speeds. This paper presents the results of the evaluation of the overall impact on crash outcomes associated with the speed-related package, after adjusting as far as possible for the effect of non-speed road safety initiatives and socio-economic factors. D'Elia, Newstead and Cameron (2007) document the study results in full.
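
    The crash-count modelling described above can be sketched as a Poisson regression with an intervention indicator, fitted here by Fisher scoring in plain NumPy. The counts, the 60-month window, and the built-in 15% effect are simulated for illustration; the actual study adjusted for many more covariates and initiatives.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated monthly casualty-crash counts: 36 months before the package,
# 24 months after, with a true 15% reduction built in.
months = 60
package = (np.arange(months) >= 36).astype(float)
y = rng.poisson(400.0 * np.where(package == 1.0, 0.85, 1.0))
X = np.column_stack([np.ones(months), package])

# Poisson regression, log E[y] = b0 + b1*package, via Fisher scoring
# (iteratively reweighted least squares with weights mu).
beta = np.array([np.log(y.mean()), 0.0])
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu           # working response
    XtW = X.T * mu                         # X' diag(mu)
    beta = np.linalg.solve(XtW @ X, XtW @ z)

effect = np.exp(beta[1]) - 1.0
print(f"estimated change in casualty-crash frequency: {effect:+.1%}")
```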

  7. Super-Resolution in Plenoptic Cameras Using FPGAs

    PubMed Central

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-01-01

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude by exploiting the FPGA's extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language, which makes it very versatile and parameterizable. The system user can easily modify parameters such as the data width, the number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in the FPGA has been successfully compared with execution on a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246

  8. Super-resolution in plenoptic cameras using FPGAs.

    PubMed

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude by exploiting the FPGA's extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language, which makes it very versatile and parameterizable. The system user can easily modify parameters such as the data width, the number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in the FPGA has been successfully compared with execution on a conventional computer for several image sizes and different 3D refocusing planes.

  9. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
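
    The real-time centroiding step can be sketched as threshold-and-label centroiding. The connectivity rule, threshold, and synthetic spots below are illustrative assumptions; the per-spot total intensity is the quantity that would be correlated against the PMT time-of-flight peak heights for multi-hit matching.

```python
import numpy as np
from collections import deque

def centroid_spots(frame, thresh):
    """Threshold the frame, group bright pixels into connected spots
    (4-connectivity, BFS), and return each spot's intensity-weighted
    centroid together with its total intensity."""
    mask = frame > thresh
    seen = np.zeros_like(mask)
    spots = []
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        q, pix = deque([(i, j)]), []
        seen[i, j] = True
        while q:
            a, b = q.popleft()
            pix.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = a + da, b + db
                if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                        and mask[r, c] and not seen[r, c]):
                    seen[r, c] = True
                    q.append((r, c))
        w = np.array([frame[p] for p in pix])
        rc = np.array(pix, dtype=float)
        spots.append(((w[:, None] * rc).sum(0) / w.sum(), w.sum()))
    return spots

# Two synthetic ion spots of different brightness on a dark frame.
frame = np.zeros((40, 40))
frame[10:13, 10:13] = 50.0      # brighter hit
frame[30:32, 25:27] = 20.0      # dimmer hit
spots = centroid_spots(frame, thresh=5.0)
for (r, c), inten in sorted(spots, key=lambda s: -s[1]):
    print(f"spot at ({r:.1f}, {c:.1f}), intensity {inten:.0f}")
```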

  10. High-speed imaging on static tensile test for unidirectional CFRP

    NASA Astrophysics Data System (ADS)

    Kusano, Hideaki; Aoki, Yuichiro; Hirano, Yoshiyasu; Kondo, Yasushi; Nagao, Yosuke

    2008-11-01

    The objective of this study is to clarify the fracture mechanism of unidirectional CFRP (Carbon Fiber Reinforced Plastics) under static tensile loading. The advantages of CFRP are higher specific stiffness and strength than metallic materials, and its use is increasing not only in the aerospace and rapid-transit railway industries but also in the sports, leisure and automotive industries. The tensile fracture mechanism of unidirectional CFRP has not been clarified experimentally because its fracture speed is very high. We selected an intermediate-modulus, high-strength unidirectional CFRP laminate, a typical material used in the aerospace field. The fracture process under static tensile loading was captured by a conventional high-speed camera and by the new-type High-Speed Video Camera HPV-1. It was found that the duration of fracture is 200 microseconds or less, so images taken by the conventional camera do not have sufficient temporal resolution. In contrast, the images obtained with the HPV-1 have higher quality, and the fracture process can be clearly observed in them.

  11. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.

  12. The application of high-speed TV-holography to time-resolved vibration measurements

    NASA Astrophysics Data System (ADS)

    Buckberry, C.; Reeves, M.; Moore, A. J.; Hand, D. P.; Barton, J. S.; Jones, J. D. C.

    1999-10-01

    We describe an electronic speckle pattern interferometer (ESPI) system that has enabled non-harmonic vibrations to be measured with μs temporal resolution. The short exposure period and high framing rate of a high-speed camera at up to 40,500 frames per second allow low-power CW laser illumination and fibre-optic beam delivery to be used, rather than the high peak power pulsed lasers normally used in ESPI for transient measurement. The technique has been demonstrated in the laboratory and tested in preliminary industrial trials. The ability to measure vibration with high spatial and temporal resolution, which is not provided by techniques such as scanning laser vibrometry, has many applications in manufacturing design, and in an illustrative application described here revealed previously unmeasured “rocking” vibrations of a car door. It has been possible to make the measurement on the door as part of a complete vehicle standing on its own tyres, wheels and suspension, and where the excitation was generated by the running of the vehicle's own engine.

  13. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  14. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  15. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages.

    PubMed

    Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan

    2015-04-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.

  16. Viscoelastic material properties' identification using high speed full field measurements on vibrating plates

    NASA Astrophysics Data System (ADS)

    Giraudeau, A.; Pierron, F.

    2010-06-01

    The paper presents an experimental application of a method for identifying the elastic and damping material properties of isotropic vibrating plates. The theory assumes that the sought parameters can be extracted from curvature and deflection fields measured over the whole surface of the plate at two particular instants of the vibrating motion. The experimental application consists of an original excitation fixture, a particular adaptation of an optical full-field measurement technique, data preprocessing that yields the curvature and deflection fields, and finally the identification process using the Virtual Fields Method (VFM). The principle of the deflectometry technique used for the measurements is presented. First identification results on an acrylic plate are presented and compared with reference values. Details of a new experimental arrangement, currently in progress, are presented; it uses a high-speed digital camera to oversample the full-field measurements.
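
    The preprocessing step that turns a measured deflection field into curvature fields can be sketched with central differences. The analytic test surface and grid are invented, and real deflectometry data would need smoothing first, since double differentiation amplifies measurement noise.

```python
import numpy as np

def curvatures(w, dx, dy):
    """Bending and twist curvature fields from a deflection field w(x, y),
    using repeated central differences (sign convention: kappa = -w'')."""
    kxx = -np.gradient(np.gradient(w, dx, axis=0), dx, axis=0)
    kyy = -np.gradient(np.gradient(w, dy, axis=1), dy, axis=1)
    kxy = -np.gradient(np.gradient(w, dx, axis=0), dy, axis=1)
    return kxx, kyy, kxy

# Check against an analytic deflection w = sin(pi x) sin(pi y), for which
# the exact bending curvature is kappa_x = pi^2 * w in the interior.
n = 101
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.sin(np.pi * X) * np.sin(np.pi * Y)
kxx, kyy, kxy = curvatures(w, x[1] - x[0], x[1] - x[0])
err = np.abs(kxx[2:-2, 2:-2] - np.pi**2 * w[2:-2, 2:-2]).max()
print(f"max interior error vs analytic curvature: {err:.2e}")
```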

  17. Development Of A Dynamic Radiographic Capability Using High-Speed Video

    NASA Astrophysics Data System (ADS)

    Bryant, Lawrence E.

    1985-02-01

    High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging at up to 2,000 full frames per second. The technique has been demonstrated using conventional industrial x-ray sources such as 150 kV and 300 kV constant-potential x-ray generators, 2.5 MeV Van de Graaff generators, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt-60 source. Use of a maximum-aperture lens makes the best use of the available light output from the image intensifier. The x-ray image intensifier's input and output fluors decay rapidly enough to allow high-frame-rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high-speed imaging method. Video recordings show several demonstrations of this technique, with the played-back x-ray images slowed down up to 100 times relative to the actual event speed. Typical applications include boiling-type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod, and piston movement, and thermal battery operation. An interesting aspect of this technique combines the optical and x-ray capabilities to observe an object or event in both external and internal detail, with one camera in a visual mode and the other in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  18. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space, and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for line following. The line-following algorithm images two windows, locates the centroid of the line within each, and, using the knowledge that these points lie on the ground plane, establishes a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
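    The steering geometry described above can be sketched in a few lines: given the ground-plane coordinates of the two window centroids, compute the line's heading angle and its perpendicular distance from the robot. This is an illustrative reconstruction, not the authors' code; the function name and coordinate conventions are assumptions.

    ```python
    import math

    def line_pose(p1, p2, robot=(0.0, 0.0)):
        """Given ground-plane coordinates (meters) of two line centroids,
        return the line's heading angle (radians) and the minimum
        (perpendicular) distance from the robot's reference point."""
        (x1, y1), (x2, y2) = p1, p2
        dx, dy = x2 - x1, y2 - y1
        angle = math.atan2(dy, dx)  # heading of the line
        # Perpendicular distance from robot to the infinite line through p1, p2
        num = abs(dy * (robot[0] - x1) - dx * (robot[1] - y1))
        dist = num / math.hypot(dx, dy)
        return angle, dist

    # Example: line running straight ahead, offset 0.5 m to the robot's side
    angle, dist = line_pose((0.5, 1.0), (0.5, 2.0))
    ```

    Both outputs feed directly into a steering controller: the angle corrects heading, the distance corrects lateral offset.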

  19. Interferometric imaging of acoustical phenomena using high-speed polarization camera and 4-step parallel phase-shifting technique

    NASA Astrophysics Data System (ADS)

    Ishikawa, K.; Yatabe, K.; Ikeda, Y.; Oikawa, Y.; Onuma, T.; Niwa, H.; Yoshii, M.

    2017-02-01

    Imaging of sound aids the understanding of acoustical phenomena such as propagation, reflection, and diffraction, which is strongly required for various acoustical applications. Sound imaging is commonly done with a microphone array, whereas optical methods have recently attracted interest due to their contactless nature. The optical measurement of sound utilizes the phase modulation of light caused by sound: since light propagating through a sound field changes its phase in proportion to the sound pressure, optical phase-measurement techniques can be used for sound measurement. Several methods, including laser Doppler vibrometry and the Schlieren method, have been proposed for that purpose. However, their sensitivities decrease as the frequency of sound decreases. In contrast, since the sensitivity of the phase-shifting technique does not depend on the frequency of sound, that technique is suitable for imaging sounds in the low-frequency range. The principle of imaging sound using parallel phase-shifting interferometry was reported by the authors (K. Ishikawa et al., Optics Express, 2016). The measurement system consists of a high-speed polarization camera made by Photron Ltd. and a polarization interferometer. This paper reviews the principle briefly and demonstrates high-speed imaging of acoustical phenomena. The results suggest that the proposed system can be applied to various industrial problems in acoustical engineering.
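    The 4-step technique named in the title reduces to a standard arctangent formula over four intensity frames captured at phase shifts of 0, π/2, π, and 3π/2. A minimal sketch under one common sign convention (in a parallel phase-shifting camera, all four frames come from a single polarization-camera exposure):

    ```python
    import numpy as np

    def four_step_phase(I0, I1, I2, I3):
        """Recover optical phase from four frames at phase shifts of
        0, pi/2, pi, and 3pi/2 (one common sign convention)."""
        return np.arctan2(I3 - I1, I0 - I2)

    # Synthetic check: a known phase ramp is recovered exactly,
    # since I_k = cos(phi + k*pi/2) gives I3-I1 = 2 sin(phi)
    # and I0-I2 = 2 cos(phi).
    phi = np.linspace(-1.0, 1.0, 5)
    I0, I1, I2, I3 = (np.cos(phi + k * np.pi / 2) for k in range(4))
    recovered = four_step_phase(I0, I1, I2, I3)
    ```

    Because the formula is a ratio of intensity differences, constant background light and overall gain cancel, which is what makes the technique robust at low sound frequencies.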

  20. High Resolution, High-Speed Photography, an Increasingly Prominent Diagnostic in Ballistic Research Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, L.; Muelder, S.

    1999-10-22

    High resolution, high-speed photography is becoming a prominent diagnostic in ballistic experimentation. The development of high speed cameras utilizing electro-optics and the use of lasers for illumination now provide the capability to routinely obtain high quality photographic records of ballistic style experiments. The purpose of this presentation is to review in a visual manner the progress of this technology and how it has impacted ballistic experimentation. Within the framework of development at LLNL, we look at the recent history of large format high-speed photography, and present a number of photographic records that represent the state of the art at the time they were made. These records are primarily from experiments involving shaped charges. We also present some examples of current photographic technology, developed within the ballistic community, that has application to hydro diagnostic experimentation at large. This paper is designed primarily as an oral-visual presentation. This written portion is to provide general background, a few examples, and a bibliography.

  1. Ultrahigh-speed X-ray imaging of hypervelocity projectiles

    NASA Astrophysics Data System (ADS)

    Miller, Stuart; Singh, Bipin; Cool, Steven; Entine, Gerald; Campbell, Larry; Bishel, Ron; Rushing, Rick; Nagarkar, Vivek V.

    2011-08-01

    High-speed X-ray imaging is an extremely important modality for healthcare, industrial, military and research applications such as medical computed tomography, non-destructive testing, imaging in-flight projectiles, characterizing exploding ordnance, and analyzing ballistic impacts. We report on the development of a modular, ultrahigh-speed, high-resolution digital X-ray imaging system with large active imaging area and microsecond time resolution, capable of acquiring at a rate of up to 150,000 frames per second. The system is based on a high-resolution, high-efficiency, and fast-decay scintillator screen optically coupled to an ultra-fast image-intensified CCD camera designed for ballistic impact studies and hypervelocity projectile imaging. A specially designed multi-anode, high-fluence X-ray source with 50 ns pulse duration provides a sequence of blur-free images of hypervelocity projectiles traveling at speeds exceeding 8 km/s (18,000 miles/h). This paper will discuss the design, performance, and high frame rate imaging capability of the system.
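    The claim of blur-free imaging with a 50 ns pulse can be checked with one line of arithmetic: the projectile moves only v × Δt during the exposure. A quick illustrative calculation:

    ```python
    def motion_during_pulse_mm(speed_m_s, pulse_s):
        """Distance an object travels during one X-ray pulse, in mm."""
        return speed_m_s * pulse_s * 1000.0

    # An 8 km/s projectile moves just 0.4 mm during a 50 ns pulse,
    # which is why the frames appear effectively blur-free.
    blur_mm = motion_during_pulse_mm(8000.0, 50e-9)
    ```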

  2. High speed digital holographic interferometry for hypersonic flow visualization

    NASA Astrophysics Data System (ADS)

    Hegde, G. M.; Jagdeesh, G.; Reddy, K. P. J.

    2013-06-01

    Optical imaging techniques have played a major role in understanding the dynamics of a variety of fluid flows, particularly in the study of hypersonic flows. Schlieren and shadowgraph techniques have been the flow diagnostic tools for the investigation of compressible flows for more than a century. However, these techniques provide only qualitative information about the flow field. Other optical techniques, such as holographic interferometry and laser-induced fluorescence (LIF), have been used extensively for extracting quantitative information about high-speed flows. In this paper we present the application of the digital holographic interferometry (DHI) technique, integrated with a short-duration hypersonic shock tunnel facility having a 1 ms test time, for quantitative flow visualization. The dynamics of the flow fields at hypersonic/supersonic speeds around different test models are visualized with DHI using a high-speed digital camera (0.2 million fps). These visualization results are compared with schlieren visualization and CFD simulation results. Fringe analysis is carried out to estimate the density of the flow field.

  3. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet most strongly in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal-processing circuit, and a digital circuit that provides a numerical digital output of the shock, or negative edge, location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface, including serial data (RS-232/485, USB, Ethernet, or CAN bus), parallel digital data, or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
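    The dip-finding step described above can be sketched in a few lines. This is an illustrative reconstruction (the actual system does this in analog and digital hardware), and the threshold value is an assumption:

    ```python
    import numpy as np

    def shock_location(profile, dip_threshold=0.5):
        """Locate the shock shadowgraph as the deepest dip in a 1-D
        line-scan intensity profile. The dip must fall below a fraction
        of the profile's median to count; values are illustrative."""
        baseline = np.median(profile)
        idx = int(np.argmin(profile))
        if profile[idx] < baseline * dip_threshold:
            return idx    # pixel index of the negative peak
        return None       # no sufficiently deep dip found

    # A synthetic laser-sheet profile with a shadow at pixel 300
    profile = np.full(1024, 200.0)
    profile[300] = 40.0
    loc = shock_location(profile)
    ```

    In the hardware version, the same comparison runs per pixel as the line scans out, which is what enables the >1,000 fps processing rate.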

  4. 11. INTERIOR VIEW OF 8FOOT HIGH SPEED WIND TUNNEL. SAME ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. INTERIOR VIEW OF 8-FOOT HIGH SPEED WIND TUNNEL. SAME CAMERA POSITION AS VA-118-B-10 LOOKING IN THE OPPOSITE DIRECTION. - NASA Langley Research Center, 8-Foot High Speed Wind Tunnel, 641 Thornell Avenue, Hampton, Hampton, VA

  5. New concept high-speed and high-resolution color scanner

    NASA Astrophysics Data System (ADS)

    Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya

    2003-05-01

    We have developed a new-concept high-speed, high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image-processing technology, approximately 12 million pixels of image data can be captured. This high-resolution imaging capability enables various uses such as OCR, color document reading, and document camera applications. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed face up on its scan stage, without any special illumination. Because a high-resolution color document can be input into a PC quickly and easily, a paperless system can be built with little effort. The unit is small and occupies little area, so it can be placed on an individual's desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner, with high-speed processing. Several hundred Blinkscan units have already shipped, mainly for reception operations in banking and securities. We will present the high-speed and high-resolution architecture of Blinkscan, compare its operation time with conventional image-capture devices to make its advantages clear, and evaluate image quality across a variety of environmental conditions, such as geometric distortion and non-uniformity of brightness.

  6. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market, with different dimensions, speeds, and accuracies. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin colors, tones, textures, and shapes, and across ambient lighting conditions, are crucial. Until now, however, a systematic approach for evaluating the performance of different 3D surface imaging systems has not existed. In this paper, we present a systematic performance-assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.

  7. Cranz-Schardin camera with a large working distance for the observation of small scale high-speed flows.

    PubMed

    Skupsch, C; Chaves, H; Brücker, C

    2011-08-01

    The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single CCD cameras. The laser provides light pulses with energies in the range of 25 mJ and durations of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescent dye in a cuvette; the laser beam's coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination. Different fiber lengths introduce a delay of illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 × 10^6 frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed. The initial phase of the formation of the jet structure is recorded.
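    The fiber-length delays behind the stated 25 ns interframe time follow from the transit time of light in fiber, t = L·n/c. A rough sketch (n = 1.46 is an assumed refractive index for silica fiber, not a value from the paper):

    ```python
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def extra_fiber_length_m(delay_s, n=1.46):
        """Extra fiber length giving a chosen optical delay: L = c*t/n."""
        return C * delay_s / n

    step = extra_fiber_length_m(25e-9)  # roughly 5.1 m per 25 ns frame step
    ```

    So each successive illumination fiber would be on the order of five meters longer than the previous one, a convenient way to generate fixed sub-microsecond delays without electronics.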

  8. Mach-zehnder based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High speed recording devices are configured to record image or other data defining a high speed event. To calibrate and establish time reference, the markers or combs are indicia which serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher frequency optical signal which is output through a fiber coupled link to the streak camera.

  9. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  10. Synchronization trigger control system for flow visualization

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1987-01-01

    The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light-source timing at a predefined shaft-encoder angle. The device is capable of 0.35 deg resolution at rotational speeds of up to 73 240 rpm. This was achieved by implementing a look-up table (LUT) addressed by the shaft-encoder signal, together with appropriate latches. The digital signal-processing technique achieves high-speed triggering-angle detection within 25 nsec by direct parallel bit comparison of the shaft-encoder digital code with a simulated angle reference code, instead of angle-value comparison, which involves more complicated computation steps. In order to establish synchronization to an AC reference signal whose magnitude varies with rotating speed, a dynamic peak follow-up synchronization technique has been devised. This method scrutinizes the reference signal and provides the correct timing within 40 nsec. Two application examples are described.
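    The LUT approach described above can be mimicked in software: trigger angles are precomputed into a table indexed by the raw encoder code, so each encoder sample needs only one lookup (the hardware analogue is a single parallel bit comparison). The 10-bit width is an assumption consistent with the stated 0.35-degree resolution (360/1024 ≈ 0.35); this is an illustrative sketch, not the original device's logic.

    ```python
    def build_trigger_lut(trigger_angles_deg, bits=10):
        """Precompute a boolean table indexed by the raw shaft-encoder
        code; True entries mark angles at which to fire camera/light."""
        counts = 1 << bits
        lut = [False] * counts
        for angle in trigger_angles_deg:
            lut[int(round(angle / 360.0 * counts)) % counts] = True
        return lut

    lut = build_trigger_lut([90.0])          # fire at 90 degrees
    code = int(round(90.0 / 360.0 * 1024))   # simulated encoder reading
    fire = lut[code]                         # one lookup per encoder sample
    ```

    Precomputing the comparison is what removes the per-sample angle arithmetic that the paper says would otherwise slow triggering.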

  11. High-Speed Videography Overview

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1989-02-01

    The field of high-speed videography (HSV) has continued to mature in recent years, due to the introduction of a mixture of new technology and extensions of existing technology. Recent low frame-rate innovations have the potential to dramatically expand the areas of information gathering and motion analysis at all frame rates. Progress at the zero frame-rate end is bringing the battle of film versus video to the field of still photography. The pressure to push intermediate frame rates higher continues, although the maximum achievable frame rate has remained stable for several years. Higher maximum recording rates appear technologically practical, but economic factors impose severe limitations on development. The application of diverse photographic techniques to video-based systems is under-exploited. The basics of HSV apply to other fields, such as machine vision and robotics. Present motion analysis systems continue to function mainly as an instant-replay replacement for high-speed movie film cameras. The interrelationship among lighting, shuttering, and spatial resolution is examined.

  12. High-Speed Rainbow Schlieren Deflectometry Analysis of Helium Jets Flowing into Air for Microgravity Applications

    NASA Technical Reports Server (NTRS)

    Leptuch, Peter A.

    2002-01-01

    The flow phenomena of buoyant jets have been analyzed by many researchers in recent years. Few, however, have studied jets in microgravity conditions, and the exact nature of the flow under these conditions has until recently been unknown. This study seeks to extend the work done by researchers at the University of Oklahoma in examining and documenting the behavior of helium jets in microgravity conditions. Quantitative rainbow schlieren deflectometry data have been obtained for helium jets discharging vertically into quiescent ambient air from tubes of several diameters at various flow rates, using a high-speed digital camera. These data were obtained before, during, and after the onset of microgravity conditions. High-speed rainbow schlieren deflectometry was developed for this study through the installation and use of a high-speed digital camera and modifications to the optical setup. Higher temporal resolution of the transition between terrestrial and microgravity conditions has been obtained, which has reduced the averaging effect of the longer exposure times used in all previous schlieren studies. Results include color schlieren images, color time-space (temporal evolution) images, frequency analyses, contour plots of hue, and contour plots of helium mole fraction. The results, which focus primarily on the periods before and during the onset of microgravity conditions, show that the pulsation of the jets normally found in terrestrial ("earth"-gravity) conditions ceases, and the gradients in helium diminish to produce a widening of the jet in microgravity conditions. In addition, the results show that disturbances propagate upstream from a downstream source.

  13. First high speed imaging of lightning from summer thunderstorms over India: Preliminary results based on amateur recording using a digital camera

    NASA Astrophysics Data System (ADS)

    Narayanan, V. L.

    2017-12-01

    For the first time, high-speed imaging of lightning from a few isolated tropical thunderstorms has been observed from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during the summer months with a digital camera capable of recording high-speed video at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14,400 deinterlaced images per video file. An automatic processing algorithm has been developed for quick identification and analysis of the lightning events, which will be discussed in detail. Preliminary results indicating different types of phenomena associated with lightning, such as stepped leaders, dart leaders, and luminous channels corresponding to continuing currents and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud, and cloud-air discharges will also be displayed. This indicates that though high-speed cameras at a few thousand fps are preferred for detailed study of lightning, digital cameras with moderate-range CMOS sensors can provide important information as well. The lightning imaging activity presented herein was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments for coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings, made with affordable digital cameras, will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.

  14. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for capturing aerial imagery with consumer-grade digital single-lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image-capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, namely EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image-capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed image exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in a given application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO. The single most important capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
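    An apparent-image-motion estimate of the kind mentioned above reduces to the ground distance covered during the exposure divided by the ground sample distance. A minimal sketch with illustrative numbers (not values from the study):

    ```python
    def aim_blur_pixels(ground_speed_m_s, shutter_s, gsd_m):
        """Apparent image motion (AIM) expressed as blur in pixels:
        distance the scene moves during the exposure, divided by the
        ground sample distance (meters per pixel)."""
        return ground_speed_m_s * shutter_s / gsd_m

    # e.g. a 30 m/s platform, 1/1000 s shutter, 2 cm GSD -> 1.5 px of blur
    blur = aim_blur_pixels(30.0, 1.0 / 1000.0, 0.02)
    ```

    Holding this value below about one pixel is a common rule of thumb, which is why a faster shutter (and hence a wider aperture or higher ISO) trades off against the radiometric settings discussed above.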

  15. Lightdrum—Portable Light Stage for Accurate BTF Measurement on Site

    PubMed Central

    Havran, Vlastimil; Hošek, Jan; Němcová, Šárka; Čáp, Jiří; Bittner, Jiří

    2017-01-01

    We propose a miniaturised light stage for measuring the bidirectional reflectance distribution function (BRDF) and the bidirectional texture function (BTF) of surfaces on site in real-world application scenarios. The main principle of our lightweight BTF acquisition gantry is a compact hemispherical skeleton with cameras along the meridian and with light-emitting diode (LED) modules shining light onto a sample surface. The proposed device is portable and achieves a high speed of measurement while maintaining a high degree of accuracy. While the positions of the LEDs are fixed on the hemisphere, the cameras allow us to cover the zenith-angle range from 0° to 75°, and by rotating the cameras about the axis of the hemisphere we can cover all possible camera directions. This allows us to take measurements with almost the same quality as existing stationary BTF gantries. Two degrees of freedom can be set arbitrarily for measurements and the other two are fixed, which provides a tradeoff between measurement accuracy and practical applicability. Assuming that a measured sample is locally flat and spatially accessible, we can set the correct perpendicular direction against the measured sample by means of an auto-collimator prior to measuring. Further, we have designed and used a marker-sticker method to allow easy rectification and alignment of the acquired images during data processing. We show the results of our approach with images rendered from 36 measured material samples. PMID:28241466

  16. Industrial X-Ray Imaging

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In 1990, Lewis Research Center jointly sponsored a conference with the U.S. Air Force Wright Laboratory focused on high speed imaging. This conference, and early funding by Lewis Research Center, helped to spur work by Silicon Mountain Design, Inc. to break the performance barriers of imaging speed, resolution, and sensitivity through innovative technology. Later, under a Small Business Innovation Research contract with the Jet Propulsion Laboratory, the company designed a real-time image enhancing camera that yields superb, high quality images in 1/30th of a second while limiting distortion. The result is a rapidly available, enhanced image showing significantly greater detail compared to image processing executed on digital computers. Current applications include radiographic and pathology-based medicine, industrial imaging, x-ray inspection devices, and automated semiconductor inspection equipment.

  17. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1993-01-01

    This investigation has evaluated the voice-commanded camera control concept. For this particular task, total voice control of continuous and discrete camera functions was significantly slower than manual control. There was no significant difference between voice and manual input for several types of errors. There was not a clear trend in subjective preference of camera command input modality. Task performance, in terms of both accuracy and speed, was very similar across both levels of experience.

  18. PtSi gimbal-based FLIR for airborne applications

    NASA Astrophysics Data System (ADS)

    Wallace, Joseph; Ornstein, Itzhak; Nezri, M.; Fryd, Y.; Bloomberg, Steve; Beem, S.; Bibi, B.; Hem, S.; Perna, Steve N.; Tower, John R.; Lang, Frank B.; Villani, Thomas S.; McCarthy, D. R.; Stabile, Paul J.

    1997-08-01

    A new gimbal-based FLIR camera for several types of airborne platforms has been developed. The FLIR is based on PtSi-on-silicon technology, developed for high volume and minimum cost. The gimbal scans an area of 360 degrees in azimuth and an elevation range of +15 degrees to -105 degrees. It is stabilized to 25 μrad rms. A combination of uniformity correction, defect substitution, and compact optics results in a long-range, low-cost FLIR for all low-speed airborne platforms.

  19. Boundary-Layer Transition Detection in Cryogenic Wind Tunnel Using Fluorescent Paints

    NASA Technical Reports Server (NTRS)

    Sullivan, John

    1999-01-01

    Luminescent molecular probes embedded in a polymer binder form a temperature- or pressure-sensitive paint. On excitation by light of the proper wavelength, the luminescence, which is quenched either thermally or by oxygen, is detected by a camera or photodetector. From the detected luminescent intensity, temperature and pressure can be determined. The basic photophysics, calibration, accuracy, and time response of luminescent paints are described, followed by applications in low-speed, transonic, supersonic, and cryogenic wind tunnels and in rotating machinery.
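    Pressure-sensitive paint data of this kind are commonly reduced with a Stern-Volmer-type calibration, I_ref/I = A + B·(P/P_ref). A minimal sketch of inverting that relation; the coefficients A and B here are placeholders (in practice they come from calibration and depend on temperature):

    ```python
    def psp_pressure(i_ref_over_i, p_ref, a=0.2, b=0.8):
        """Invert a Stern-Volmer-type pressure-paint calibration:
        I_ref/I = A + B * (P/P_ref)  ->  P = P_ref * (I_ref/I - A) / B.
        a and b are placeholder calibration coefficients."""
        return p_ref * (i_ref_over_i - a) / b

    # At reference conditions I_ref/I = A + B = 1, so P recovers P_ref
    p = psp_pressure(1.0, 101_325.0)
    ```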

  20. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescence lifetime imaging is an optical technique that facilitates imaging of molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited in acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition-rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography allows an entire fluorescence lifetime image to be acquired with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we demonstrate the ability of this technique to perform single-shot fluorescence lifetime imaging of cells and microspheres.
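
The CUP encoding described above (a static pseudo-random binary mask followed by a temporal shear that maps time onto a spatial axis of a single readout image) can be sketched as a toy forward model. This is a minimal NumPy illustration with the scene, mask, and dimensions all invented for the example; the compressed sensing reconstruction step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic scene: T frames of a bright spot moving along x.
T, H, W = 8, 16, 16
scene = np.zeros((T, H, W))
for t in range(T):
    scene[t, H // 2, t] = 1.0

# Static pseudo-random binary mask encodes every frame identically.
mask = rng.integers(0, 2, size=(H, W)).astype(float)
coded = scene * mask  # broadcast over the time axis

# Streak-camera shearing: frame t is shifted down t rows before
# integration, so time is mapped onto the row axis of one snapshot.
readout = np.zeros((H + T, W))
for t in range(T):
    readout[t:t + H, :] += coded[t]

print(readout.shape)  # (24, 16): one 2-D image encodes all T frames
```

Inverting this model (recovering the (x, y, t) cube from `readout`, `mask`, and the known shear) is the compressed sensing step performed by the reconstruction algorithm.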

  1. Explore Galaxies Far, Far Away at Internet Speeds | Berkeley Lab

    Science.gov Websites

    Images for the Dark Energy Camera Legacy Survey (DECaLS) were taken by the 520-megapixel Dark Energy Survey Camera (DECam). Shown: the galaxy UGC 10041, imaged by DECaLS. Credit: Dustin Lang/University of Toronto

  2. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera, and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
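
The multi-hit matching step described above (pairing camera spots with TOF peaks by amplitude) can be sketched as follows. This is a toy illustration with invented spot and peak values, assuming only the paper's premise that a brighter phosphor spot corresponds to a taller MCP pulse.

```python
import numpy as np

# Hypothetical inputs for one laser shot: spot intensities and centroids
# from the camera frame, and peak heights/times from the digitizer trace.
spot_intensity = np.array([120.0, 80.0, 55.0])
spot_xy = np.array([[10.2, 33.1], [54.7, 8.9], [40.0, 41.5]])
tof_peak_height = np.array([0.52, 0.91, 0.40])   # volts
tof_peak_time = np.array([812.0, 795.5, 830.2])  # ns

# Rank both lists by amplitude and pair them: the brightest spot is
# matched to the tallest MCP pulse, and so on down the ranking.
cam_order = np.argsort(-spot_intensity)
tof_order = np.argsort(-tof_peak_height)

for c, t in zip(cam_order, tof_order):
    x, y = spot_xy[c]
    print(f"hit at ({x:.1f}, {y:.1f}) -> arrival {tof_peak_time[t]:.1f} ns")
```

Each printed line is one reconstructed hit carrying both position (from the camera) and arrival time (from the digitizer).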

  3. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.

  4. Synchronous high speed multi-point velocity profile measurement by heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Hou, Xueqin; Xiao, Wen; Chen, Zonghui; Qin, Xiaodong; Pan, Feng

    2017-02-01

    This paper presents a synchronous multipoint velocity profile measurement system, which acquires the vibration velocities as well as images of vibrating objects by combining optical heterodyne interferometry and a high-speed CMOS-DVR camera. The high-speed CMOS-DVR camera records a sequence of images of the vibrating object. Then, by extracting and processing multiple pixels at the same time, a digital demodulation technique is implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. This method is validated with an experiment. A piezoelectric ceramic plate with standard vibration characteristics is used as the vibrating target, which is driven by a standard sinusoidal signal.

  5. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    NASA Astrophysics Data System (ADS)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role in non-destructive investigations of dynamics in materials science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated into a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger, which records the temporal evolution of physical events at increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.

  6. Three-dimensional device characterization by high-speed cinematography

    NASA Astrophysics Data System (ADS)

    Maier, Claus; Hofer, Eberhard P.

    2001-10-01

    Testing of micro-electro-mechanical systems (MEMS) for optimization purposes or reliability checks can be supported by device visualization whenever optical access is available. The difficulty in such an investigation is the short duration of dynamic phenomena in micro devices. This paper presents a test setup to visualize movements within MEMS in real time and in two perpendicular directions. A three-dimensional view is achieved by combining a commercial high-speed camera system, which can take up to 8 images of the same process with a minimum interframe time of 10 ns, for the first direction, with a second visualization system consisting of a highly sensitive CCD camera working with multiple-exposure LED illumination in the perpendicular direction. Well synchronized, the two views provide 3-D information, which is treated by digital image processing to correct image distortions and to detect object contours. Symmetric and asymmetric binary collisions of micro drops are chosen as test experiments, featuring coalescence and surface rupture. Another application shown here is the investigation of sprays produced by an atomizer; the second direction of view is a prerequisite for this measurement to select an intended plane of focus.

  7. Shock wave driven microparticles for pharmaceutical applications

    NASA Astrophysics Data System (ADS)

    Menezes, V.; Takayama, K.; Gojani, A.; Hosseini, S. H. R.

    2008-10-01

    Ablation created by a Q-switched Nd:yttrium aluminum garnet (Nd:YAG) laser beam focused on a thin aluminum foil surface spontaneously generates a shock wave that propagates through the foil and deforms it at high speed. This high-speed foil deformation can project dry microparticles deposited on the anterior surface of the foil at velocities high enough that the particles have sufficient momentum to penetrate soft targets. We used this method of particle acceleration to develop a device that delivers DNA/drug-coated microparticles into soft human-body targets for pharmaceutical applications. The device physics has been studied by observing the particle acceleration process with a high-speed video camera in a shadowgraph system. Although the initial rate of foil deformation is over 5 km/s, the observed particle velocities are in the range of 900-400 m/s over a distance of 1.5-10 mm from the launch pad. The device has been tested by delivering microparticles into liver tissues of experimental rats and into artificial soft human-body targets modeled using gelatin. The penetration depths observed in the experimental targets are quite encouraging for the development of a future clinical therapeutic device for treatments such as gene therapy, treatment of cancer and tumor cells, and epidermal and mucosal immunization.

  8. Work zone speed reduction utilizing dynamic speed signs

    DOT National Transportation Integrated Search

    2011-08-30

    Vast quantities of transportation data are automatically recorded by intelligent transportations infrastructure, such as inductive loop detectors, video cameras, and side-fire radar devices. Such devices are typically deployed by traffic management c...

  9. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed, with severe occlusions happening frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is thus a challenging object-tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  10. Influence of Wind Speed on RGB-D Images in Tree Plantations

    PubMed Central

    Andújar, Dionisio; Dorado, José; Bengochea-Guevara, José María; Conesa-Muñoz, Jesús; Fernández-Quintanilla, César; Ribeiro, Ángela

    2017-01-01

    Weather conditions can affect sensors’ readings when sampling outdoors. Although sensors are usually set up covering a wide range of conditions, their operational range must be established. In recent years, depth cameras have been shown to be a promising tool for plant phenotyping and other related uses. However, the use of these devices is still challenged by prevailing field conditions. Although the influence of lighting conditions on the performance of these cameras has already been established, the effect of wind is still unknown. This study establishes the associated errors when modeling some tree characteristics at different wind speeds. A system using a Kinect v2 sensor and custom software was tested from null wind speed up to 10 m·s−1. Two tree species with contrasting architecture, poplars and plums, were used as model plants. The results showed different responses depending on tree species and wind speed. Estimations of Leaf Area (LA) and tree volume were generally more consistent at high wind speeds in plum trees. Poplars were particularly affected by wind speeds higher than 5 m·s−1. In contrast, height measurements were more consistent for poplars than for plum trees. These results show that the use of depth cameras for tree characterization must take wind conditions in the field into consideration. In general, 5 m·s−1 (18 km·h−1) could be established as a conservative limit for good estimations. PMID:28430119

  11. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high cost involved in constructing such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency-modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large-frame and high-speed acquisition. © 2015 Wiley Periodicals, Inc.
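
The phasor approach mentioned above computes, for each pixel, the cosine and sine Fourier coefficients of the phase-stepped image stack at the fundamental modulation frequency. A minimal sketch of that per-pixel calculation on a simulated stack (all values invented for the example, not the SimFCS implementation):

```python
import numpy as np

# Hypothetical stack: K phase-shifted images from the modulated sensor.
K, H, W = 8, 4, 4
phases = 2 * np.pi * np.arange(K) / K

# Simulate a uniform response with modulation depth m and phase phi.
m_true, phi_true = 0.4, 0.6
stack = 1.0 + m_true * np.cos(phases - phi_true)
stack = stack[:, None, None] * np.ones((K, H, W))

# Phasor coordinates at the fundamental frequency, per pixel.
g = (stack * np.cos(phases)[:, None, None]).sum(0) / stack.sum(0)
s = (stack * np.sin(phases)[:, None, None]).sum(0) / stack.sum(0)

phi = np.arctan2(s, g)   # recovered phase (lifetime-dependent)
m = 2 * np.hypot(g, s)   # recovered modulation depth
print(round(float(phi[0, 0]), 3), round(float(m[0, 0]), 3))  # recovers 0.6, 0.4
```

In practice the per-pixel calibration described in the abstract would be applied to `g` and `s` before converting phase and modulation into lifetimes.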

  12. Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi

    1997-04-01

    (Pb0.91La0.09)(Zr0.65Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. The shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If the shutter is applied to the diaphragm of a video camera, it could protect the sensor from intense light. We tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolution. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10³. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 µs. When the PLZT shutter was placed in front of the visible video-camera lens, the MTF was reduced by only 12 percent at a spatial frequency of 38 cycles/mm, the sensor resolution of the video camera. Moreover, we took visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was totally shut out in the 'OFF' state. From these tests, the PLZT shutter has been found to be useful as a diaphragm for visible video cameras. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.

  13. Developments in TurboBrayton Technology for Low Temperature Applications

    NASA Technical Reports Server (NTRS)

    Swift, W. L.; Zagarola, M. V.; Nellis, G. F.; McCormick, J. A.; Gibbon, Judy

    1999-01-01

    A single stage reverse Brayton cryocooler using miniature high-speed turbomachines recently completed a successful space shuttle test flight demonstrating its capabilities for use in cooling the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) on the Hubble Space Telescope (HST). The NICMOS CryoCooler (NCC) is designed for a cooling load of about 8 W at 65 K, and comprises a closed loop cryocooler coupled to an independent cryogenic circulating loop. Future space applications involve instruments that will require 5 mW to 200 mW of cooling at temperatures between 4 K and 10 K. This paper discusses the extension of Turbo-Brayton technology to meet these requirements.

  14. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to an already powerful CPU and memory, an embedded high-speed, high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.
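
The core of such vision-based displacement measurement is tracking the target's centroid in each frame and converting pixels to physical units via the known target size. The sketch below is a toy grayscale illustration of that idea, not the RINO implementation; the target size, reference position, and frame contents are all invented.

```python
import numpy as np

# Hypothetical frame: a bright target on a dark background (grayscale toy).
frame = np.zeros((120, 160))
frame[40:60, 70:90] = 255.0  # target spans 20 pixels in each direction

# Centroid of the thresholded target.
ys, xs = np.nonzero(frame > 128)
cx, cy = xs.mean(), ys.mean()

# Pixel-to-mm scale: the physical target width (assumed 50 mm) spans
# the observed pixel extent of the target.
mm_per_px = 50.0 / (xs.max() - xs.min() + 1)

# Displacement relative to a (hypothetical) reference centroid from frame 0.
ref = (60.0, 50.0)  # reference position in pixels
dx_mm = (cx - ref[0]) * mm_per_px
dy_mm = (cy - ref[1]) * mm_per_px
print(dx_mm, dy_mm)
```

Running this per frame at the camera's frame rate yields the dynamic displacement time history.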

  15. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.

  16. Exploding Balloons, Deformed Balls, Strange Reflections and Breaking Rods: Slow Motion Analysis of Selected Hands-On Experiments

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2011-01-01

    A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or video cameras to properly observe and analyse the phenomena, is presented. They are recorded and analysed using modern high speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…

  17. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have produced higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial we conducted in August 2015.

  18. Time-lapse photogrammetry in geomorphic studies

    NASA Astrophysics Data System (ADS)

    Eltner, Anette; Kaiser, Andreas

    2017-04-01

    Image-based approaches to reconstructing the earth surface (Structure from Motion - SfM) are becoming established as a standard technology for high-resolution topographic data, due among other advantages to the comparative ease of use and flexibility of data generation. Furthermore, the increased spatial resolution has led to its implementation in a vast range of applications from sub-mm to tens-of-km scale. Almost fully automatic calculation of referenced digital elevation models also allows for a significant increase of temporal resolution, potentially up to sub-second scales. Thereby, the setup of a time-lapse multi-camera system is necessary and different aspects need to be considered: The camera array has to be temporarily stable, or potential movements need to be compensated by temporarily stable reference targets/areas. The stability of the internal camera geometry has to be considered because there is usually a significantly lower number of images of the scene, and thus less redundancy for parameter estimation, compared to more common SfM applications. Depending on the speed of surface change, synchronisation has to be very accurate. Because such systems are usually applied in the field, changing environmental conditions important for lighting and visual range are also crucial factors to keep in mind. Besides these important considerations, time-lapse photogrammetry holds much potential. The integration of multi-sensor systems, e.g. using thermal cameras, enables the detection of processes not visible in RGB images alone. Furthermore, the implementation of low-cost sensors allows for a significant increase of areal coverage and their setup at locations where a loss of the system cannot be ruled out. The usage of micro-computers offers smart camera triggering, e.g. acquiring images at increased frequency controlled by a rainfall-triggered sensor. In addition, these micro-computers can enable on-site data processing, e.g. recognition of increased surface movement, and thus might be used as a warning system in the case of natural hazards. A large variety of applications are suitable for time-lapse photogrammetry, i.e. change detection of all sorts, e.g. volumetric alterations, movement tracking or roughness changes. The multi-camera systems can be used for slope investigations, soil studies, glacier observation, snow cover measurement, volcanic surveillance or plant growth monitoring. A conceptual workflow is introduced highlighting the limits and potentials of time-lapse photogrammetry.

  19. A user-friendly technical set-up for infrared photography of forensic findings.

    PubMed

    Rost, Thomas; Kalberer, Nicole; Scheurer, Eva

    2017-09-01

    Infrared photography is interesting for use in forensic science and forensic medicine since it reveals findings that are normally almost invisible to the human eye. Originally, infrared photography was made possible by placing an infrared light transmission filter in front of the camera objective lens. However, this set-up is associated with many drawbacks, such as the loss of the autofocus function, the need for an external infrared source, and long exposure times that make the use of a tripod necessary. These limitations have so far prevented the routine application of infrared photography in forensics. In this study, the use of a professional modification inside the digital camera body was evaluated regarding camera handling and image quality. This permanent modification consisted of replacing the built-in infrared blocking filter with an infrared transmission filter of 700nm and 830nm, respectively. The application of this camera set-up for the photo-documentation of forensically relevant post-mortem findings was investigated in examples of trace evidence such as gunshot residues on the skin, in external findings, e.g. hematomas, as well as in an exemplary internal finding, i.e., Wischnewski spots in a putrefied stomach. The application of scattered light created by indirect flashlight yielded a more uniform illumination of the object, and the use of the 700nm filter resulted in better pictures than the 830nm filter. Compared to pictures taken under visible light, infrared photographs generally yielded better contrast. This allowed for discerning more details and revealed findings which were not visible otherwise, such as imprints on a fabric and tattoos in mummified skin. The permanent modification of a digital camera by building in a 700nm infrared transmission filter resulted in a user-friendly and efficient set-up suitable for use in daily forensic routine. The main advantages were a clear picture in the viewfinder, autofocus usable over the whole range of infrared light, and the possibility of using short shutter speeds, which allows taking infrared pictures free-hand. The proposed set-up with a modified camera allows a user-friendly application of infrared photography in post-mortem settings. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network is a network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we distribute the processing tasks among nodes. In this method, each sensor node executes part of the interpolation algorithm to generate the interpolated image, with local communication between nodes. The processing task is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods were proposed for distributing the processing tasks, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), which share image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that FIS-DP has the highest processing speed, followed by PIS-DP, with CP the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.

  1. 3-D Velocimetry of Strombolian Explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.

    2014-12-01

    Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.
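
The 3-D reconstruction from two synchronized views amounts to triangulating each tracked bomb from two viewing rays of known origin and direction. A minimal sketch (camera geometry and target position invented for the example), using the standard midpoint of the shortest segment between the two rays:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    p: camera position, d: direction from that camera toward the target.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two hypothetical cameras 60 m apart, both sighting a pyroclast at (10, 20, 30).
target = np.array([10.0, 20.0, 30.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([60.0, 0.0, 0.0])
point = triangulate(c1, target - c1, c2, target - c2)
print(np.round(point, 6))  # recovers the target position
```

In practice the ray directions come from the pixel coordinates of the manually tracked bomb plus the calibrated field of view of each camera, and differencing positions across frames gives the 3-D velocity field.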

  2. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation data, which affect space camera imaging, have been prepared, and their impact on imaging has been analyzed. Based on the image-motion velocity matching error required by the TDI CCD integration stages, a statistical experimental method, the Monte Carlo method, is used to calculate the distribution histogram of the Earth's elevation in an image-motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude, and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data, and the Earth elevation map produced for aerospace electronic cameras is compressed and spliced. Elevation data are retrieved from flash memory according to the latitude and longitude of the imaging point; when the point falls between two stored values, linear interpolation is used, which better accommodates the changes of rugged mountains and hills. Finally, a deviation framework and the camera controller are used to test the behavior of deviation-angle errors. A TDI CCD camera simulation system, with a model mapping material points to imaging points, is used to analyze the imaging MTF and a cross-correlation similarity measure; the simulation system accumulates the horizontal and vertical offsets by which TDI CCD imaging exceeds the corresponding pixel to simulate camera imaging as satellite attitude stability changes. This process is practical: it can effectively control the camera memory space and achieve very good precision for a TDI CCD camera in matching the image-motion speed during imaging.
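
The elevation lookup with linear interpolation between stored grid points can be sketched in one dimension as follows; the grid latitudes and elevations are invented for the example, standing in for the compressed SRTM-derived map in flash memory.

```python
import numpy as np

# Hypothetical elevation grid (stored in flash in the real system).
lats = np.array([30.0, 31.0, 32.0])     # grid latitudes (deg)
elev = np.array([500.0, 800.0, 650.0])  # elevation at each grid point (m)

def elevation_at(lat):
    """Linear interpolation between the two grid points bracketing lat."""
    i = np.searchsorted(lats, lat) - 1
    i = int(np.clip(i, 0, len(lats) - 2))
    frac = (lat - lats[i]) / (lats[i + 1] - lats[i])
    return (1 - frac) * elev[i] + frac * elev[i + 1]

print(elevation_at(30.5))  # halfway between 500 and 800 -> 650.0
```

A real implementation would interpolate in both latitude and longitude (bilinear), but the bracketing-and-weighting logic is the same per axis.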

  3. Study of plastic strain localization mechanisms caused by nonequilibrium transitions in mesodefect ensembles under high-speed loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokovikov, Mikhail, E-mail: sokovikov@icmm.ru; Chudinov, Vasiliy; Bilalov, Dmitry

    2015-10-27

    The behavior of specimens dynamically loaded during split Hopkinson (Kolsky) bar tests in a regime close to simple shear conditions was studied. The lateral surface of the specimens was investigated in-situ using a CEDIP Silver 450M high-speed infrared camera. The temperature field distribution obtained at different times allowed one to trace the evolution of plastic strain localization. The process of target perforation involving plug formation and ejection was examined using a high-speed infrared camera and a VISAR velocity measurement system. The microstructure of tested specimens was analyzed using an optical interferometer-profiler and a scanning electron microscope. The development of plastic shear instability regions has been simulated numerically.

  4. Relativistic Astronomy

    NASA Astrophysics Data System (ADS)

    Zhang, Bing; Li, Kunyang

    2018-02-01

    The “Breakthrough Starshot” aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to the relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects using a transrelativistic camera may allow one to study the astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples that suggest transrelativistic cameras may make important contributions to astrophysics and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
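    The spectrograph/lens/wide-field behavior follows from the standard relativistic aberration and Doppler formulas; a small sketch (not the authors' simulation code) shows how a ray arriving at 60° from the flight direction is both pulled toward the forward direction and blueshifted at v = 0.5c:

```python
import math

def observed(theta_rest, wavelength_rest, beta):
    """Aberration and Doppler shift seen by a camera moving with
    speed beta (v/c) toward the theta = 0 direction.

    theta_rest: source angle from the flight direction in the
                rest frame (radians).
    Returns (theta_cam, wavelength_cam).
    """
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    # Relativistic aberration: angles crowd toward the flight direction.
    cos_t = (math.cos(theta_rest) + beta) / (1.0 + beta * math.cos(theta_rest))
    theta_cam = math.acos(cos_t)
    # Doppler factor at the aberrated angle: light ahead is blueshifted.
    doppler = 1.0 / (gamma * (1.0 - beta * cos_t))
    return theta_cam, wavelength_rest / doppler

# Red light (650 nm) at 60 degrees, camera at half the speed of light:
theta, lam = observed(math.radians(60), 650.0, 0.5)
# the ray appears at ~36.9 degrees and near 450 nm (blue)
```

    The same two effects, applied pixel by pixel, are what turn a simulated rest-frame image of M51 into the compressed, blueshifted view in the camera frame.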

  5. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the circular image of an entire hemispherical FOV produced by a fisheye lens can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different magnifications and pan/tilt/rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
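    The record does not give the transformation equations; a sketch under the common equidistant fisheye model (r = f·θ) illustrates how one pixel of a virtual perspective view can be mapped back to the circular fisheye image. The model and every parameter here are assumptions, not the patented transform:

```python
import math

def persp_to_fisheye(u, v, pan, tilt, f_persp, f_fish, cx, cy):
    """Map pixel (u, v) of a virtual perspective view (pan/tilt in
    radians, focal length f_persp in pixels) to coordinates in a
    circular fisheye image, assuming the equidistant model
    r = f_fish * theta centered at (cx, cy)."""
    # Ray of the virtual view in camera coordinates (z = optical axis).
    x, y, z = float(u), float(v), float(f_persp)
    # Rotate by tilt about x, then by pan about y.
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    x, z = (x * math.cos(pan) + z * math.sin(pan),
            -x * math.sin(pan) + z * math.cos(pan))
    # Angle from the fisheye optical axis and azimuth around it.
    theta = math.atan2(math.hypot(x, y), z)
    phi = math.atan2(y, x)
    r = f_fish * theta                      # equidistant projection
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# Center pixel of an un-rotated view maps to the fisheye image center.
px, py = persp_to_fisheye(0, 0, 0.0, 0.0, 500, 300, 512, 512)
```

    Evaluating this mapping for every output pixel (with interpolation in the source image) is the dewarping that the device performs in hardware at video rates.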

  6. Bio-inspired motion detection in an FPGA-based smart camera module.

    PubMed

    Köhler, T; Röchter, F; Lindemann, J P; Möller, R

    2009-03-01

    Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
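    A correlation-type EMD of the kind the module arrays can be sketched in a few lines. This is a scalar illustration of the classic Reichardt correlate-and-subtract scheme, not the FPGA implementation; the filter constant and stimulus are arbitrary:

```python
import math

def reichardt(a, b, alpha=0.2):
    """Correlation-type elementary motion detector.

    a, b: photoreceptor signal sequences from two neighbouring
          receptors (b lies in the preferred direction from a).
    alpha: coefficient of the first-order low-pass (delay) filter.
    Returns the time-averaged EMD output: positive for motion
    from a toward b, negative for the opposite direction.
    """
    la = lb = 0.0
    out = []
    for sa, sb in zip(a, b):
        la += alpha * (sa - la)        # delayed copy of a
        lb += alpha * (sb - lb)        # delayed copy of b
        out.append(la * sb - lb * sa)  # correlate and subtract
    return sum(out[len(out) // 2:]) / (len(out) // 2)  # skip transient

# A grating drifting in the preferred direction: b lags a by 45 deg.
t = [0.05 * k for k in range(2000)]
a = [math.sin(2 * math.pi * x) for x in t]
b = [math.sin(2 * math.pi * x - math.pi / 4) for x in t]
r_pref = reichardt(a, b)   # positive
r_null = reichardt(b, a)   # same stimulus, opposite wiring: negative
```

    Summing thousands of such detectors with different orientations into a few "flow detector cells" is what lets the module classify translation, rotation, and looming fields.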

  7. A high-speed trapezoid image sensor design for continuous traffic monitoring at signalized intersection approaches.

    DOT National Transportation Integrated Search

    2014-10-01

    The goal of this project is to monitor traffic flow continuously with an innovative camera system composed of a custom-designed image sensor integrated circuit (IC) containing a trapezoid pixel array and a camera system that is capable of intelligent...

  8. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  9. Investigations of Section Speed on Rural Roads in Podlaskie Voivodeship

    NASA Astrophysics Data System (ADS)

    Ziolkowski, Robert

    2017-10-01

    Excessive speed is one of the most important factors in road safety: it not only affects the severity of a crash but is also related to the risk of being involved in one. In Poland speeding is widespread. Properly recognizing and characterizing driver behaviour is the basis for any effective road-safety measure. Effective enforcement of speed limits, especially on rural roads, plays an important role, but speed investigations have so far focused on spot speed, omitting travel speed over longer road sections, which better reflects driver behaviour. Possible solutions for rural roads are limited to administrative speed restrictions, speed camera installations, and police enforcement; because of their limited proven effectiveness, new solutions are still being sought. High expectations are associated with the section speed-control system that has recently been introduced in Poland on a number of national road sections. The aim of this paper is to investigate section speed on chosen regional and district roads located in Podlaskie Voivodeship. The test sections comprised 19 road segments that varied in functional and geometric characteristics. Speed measurements on regional and district roads were performed with a set of two ANPR (Automatic Number Plate Recognition) cameras. The research made it possible to compare driver behaviour in terms of travel speed across road functional classes and to evaluate the influence of selected geometric parameters on average section speed.
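    The measurement behind such two-camera ANPR setups reduces to matching plates between the entry and exit cameras and dividing the section length by the travel time. A hypothetical sketch (the plate strings, data layout, and function are invented, not the study's software):

```python
def section_speeds(entry, exit, length_km):
    """Average section speed per vehicle from ANPR matches.

    entry, exit: dicts mapping plate -> timestamp (s) at the two
    cameras; length_km: section length.  Returns plate -> km/h.
    Plates seen by only one camera are skipped.
    """
    speeds = {}
    for plate, t_in in entry.items():
        t_out = exit.get(plate)
        if t_out is not None and t_out > t_in:
            speeds[plate] = length_km / ((t_out - t_in) / 3600.0)
    return speeds

# One vehicle covers a 2 km section in 90 s -> 80 km/h.
v = section_speeds({"WND1234": 0.0, "BIA777": 10.0},
                   {"WND1234": 90.0, "XYZ999": 50.0}, 2.0)
```

    Unlike a spot-speed radar, this average cannot be defeated by braking at the camera, which is why section control is expected to shape behaviour over the whole segment.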

  10. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high-speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed sensors consist of 400H×256V pixels with 128 memories/pixel and achieve a readout speed of 1 Tpixel/sec, enabling 10 Mfps full-resolution video capture of 128 consecutive frames and 20 Mfps half-resolution capture of 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through dedicated process technologies, photosensitivity improvement and power-consumption reduction were achieved simultaneously, and the improved version has been used since 2015 in a commercial high-speed video camera offering 10 Mfps with ISO 16,000 photosensitivity. Owing to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in UHS light-emission phenomena.
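    The quoted figures are mutually consistent, as a quick check shows (the 12.8 µs record length is derived here, not stated in the record):

```python
# 400x256 pixels read out at 10 Mfps give the stated ~1 Tpixel/s,
# and 128 on-chip memories per pixel hold 128 consecutive
# full-resolution frames, i.e. 12.8 us of video at full speed.
pixels = 400 * 256
throughput = pixels * 10e6      # pixel/s at 10 Mfps -> 1.024e12
frames = 128
record_time = frames / 10e6     # seconds of video at full speed
```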

  11. Quadcopter applications for wildlife monitoring

    NASA Astrophysics Data System (ADS)

    Radiansyah, S.; Kusrini, M. D.; Prasetyo, L. B.

    2017-01-01

    Recently, Unmanned Aerial Vehicles (UAVs) have been used as instruments for wildlife research. Most are fixed-wing types, which need space for a runway; copters are UAVs that can hover at canopy level and do not need a runway. The aims of this research are to examine quadcopter applications for wildlife monitoring, measure the accuracy of the data generated, and determine effective, efficient, and appropriate technical recommendations in accordance with the ethics of wildlife photography. Flight trials with a 12-24 MP camera at altitudes from 50-200 m above ground level (agl) produced aerial photographs with spatial resolutions of 0.85-4.79 cm/pixel. Aerial photo quality depends on the type and settings of the camera, the vibration-damping system, flight altitude, and the timing of the shot. For wildlife monitoring the copter is recommended to take off at least 300 m from the target and fly at 50-100 m agl with a flight speed of 5-7 m/sec in fine weather. Quadcopter presence at a distance of more than 30 m from White-bellied Sea Eagle (Haliaeetus leucogaster) nests and Proboscis Monkeys (Nasalis larvatus) did not cause a negative response. Quadcopter applications should pay attention to the behaviour and characteristics of the wildlife.
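    The quoted cm/pixel resolutions follow from the usual ground-sample-distance relation GSD = altitude × pixel pitch / focal length. A sketch with invented camera parameters (the study's camera specifications are not given in the record):

```python
def ground_sample_distance(altitude_m, pixel_pitch_um, focal_mm):
    """Ground sample distance (cm/pixel) for a nadir-pointing camera."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3) * 100

# A hypothetical small-sensor camera: 1.55 um pixels, 4.5 mm lens.
low = ground_sample_distance(50, 1.55, 4.5)    # ~1.7 cm/pixel at 50 m
high = ground_sample_distance(200, 1.55, 4.5)  # ~6.9 cm/pixel at 200 m
```

    GSD scales linearly with altitude, which is why the 50-200 m trial range produced the roughly fourfold spread in resolution reported above.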

  12. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  13. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble

    PubMed Central

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size—a few microns in diameter—and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera were developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation with much lower costs than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  14. A High-Speed Motion-Picture Study of Normal Combustion, Knock and Preignition in a Spark-Ignition Engines

    NASA Technical Reports Server (NTRS)

    Rothrock, A M; Spencer, R C; Miller, Cearcy D

    1941-01-01

    Combustion in a spark-ignition engine was investigated by means of the NACA high-speed motion-picture camera. This camera operates at a speed of 40,000 photographs a second and therefore makes possible the study of changes that take place in intervals as short as 0.000025 second. When the motion pictures are projected at the normal speed of 16 frames a second, any rate of movement shown is slowed down 2500 times. Photographs are presented of normal combustion, of combustion from preignitions, and of knock both with and without preignition. The photographs of combustion show that knock may be preceded by a period of exothermic reaction in the end zone that persists for a time interval of as much as 0.0006 second. The knock takes place in 0.00005 second or less.

  15. Automatic Exposure Iris Control (AEIC) for data acquisition camera

    NASA Technical Reports Server (NTRS)

    Mcatee, G. E., Jr.; Stoap, L. J.; Solheim, C. D.; Sharpsteen, J. T.

    1975-01-01

    A lens design capable of operating over a total range of f/1.4 to f/11.0 with through the lens light sensing is presented along with a system which compensates for ASA film speeds as well as shutter openings. The space shuttle camera system package is designed so that it can be assembled on the existing 16 mm DAC with a minimum of alteration to the camera.

  16. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Abstract Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made on whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images of a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038

  17. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector) are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing advantages and inconveniences of each method as well as preferred domain of application.

  18. High-speed time-reversed ultrasonically encoded (TRUE) optical focusing inside dynamic scattering media at 793 nm

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Lai, Puxiang; Ma, Cheng; Xu, Xiao; Suzuki, Yuta; Grabar, Alexander A.; Wang, Lihong V.

    2014-03-01

    Time-reversed ultrasonically encoded (TRUE) optical focusing is an emerging technique that focuses light deep into scattering media by phase-conjugating ultrasonically encoded diffuse light. In previous work, the speed of TRUE focusing was limited to no faster than 1 Hz by the response time of the photorefractive phase conjugate mirror, or the data acquisition and streaming speed of the digital camera; photorefractive-crystal-based TRUE focusing was also limited to the visible spectral range. These time-consuming schemes prevent this technique from being applied in vivo, since living biological tissue has a speckle decorrelation time on the order of a millisecond. In this work, using a Te-doped Sn2P2S6 photorefractive crystal at a near-infrared wavelength of 793 nm, we achieved TRUE focusing inside dynamic scattering media having a speckle decorrelation time as short as 7.7 ms. As the achieved speed approaches the tissue decorrelation rate, this work is an important step forward toward in vivo applications of TRUE focusing in deep tissue imaging, photodynamic therapy, and optical manipulation.

  19. Performance analysis of a new positron camera geometry for high speed, fine particle tracking

    NASA Astrophysics Data System (ADS)

    Sovechles, J. M.; Boucher, D.; Pax, R.; Leadbeater, T.; Sasmito, A. P.; Waters, K. E.

    2017-09-01

    A new positron camera arrangement was assembled using 16 ECAT951 modular detector blocks. A closely packed, cross-pattern arrangement was selected to produce a highly sensitive cylindrical region for tracking particles with low activities and high speeds. To determine the capabilities of this system, a comprehensive analysis of the tracking performance was conducted to determine the 3D location error and location frequency as a function of tracer activity and speed. The 3D error was found to range from 0.54 mm for a stationary particle, consistent for all tracer activities, up to 4.33 mm for a tracer with an activity of 3 MBq and a speed of 4 m·s⁻¹. For lower-activity tracers (<10⁻² MBq), the error was more sensitive to increases in speed, rising to 28 mm (at 4 m·s⁻¹), indicating that under these conditions a reliable trajectory is not possible. These results expanded on, but correlated well with, previous literature that only contained location errors for tracer speeds up to 1.5 m·s⁻¹. The camera was also used to track directly activated mineral particles inside a two-inch hydrocyclone and a 142 mm diameter flotation cell. A detailed trajectory, inside the hydrocyclone, of a −212 +106 µm (10⁻¹ MBq) quartz particle displayed the expected spiralling motion towards the apex. This was the first time a mineral particle of this size had been successfully traced within a hydrocyclone; however, more work is required to develop detailed velocity fields.

  20. FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven

    2011-01-01

    High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are 4 distinct algorithms: Camera Link capture, Bilinear rectification, Bilateral subtraction pre-filtering and the Sum of Absolute Difference (SAD) disparity. Each module will be described in brief along with the data flow and control logic for the system. The system has been successfully fielded upon the Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
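    The SAD disparity stage can be illustrated on a single scanline. This is a scalar sketch of what the FPGA evaluates in parallel; the window size, search range, and winner-take-all selection here are generic, not JPL's exact parameters:

```python
def sad_disparity(left, right, window=1, max_disp=4):
    """Winner-take-all SAD disparity on one rectified scanline.

    left, right: lists of pixel intensities.  For each left pixel,
    find the shift d (0..max_disp) minimizing the sum of absolute
    differences over a (2*window+1)-pixel support.
    """
    n = len(left)
    disp = [0] * n
    for i in range(window, n - window):
        best, best_d = None, 0
        for d in range(min(max_disp, i - window) + 1):
            cost = sum(abs(left[i + k] - right[i + k - d])
                       for k in range(-window, window + 1))
            if best is None or cost < best:   # ties keep smaller d
                best, best_d = cost, d
        disp[i] = best_d
    return disp

# Right image content is the left content shifted by 2 pixels,
# so a disparity of 2 is expected in the textured region.
L = [0, 0, 9, 7, 5, 0, 0, 0]
R = [9, 7, 5, 0, 0, 0, 0, 0]
d = sad_disparity(L, R)   # d[3:6] == [2, 2, 2]
```

    The textureless pixels at the ends are ambiguous, which is one motivation for the bilateral-subtraction pre-filtering stage that precedes SAD in the pipeline.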

  1. New Window on the Universe.

    ERIC Educational Resources Information Center

    Reynolds, Ronald F.

    1984-01-01

    Describes the basic components of a space telescope that will be launched during a 1986 space shuttle mission. These components include a wide field/planetary camera, faint object spectroscope, high-resolution spectrograph, high-speed photometer, faint object camera, and fine guidance sensors. Data to be collected from these instruments are…

  2. S3: School Zone Safety System Based on Wireless Sensor Network

    PubMed Central

    Yoo, Seong-eun; Chong, Poh Kit; Kim, Daeyoung

    2009-01-01

    School zones are areas near schools that have lower speed limits and where illegally parked vehicles pose a threat to school children by obstructing them from the view of drivers. However, these laws are regularly flouted. Thus, we propose a novel wireless sensor network application called School zone Safety System (S3) to help regulate the speed limit and to prevent illegal parking in school zones. S3 detects illegally parked vehicles, and warns the driver and records the license plate number. To reduce the traveling speed of vehicles in a school zone, S3 measures the speed of vehicles and displays the speed to the driver via an LED display, and also captures the image of the speeding vehicle with a speed camera. We developed a state machine based vehicle detection algorithm for S3. From extensive experiments in our testbeds and data from a real school zone, it is shown that the system can detect all kinds of vehicles, and has an accuracy of over 95% for speed measurement. We modeled the battery life time of a sensor node and validated the model with a downscaled measurement; we estimate the battery life time to be over 2 years. We have deployed S3 in 15 school zones in 2007, and we have demonstrated the robustness of S3 by operating them for over 1 year. PMID:22454567

  3. To brake or to accelerate? Safety effects of combined speed and red light cameras.

    PubMed

    De Pauw, Ellen; Daniels, Stijn; Brijs, Tom; Hermans, Elke; Wets, Geert

    2014-09-01

    The present study evaluates the traffic safety effect of combined speed and red light cameras at 253 signalized intersections in Flanders, Belgium that were installed between 2002 and 2007. The adopted approach is a before-and-after study with control for the trend. The analyses showed a non-significant increase of 5% in the number of injury crashes. An almost significant decrease of 14% was found for the more severe crashes. The number of rear-end crashes turned out to have increased significantly (+44%), whereas a non-significant decrease (-6%) was found in the number of side crashes. The decrease for the severe crashes was mainly attributable to the effect on side crashes, for which a significant decrease of 24% was found. It is concluded that combined speed and red light cameras have a favorable effect on traffic safety, in particular on severe crashes. However, future research should examine the circumstances of rear-end crashes and how this increase can be managed. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.

  4. A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software

    NASA Astrophysics Data System (ADS)

    Oh, S. H.; Kang, Y. W.; Byun, Y. I.

    2007-12-01

    We present software developed for the multi-purpose CCD camera. The software supports all three CCD types made by Kodak: the KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472). For efficient camera control, the software runs as two independent processes, the CCD control program and the temperature/shutter operation program. It is designed for fully automatic as well as manual operation under Linux and is controlled through Linux user signals. We plan to use this software for an all-sky survey system and also for night-sky monitoring and sky observation. The measured read-out times are about 15 sec, 64 sec, and 134 sec for the KAF-0401E, KAF-1602E, and KAF-3200E respectively, limited by the data transmission speed of the parallel port. Larger-format CCDs require higher transmission speed, so we are considering porting the control software to the USB port for faster data transfer.

  5. Synchro-ballistic recording of detonation phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchfield, R.R.; Asay, B.W.; Bdzil, J.B.

    1997-09-01

    Synchro-ballistic use of rotating-mirror streak cameras allows for detailed recording of high-speed events of known velocity and direction. After an introduction to the synchro-ballistic technique, this paper details two diverse applications of the technique as applied in the field of high-explosives research. In the first series of experiments detonation-front shape is recorded as the arriving detonation shock wave tilts an obliquely mounted mirror, causing reflected light to be deflected from the imaging lens. These tests were conducted for the purpose of calibrating and confirming the asymptotic Detonation Shock Dynamics (DSD) theory of Bdzil and Stewart. The phase velocities of the events range from ten to thirty millimeters per microsecond. Optical magnification is set for optimal use of the film's spatial dimension and the phase velocity is adjusted to provide synchronization at the camera's maximum writing speed. Initial calibration of the technique is undertaken using a cylindrical HE geometry over a range of charge diameters and of sufficient length-to-diameter ratio to insure a stable detonation wave. The final experiment utilizes an arc-shaped explosive charge, resulting in an asymmetric detonation-front record. The second series of experiments consists of photographing a shaped-charge jet having a velocity range of two to nine millimeters per microsecond. To accommodate the range of velocities it is necessary to fire several tests, each synchronized to a different section of the jet. The experimental apparatus consists of a vacuum chamber to preclude atmospheric ablation of the jet tip with shocked-argon back lighting to produce a shadow-graph image.

  6. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  7. Solid-state framing camera with multiple time frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  8. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  9. NASA Marshall Impact Testing Facility Capabilities Applicable to Lunar Dust Work

    NASA Technical Reports Server (NTRS)

    Evans, Steven W.; Finchum, Andy; Hubbs, Whitney; Eskridge, Richard; Martin, Jim

    2008-01-01

    The Impact Testing Facility at Marshall Space Flight Center has several guns that would be of use in studying impact phenomena with respect to lunar dust. These include both ballistic guns, using compressed gas and powder charges, and hypervelocity guns, either light gas guns or an exploding wire gun. In addition, a plasma drag accelerator expected to reach 20 km/s for small particles is under development. Velocity determination and impact event recording are done using ultra-high-speed cameras. Simulation analysis is also available using the SPHC hydrocode.

  10. Applications Of Digital Image Acquisition In Anthropometry

    NASA Astrophysics Data System (ADS)

    Woolford, Barbara; Lewis, James L.

    1981-10-01

    Anthropometric data on reach and mobility have traditionally been collected by time consuming and relatively inaccurate manual methods. Three dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.

  11. Kinematic control of male Allen's Hummingbird wing trill over a range of flight speeds.

    PubMed

    Clark, Christopher J; Mistick, Emily A

    2018-05-18

    Wing trills are pulsed sounds produced by modified wing feathers at one or more specific points in time during a wingbeat. Male Allen's Hummingbirds (Selasphorus sasin) produce a sexually dimorphic 9 kHz wing trill in flight. Here we investigate the kinematic basis for trill production. The wingtip velocity hypothesis posits that trill production is modulated by the airspeed of the wingtip at some point during the wingbeat, whereas the wing rotation hypothesis posits that trill production is instead modulated by wing rotation kinematics. To test these hypotheses, we flew six male Allen's Hummingbirds in an open-jet wind tunnel at flight speeds of 0, 3, 6, 9, 12 and 14 m s⁻¹, and recorded their flight with two 'acoustic cameras' placed below and behind, or below and lateral to, the flying bird. The acoustic cameras are phased arrays of 40 microphones that use beamforming to spatially locate sound sources within a camera image. Trill sound pressure level (SPL) exhibited a U-shaped relationship with flight speed in all three camera positions. SPL was greatest perpendicular to the stroke plane. Acoustic camera videos suggest that the trill is produced during supination. The trill was up to 20 dB louder during maneuvers than during steady-state flight in the wind tunnel, across all airspeeds tested. These data provide partial support for the wing rotation hypothesis. Altered wing rotation kinematics could allow male Allen's Hummingbirds to modulate trill production in social contexts such as courtship displays. © 2018. Published by The Company of Biologists Ltd.

  12. Controlled impact demonstration on-board (interior) photographic system

    NASA Technical Reports Server (NTRS)

    May, C. J.

    1986-01-01

    Langley Research Center (LaRC) was responsible for the design, manufacture, and integration of all hardware required for the photographic system used to film the interior of the controlled impact demonstration (CID) B-720 aircraft during actual crash conditions. Four independent power supplies were constructed to operate the ten high-speed 16 mm cameras and twenty-four floodlights. An up-link command system, furnished by Ames Dryden Flight Research Facility (ADFRF), was necessary to activate the power supplies and start the cameras. These events were accomplished by initiation of relays located on each of the photo power pallets. The photographic system performed beyond expectations. All four power distribution pallets with their 20 year old Minuteman batteries performed flawlessly. All 24 lamps worked. All ten on-board high speed (400 fps) 16 mm cameras containing good resolution film data were recovered.

  13. TEM Video Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since that publication, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also been applied to electron tomography [6], and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7].
To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd and Ag nanoparticles during exposure to high temperatures and other environmental conditions. Figure 1 highlights the results from the Pd nanoparticle experiment. On the left, 10 frames are reconstructed from a single coded frame; the original frames are shown for comparison. On the right, a selection of three frames is shown from reconstructions at compression levels of 10, 20, and 30. The reconstructions, which are not post-processed, are true to the original and degrade in a straightforward manner. The final choice of compression level will obviously depend on both the temporal and spatial resolution required for a specific imaging task, but the results indicate that an increase in speed of better than an order of magnitude should be possible for all experiments. References: [1] P Llull, X Liao, X Yuan et al. Optics Express 21(9), (2013), p. 10526. [2] J Yang, X Yuan, X Liao et al. Image Processing, IEEE Trans 23(11), (2014), p. 4863. [3] X Yuan, J Yang, P Llull et al. In ICIP 2013 (IEEE), p. 14. [4] X Yuan, P Llull, X Liao et al. In CVPR 2014, p. 3318. [5] EJ Candès, J Romberg and T Tao. Information Theory, IEEE Trans 52(2), (2006), p. 489. [6] P Binev, W Dahmen, R DeVore et al. In Modeling Nanoscale Imaging in Electron Microscopy, eds. T Vogt, W Dahmen and P Binev (Springer US), Nanostructure Science and Technology (2012), p. 73. [7] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), p. 41.
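The coded-aperture acquisition model described above — sub-frames masked by per-pixel binary codes and integrated into one camera frame — can be sketched numerically. The inversion below is a deliberately simplified per-pixel ridge regression with a temporal-smoothness prior, a stand-in for the statistical compressive sensing inversion used in the paper (which exploits sparsity to recover motion rather than smoothing it); all sizes and codes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 8, 16, 16        # sub-frames coded into one camera frame

# ground-truth video: a bright square moving over a dim background
video = np.full((T, H, W), 0.2)
for t in range(T):
    video[t, 6:10, 2 + t:6 + t] = 1.0

# random binary per-pixel coded aperture; force at least one open sub-frame
codes = rng.integers(0, 2, size=(T, H, W)).astype(float)
codes[0][codes.sum(axis=0) == 0] = 1.0

# acquisition: masked sub-frames integrate into ONE camera frame
frame = (codes * video).sum(axis=0)

# simplified inversion: per-pixel ridge regression with a temporal
# smoothness prior (a stand-in for the statistical CS inversion)
lam, mu = 1.0, 1e-3
D = np.diff(np.eye(T), axis=0)         # temporal difference operator
R = lam * D.T @ D + mu * np.eye(T)
recon = np.zeros_like(video)
for i in range(H):
    for j in range(W):
        c = codes[:, i, j]
        recon[:, i, j] = np.linalg.solve(R + np.outer(c, c), c * frame[i, j])

rmse = np.sqrt(np.mean((recon - video) ** 2))
print(f"RMSE = {rmse:.3f}")
```

Static pixels are recovered almost exactly, while the smoothness prior blurs the moving square — which is precisely the gap that sparsity-based CS inversion closes.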

  14. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
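The stereo-matching module described above can be illustrated with a toy orthographic model: two perpendicular cameras see (x, z) and (y, z) projections of each seed particle, and candidates are paired by their shared vertical coordinate. A minimal sketch with hypothetical coordinates (a real system uses calibrated perspective cameras rather than this orthographic idealization):

```python
import random
import numpy as np

def stereo_match(view_xz, view_yz, tol=0.3):
    """Pair particle images seen by two perpendicular cameras via their
    shared vertical coordinate z, returning (x, y, z) triples."""
    points, used = [], set()
    for x, z1 in view_xz:
        best, best_d = None, tol
        for j, (y, z2) in enumerate(view_yz):
            if j not in used and abs(z1 - z2) < best_d:
                best, best_d = j, abs(z1 - z2)
        if best is not None:
            used.add(best)
            y, z2 = view_yz[best]
            points.append((x, y, 0.5 * (z1 + z2)))
    return points

rng = np.random.default_rng(2)
truth = np.stack([rng.uniform(0, 10, 6),
                  rng.uniform(0, 10, 6),
                  np.arange(6) * 2.0], axis=1)   # well-separated z values

view_xz = [(x, z) for x, y, z in truth]          # camera looking along +y
view_yz = [(y, z) for x, y, z in truth]          # camera looking along +x
random.seed(4)
random.shuffle(view_yz)                          # detection order differs per view

pts = np.array(stereo_match(view_xz, view_yz))
print(pts.shape)
```

Tracking the matched 3D positions across frames at the 30 Hz NTSC rate then gives velocities as displacement multiplied by the frame rate.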

  15. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
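The optical-flow half of such a hybrid tracker is commonly built on the Lucas-Kanade least-squares formulation: under brightness constancy, image gradients and the frame difference determine the shift of a window. A minimal single-window, pure-numpy sketch (a production tracker would use pyramidal, per-feature variants such as OpenCV's calcOpticalFlowPyrLK):

```python
import numpy as np

def lucas_kanade(f0, f1, win):
    """Single-window Lucas-Kanade: least-squares estimate of the (dx, dy)
    shift that best explains frame f1 from frame f0's gradients."""
    y0, y1, x0, x1 = win
    Ix = np.gradient(f0, axis=1)[y0:y1, x0:x1].ravel()
    Iy = np.gradient(f0, axis=0)[y0:y1, x0:x1].ravel()
    It = (f1 - f0)[y0:y1, x0:x1].ravel()
    A = np.stack([Ix, Iy], axis=1)
    sol, *_ = np.linalg.lstsq(A, -It, rcond=None)   # solve A [dx, dy] = -It
    return sol

# synthetic frames: a Gaussian blob shifted by a known sub-pixel amount
yy, xx = np.mgrid[0:32, 0:32].astype(float)

def blob(cx, cy, sigma=4.0):
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

f0, f1 = blob(16.0, 16.0), blob(16.6, 16.3)      # true shift (0.6, 0.3)
dx, dy = lucas_kanade(f0, f1, (8, 24, 8, 24))
print(f"estimated shift: ({dx:.2f}, {dy:.2f})")
```

The first-order model is accurate for sub-pixel shifts, which is why real trackers iterate it inside an image pyramid for larger motions.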

  16. Thermo-Mechanical Characterization of Silicon Carbide-Silicon Carbide Composites at Elevated Temperatures Using a Unique Combustion Facility

    DTIC Science & Technology

    2009-09-10

Measured quantities and calibration tools (from the test matrix): surface temperature ~1250°C (furnace, R-type thermocouple and IR); gas temperature <1800°C (R-type thermocouple); gas velocity ~Mach 0.5 (X-StreamTM XS-4 high-speed camera); equivalence ratio ~0.9 (HVOFTM flow controller); gas composition H2O, O2, CO2, CO, NOx (Testo XL 350 gas analyzer); mechanical fatigue loading. ... unavailability, however, gas velocity was measured using the X-StreamTM XS-4 High Speed Camera. The range of our interest was the velocity in the upstream of a ...

  17. Experimental and numerical study of plastic shear instability under high-speed loading conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokovikov, Mikhail, E-mail: sokovikov@icmm.ru, E-mail: naimark@icmm.ru; Chudinov, Vasiliy, E-mail: sokovikov@icmm.ru, E-mail: naimark@icmm.ru; Bilalov, Dmitry, E-mail: sokovikov@icmm.ru, E-mail: naimark@icmm.ru

    2014-11-14

The behavior of specimens dynamically loaded during split Hopkinson (Kolsky) bar tests in a regime close to simple shear conditions was studied. The lateral surface of the specimens was investigated in real time with the aid of a high-speed infrared camera (CEDIP Silver 450M). The temperature field distributions obtained at different times made it possible to trace the evolution of plastic strain localization. The process of target perforation involving plug formation and ejection was examined using a high-speed infrared camera and a VISAR velocity measurement system. The microstructure of the tested specimens was analyzed using an optical interferometer-profilometer and a scanning electron microscope. The development of plastic shear instability regions has been simulated numerically.

  18. First steps towards dual-modality 3D photoacoustic and speed of sound imaging with optical ultrasound detection

    NASA Astrophysics Data System (ADS)

    Nuster, Robert; Wurzinger, Gerhild; Paltauf, Guenther

    2017-03-01

CCD camera-based optical ultrasound detection is a promising alternative approach for high-resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) in the image reconstruction algorithm. Hence, the present work shows the idea behind, and a first implementation of, adding speed of sound imaging to a previously developed camera-based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  19. SUSI 62 A Robust and Safe Parachute Uav with Long Flight Time and Good Payload

    NASA Astrophysics Data System (ADS)

    Thamm, H. P.

    2011-09-01

In many research areas in the geosciences (erosion, land use, land cover change, etc.) and applications (e.g. forest management, mining, land management) there is a demand for remote sensing images of very high spatial and temporal resolution. Due to the high costs of classic aerial photo campaigns, the use of a UAV is a promising option for obtaining the desired remotely sensed information at the time it is needed. However, the UAV must be easy to operate, safe, robust and should have a high payload and long flight time. For that purpose, the parachute UAV SUSI 62 was developed. It consists of a steel frame with a powerful 62 cm³ two-stroke engine and a parachute wing. The frame can be easily disassembled for transportation or to replace parts. On the frame there is a gimbal-mounted sensor carrier where different sensors, standard SLR cameras and/or multi-spectral and thermal sensors can be mounted. Due to the design of the parachute, the SUSI 62 is very easy to control. Two different parachute sizes are available for different wind speed conditions. The SUSI 62 has a payload of up to 8 kg, providing options to use different sensors at the same time or to extend flight duration. The SUSI 62 needs a runway of between 10 m and 50 m, depending on the wind conditions. The maximum flight speed is approximately 50 km/h. It can be operated in wind speeds of up to 6 m/s. The design of the system utilising a parachute makes it comparatively safe, as a failure of the electronics or the remote control only results in the UAV coming to the ground at a slow speed. The video signal from the camera, the GPS coordinates and other flight parameters are transmitted to the ground station in real time. An autopilot is available, which guarantees that the area of investigation is covered at the desired resolution and overlap.
The robustly designed SUSI 62 has been used successfully in Europe, Africa and Australia for scientific projects and also for agricultural, forestry and industrial applications.

  20. High-speed railway real-time localization auxiliary method based on deep neural network

    NASA Astrophysics Data System (ADS)

    Chen, Dongjie; Zhang, Wensheng; Yang, Yang

    2017-11-01

High-speed railway intelligent monitoring and management systems combine schedule integration, geographic information, location services, and data mining technology to integrate time and space data. Assisted localization is a significant submodule of the intelligent monitoring system. In practical applications, the general approach is to capture image sequences of the components using a high-definition camera and to apply digital image processing, target detection, tracking and even behavior analysis methods. In this paper, we present an end-to-end character recognition method for high-speed railway pillar plate numbers, based on a deep CNN called YOLO-toc. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art performance in real-time detection, with nearly 50 fps achieved on a GPU (GTX960). Finally, we realize a real-time yet high-accuracy pillar plate number recognition system and integrate natural-scene OCR into a dedicated classification YOLO-toc model.

  1. TH-CD-201-10: Highly Efficient Synchronized High-Speed Scintillation Camera System for Measuring Proton Range, SOBP and Dose Distributions in a 2D-Plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goddu, S; Sun, B; Grantham, K

    2016-06-15

Purpose: Proton therapy (PT) delivery is complex and extremely dynamic; therefore, quality assurance testing is vital but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton-beam-pulses (PBPs) at ∼504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure time. Light emissions within a 30×30×5 cm³ plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing dose deposition in a 2D plane, with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response as well as the signal-to-noise ratio (SNR) was characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ∼5000 images to extract different beam parameters. Quenching correction factors were established by comparing scintillation Bragg peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg peaks were integrated to ascertain the proton-beam range (PBR), the width of the Spread-Out Bragg Peak (MOD), and the distal ...

  2. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    NASA Astrophysics Data System (ADS)

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-06-01

High speed imaging has applications in a wide range of industrial and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound-assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system "UPMC Cam," to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubbles under ultrasound are shown to demonstrate the potential application of the system.

  3. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    PubMed Central

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-01-01

High speed imaging has applications in a wide range of industrial and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound-assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system "UPMC Cam," to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubbles under ultrasound are shown to demonstrate the potential application of the system. PMID:23822346

  4. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

Ultra-high speed (UHS) cameras allow us to acquire images typically up to about 1 million frames s⁻¹ at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain movable devices such as occur, for instance, in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the operating conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique here. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.

  5. The effect of flight altitude to data quality of fixed-wing UAV imagery: case study in Murcia, Spain

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Keesstra, Saskia; Cammeraat, Erik

    2014-05-01

Unmanned Aerial Systems (UAS) are becoming popular tools in the geosciences due to improving technology and processing techniques. They can potentially fill the gap between spaceborne or manned-aircraft remote sensing and terrestrial remote sensing, both in terms of spatial and temporal resolution. In this study we tested a fixed-wing UAS for the application of digital landscape analysis. The focus was to analyze the effect of flight altitude on the accuracy and detail of the produced digital elevation models, derived terrain properties, and orthophotos. The aircraft was equipped with a Panasonic GX1 16 MP pocket camera with a 20 mm lens to capture normal JPEG RGB images. Images were processed using Agisoft Photoscan Pro, which includes structure-from-motion and multiview stereopsis algorithms. The test area consisted of small abandoned agricultural fields in semi-arid Murcia in southeastern Spain. The area was severely damaged after a destructive rainfall event, including damaged check dams, rills, deep gully incisions and piping. Results suggest that careful decisions on flight altitude are essential to find a balance between area coverage, ground sampling distance, UAS ground speed, camera processing speed and the accurate registration of specific soil erosion features of interest.
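The trade-off between flight altitude and ground sampling distance (GSD) mentioned above follows from simple pinhole geometry: GSD is the pixel pitch scaled by the altitude-to-focal-length ratio. A small sketch (the 17.3 mm sensor width and 4592-pixel image width are nominal Micro Four Thirds values assumed for a 16 MP camera, not figures reported in the study):

```python
def gsd_cm(altitude_m, focal_mm, sensor_w_mm, image_w_px):
    """Ground sampling distance (cm/pixel) of a nadir-pointing camera:
    pixel pitch scaled by the altitude-to-focal-length ratio."""
    pixel_pitch_mm = sensor_w_mm / image_w_px
    return pixel_pitch_mm * altitude_m / focal_mm * 100.0

# nominal values: 17.3 mm wide Micro Four Thirds sensor, 4592 px, 20 mm lens
for alt in (50, 100, 200):
    print(f"{alt:>4} m AGL -> {gsd_cm(alt, 20.0, 17.3, 4592):.1f} cm/px")
```

Doubling the altitude doubles the GSD and quadruples the area covered per frame, which is exactly the coverage-versus-detail balance the study examines.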

  6. The Use Of High Speed Photography In Reactor Safety Studies At The Atomic Energy Establishment, Winfrith

    NASA Astrophysics Data System (ADS)

    Maddison, R. J.

    1985-02-01

The investigation of certain areas of nuclear reactor safety involves the study of high speed phenomena with timescales ranging from microseconds to a few hundreds of milliseconds. Examples which have been extensively studied at Winfrith are, firstly, the thermal interaction of molten fuel and reactor coolant, which can generate high pressures on the 100 msec timescale and which involves phenomena such as vapour film collapse taking place on the microsecond timescale. Secondly, there is the response of reactor structures to such pressures, and finally there is the response of structural materials such as metals and concrete to the impulsive loading arising from the impact of heavy, high velocity missiles. A wide range of experimental techniques is used in these studies, many of which have been developed specially for this type of work, which ranges from small laboratory-scale to large field-scale experiments. There are two important features which characterise many of these experiments: (i) a long period of meticulous preparation of very heavily instrumented, short-duration experiments; and (ii) the destructive nature of the experiments. Various forms of high-speed photography are included in the inventory of experimental techniques. These include the use of single- and double-exposure, short-duration spark photography; the use of an image converter camera (IMACON 790); and a number of rotating-prism cine cameras. High-speed photography is used both in a primary experimental role in the studies and in a supportive role for other instrumentation. Because of the sometimes violent nature of these experiments, cameras are often heavily protected and operated remotely; lighting systems are sometimes destroyed. This has led to the development of unconventional techniques for camera operation and subject lighting. This paper will describe some of the experiments and the way in which high-speed photography has been applied as an essential experimental tool.
It will be illustrated with cine film taken during the experiments.

  7. Visible light communication based vehicle positioning using LED street light and rolling shutter CMOS sensors

    NASA Astrophysics Data System (ADS)

    Do, Trong Hop; Yoo, Myungsik

    2018-01-01

    This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
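The way a rolling shutter turns a blinking LED into readable data can be sketched simply: rows are exposed sequentially, so an on-off keyed LED appears as horizontal stripes, and each band of rows encodes one bit of the identification code. A minimal sketch with hypothetical ID bits and timing (a real receiver must also handle synchronization, blooming, and the motion artifacts the paper compensates for):

```python
import numpy as np

def decode_column(column, rows_per_bit, thresh=0.5):
    """Recover an on-off keyed LED ID from the stripe pattern a rolling
    shutter produces: each band of rows integrates one bit period."""
    n_bits = len(column) // rows_per_bit
    bits = []
    for i in range(n_bits):
        band = column[i * rows_per_bit:(i + 1) * rows_per_bit]
        bits.append(1 if band.mean() > thresh else 0)
    return bits

rng = np.random.default_rng(3)
led_id = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical street-light ID code
rows_per_bit = 6                     # image rows read out during one bit period

# simulated image column under the LED: bright/dark bands plus sensor noise
column = np.repeat(np.array(led_id, dtype=float), rows_per_bit)
column += rng.normal(0.0, 0.1, column.size)

decoded = decode_column(column, rows_per_bit)
print(decoded)
```

Once the ID is decoded, the LED's image coordinates can be associated with its known world coordinates, which is the input to the two-camera positioning step.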

  8. Advanced illumination control algorithm for medical endoscopy applications

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.

    2015-05-01

CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and its evaluation board, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination with adjustable illumination power. The LED illumination power has to be dynamically adjusted while navigating the endoscope over illumination conditions that change by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on the pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment over changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in endoscopic surgical robotics or micro-invasive surgery.
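The ROI-based control loop described above can be illustrated with a toy proportional controller: measure the ROI level, scale the LED drive toward a target, and clamp to the driver range. A minimal sketch (the gain, target, and scene model are illustrative assumptions; the actual controller is implemented in VHDL and keys on pixel saturation counts rather than this simple mean):

```python
def adjust_led(power, roi_mean, target=0.5, gain=0.8, p_min=0.01, p_max=1.0):
    """One step of a proportional LED-drive controller: push the ROI's
    mean pixel level toward the target, clamped to the driver range."""
    new = power * (1.0 + gain * (target - roi_mean))
    return min(p_max, max(p_min, new))

def roi_level(power, reflectance):
    """Toy scene model: ROI level scales with LED power and clips at
    full scale 1.0 (saturation)."""
    return min(1.0, power * reflectance)

power = 1.0
for step in range(30):
    reflectance = 5.0 if step < 15 else 1.2   # abrupt drop in scene brightness
    power = adjust_led(power, roi_level(power, reflectance))
print(f"final power = {power:.3f}, ROI level = {roi_level(power, 1.2):.3f}")
```

The multiplicative update recovers from full saturation within a handful of frames and re-converges after the scene change, mirroring the sub-second correction behaviour reported for the VHDL core.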

  9. Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera

    NASA Astrophysics Data System (ADS)

    Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.

    2017-12-01

From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, results from the MASC have so far usually been presented as monthly or seasonal statistics, with particle sizes given as histograms; no previous study has used the MASC for a single-storm analysis, and none has used it to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of the MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the MASC-measured single-camera PSDs is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over three cameras) with a collocated 2D Video Disdrometer, and observe good agreement between the two sets of results.
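A disdrometer-style PSD estimate of the kind proposed here divides each particle count by the volume it sampled (aperture area × fall speed × observation time) and by the size-bin width, yielding a concentration N(D) in m⁻³ mm⁻¹. A minimal sketch with made-up numbers (the actual MASC algorithm works from multi-view images and calibrated sampling volumes):

```python
import numpy as np

def psd(diam_mm, fall_ms, area_m2, dt_s, bins_mm):
    """Concentration N(D) in m^-3 mm^-1 from per-particle diameter and
    fall speed: each particle samples a volume area * speed * dt and is
    normalised by its size-bin width."""
    n = np.zeros(len(bins_mm) - 1)
    widths = np.diff(bins_mm)
    idx = np.digitize(diam_mm, bins_mm) - 1
    for i, v in zip(idx, fall_ms):
        if 0 <= i < len(n):
            n[i] += 1.0 / (area_m2 * dt_s * v * widths[i])
    return n

# two snowflakes through a 100 cm^2 aperture over one minute (made-up numbers)
n_d = psd(diam_mm=np.array([2.0, 2.4]),
          fall_ms=np.array([1.0, 2.0]),
          area_m2=0.01, dt_s=60.0, bins_mm=np.array([1.0, 3.0, 5.0]))
print(n_d)
```

Note that slower particles contribute more concentration per count, because they sample a smaller volume in the same observation window.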

  10. Diaphragmless shock wave generators for industrial applications of shock waves

    NASA Astrophysics Data System (ADS)

    Hariharan, M. S.; Janardhanraj, S.; Saravanan, S.; Jagadeesh, G.

    2011-06-01

    The prime focus of this study is to design a 50 mm internal diameter diaphragmless shock tube that can be used in an industrial facility for repeated loading of shock waves. The instantaneous rise in pressure and temperature of a medium can be used in a variety of industrial applications. We designed, fabricated and tested three different shock wave generators of which one system employs a highly elastic rubber membrane and the other systems use a fast acting pneumatic valve instead of conventional metal diaphragms. The valve opening speed is obtained with the help of a high speed camera. For shock generation systems with a pneumatic cylinder, it ranges from 0.325 to 1.15 m/s while it is around 8.3 m/s for the rubber membrane. Experiments are conducted using the three diaphragmless systems and the results obtained are analyzed carefully to obtain a relation between the opening speed of the valve and the amount of gas that is actually utilized in the generation of the shock wave for each system. The rubber membrane is not suitable for industrial applications because it needs to be replaced regularly and cannot withstand high driver pressures. The maximum shock Mach number obtained using the new diaphragmless system that uses the pneumatic valve is 2.125 ± 0.2%. This system shows much promise for automation in an industrial environment.

  11. Fast Fiber-Coupled Imaging Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas

HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full-scale 1024-pixel 100 Megaframes/s fiber-coupled camera with 12 or 14 bits and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber-optically-coupled imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100-pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority than increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera's performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first-generation prototype system. We experimentally observed backlit high-speed fan blades in initial camera testing and then followed that with full movies and streak images of free-flowing high-speed plasma jets (at 30-50 km/s).
Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.« less

  12. Investigation of the spreading of diesel injection jets using a new high-speed 3D drum camera

    NASA Astrophysics Data System (ADS)

    Eisfeld, Fritz

    1997-05-01

    To improve combustion in a diesel engine, it is important that the combustion chamber be filled uniformly with fuel and fuel vapor. The spatial spreading of the injection jet can be investigated with optical methods. A 3D drum camera was therefore developed to record this spatial event. The camera and the first results of investigations of different injection nozzles are described.

  13. The effectiveness of detection of splashed particles using a system of three integrated high-speed cameras

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Beczek, Michał; Mazur, Rafał; Sochan, Agata; Bieganowski, Andrzej

    2017-04-01

    The phenomenon of splash, which is one of the factors causing erosion of the soil surface, is the subject of research by various scientific teams. One of the most efficient methods for observing and analyzing this phenomenon is the use of high-speed cameras that record particles at 2000 frames per second or higher. Analysis of the splash phenomenon with high-speed cameras and specialized software can reveal, among other things, the number of splashed particles, their speeds, trajectories, and the distances over which they were transferred. The paper presents an attempt to evaluate the efficiency of detection of splashed particles with a set of 3 cameras (Vision Research MIRO 310) and the Dantec Dynamics Studio software, using a 3D module (Volumetric PTV). To assess the effectiveness of estimating the number of particles, the experiment was performed on glass beads with a diameter of 0.5 mm (corresponding to the sand fraction). Water droplets with a diameter of 4.2 mm fell on a sample from a height of 1.5 m. Two types of splashed particles were observed: particles with a low range (up to 18 mm) splashed at larger angles, and particles with a high range (up to 118 mm) splashed at smaller angles. The detection efficiency for the number of splashed particles estimated by the software was 45-65% for particles with a large range. The effectiveness of particle detection by the software was calculated by comparison with the number of beads that fell on the adhesive surface around the sample. This work was partly financed by the National Science Centre, Poland; project no. 2014/14/E/ST10/00851.

  14. Visualization of hump formation in high-speed gas metal arc welding

    NASA Astrophysics Data System (ADS)

    Wu, C. S.; Zhong, L. M.; Gao, J. Q.

    2009-11-01

    The hump bead is a typical weld defect observed in high-speed welding. Its occurrence limits the improvement of welding productivity. Visualization of hump formation during high-speed gas metal arc welding (GMAW) is helpful in the better understanding of the humping phenomena so that effective measures can be taken to suppress or decrease the tendency of hump formation and achieve higher productivity welding. In this study, an experimental system was developed to implement vision-based observation of the weld pool behavior during high-speed GMAW. Considering the weld pool characteristics in high-speed welding, a narrow band-pass and neutral density filter was equipped for the CCD camera, the suitable exposure time was selected and side view orientation of the CCD camera was employed. The events that took place at the rear portion of the weld pools were imaged during the welding processes with and without hump bead formation, respectively. It was found that the variation of the weld pool surface height and the solid-liquid interface at the pool trailing with time shows some useful information to judge whether the humping phenomenon occurs or not.

  15. High-speed polarized light microscopy for in situ, dynamic measurement of birefringence properties

    NASA Astrophysics Data System (ADS)

    Wu, Xianyu; Pankow, Mark; Shadow Huang, Hsiao-Ying; Peters, Kara

    2018-01-01

    A high-speed, quantitative polarized light microscopy (QPLM) instrument has been developed to monitor the optical slow axis spatial realignment during controlled medium to high strain rate experiments at acquisition rates up to 10 kHz. This high-speed QPLM instrument is implemented within a modified drop tower and demonstrated using polycarbonate specimens. By utilizing a rotating quarter wave plate and a high-speed camera, the minimum acquisition time to generate an alignment map of a birefringent specimen is 6.1 ms. A sequential analysis method allows the QPLM instrument to generate QPLM data at the high-speed camera imaging frequency 10 kHz. The obtained QPLM data is processed using a vector correlation technique to detect anomalous optical axis realignment and retardation changes throughout the loading event. The detected anomalous optical axis realignment is shown to be associated with crack initiation, propagation, and specimen failure in a dynamically loaded polycarbonate specimen. The work provides a foundation for detecting damage in biological tissues through local collagen fiber realignment and fracture during dynamic loading.

  16. SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output†

    PubMed Central

    Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.

    2013-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136

  17. Improving temporal resolution and speed sensitivity of laser speckle contrast analysis imaging based on noise reduction with an anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Song, Lipei; Wang, Xueyan; Zhang, Ru; Zhang, Kuanshou; Zhou, Zhen; Elson, Daniel S.

    2018-07-01

    The fluctuation of contrast caused by statistical noise degrades the temporal/spatial resolution of laser speckle contrast imaging (LSCI) and limits the maximum imaging speed. In this study, we investigated the application of the anisotropic diffusion filter (ADF) to temporal LSCI and found that the edge magnitude parameter of the ADF can be determined by the mean of the contrast image. Because the edge magnitude parameter is usually denoted as K, we term this the K-constant ADF (KC-ADF). In small-animal experiments, the enhanced signal-to-noise ratio provided by the KC-ADF improved the temporal sensitivity during imaging; the cardiac cycle of a rat, as fast as 390 bpm, could be imaged with an industrial camera.
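The approach rests on the standard per-pixel temporal speckle contrast, K = sigma/mu over a stack of frames, with the mean of the contrast image reused as the ADF edge-magnitude parameter. A minimal sketch follows; the simulated Poisson frames and the direct use of the mean contrast as K are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def temporal_contrast(frames):
    """Per-pixel temporal speckle contrast K = sigma / mu over a stack
    of frames shaped (n_frames, height, width)."""
    mu = frames.mean(axis=0)
    sigma = frames.std(axis=0)
    return sigma / np.maximum(mu, 1e-12)  # guard against dark pixels

# Simulated photon-noise-limited frames (Poisson counts, mean 100):
rng = np.random.default_rng(0)
frames = rng.poisson(lam=100.0, size=(50, 8, 8)).astype(float)

K_img = temporal_contrast(frames)
edge_param = float(K_img.mean())  # mean contrast -> ADF edge magnitude K
print(round(edge_param, 3))       # close to 1/sqrt(100) = 0.1
```

For pure Poisson noise the contrast tends to 1/sqrt(mean counts), which is why the mean contrast tracks the noise level the ADF should preserve edges against.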

  18. The diagnosing of plasmas using spectroscopy and imaging on Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Baldwin, K. A.; Biewer, T. M.; Crouse Powers, J.; Hardin, R.; Johnson, S.; McCleese, A.; Shaw, G. C.; Showers, M.; Skeen, C.

    2015-11-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). The machine will be used to study plasma-material interaction (PMI) physics relevant to future fusion reactors. We tested and learned to use spectroscopy and imaging tools: a spectrometer, a high-speed camera, an infrared camera, and thermocouples. The spectrometer measures the color of the light from the plasma and its intensity. We used the high-speed camera to see how the magnetic field acts on the plasma and how the plasma is heated to the fourth state of matter. The thermocouples measure the temperature of the objects they are placed against, in this case the end plates of the machine. We also used the infrared camera to see the heat pattern of the plasma on the end plates. Data from these instruments will be shown. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725 and the Oak Ridge Associated Universities ARC program.

  19. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
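The NVC auto-focus relies in part on a focus score computed on the eye image. The paper does not spell out the scoring formula, so the variance-of-Laplacian measure used below is a common stand-in, not the authors' method: sharp images have strong second derivatives, so the Laplacian response has high variance.

```python
import numpy as np

def focus_score(img):
    """Sharpness measure: variance of a discrete Laplacian response.
    (The paper's exact focus score is not specified; this common
    variance-of-Laplacian measure is an illustrative assumption.)"""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A sharp step edge scores higher than a smooth ramp of the same range:
sharp = np.zeros((32, 32)); sharp[:, 16:] = 1.0
blurred = np.cumsum(sharp, axis=1); blurred /= blurred.max()
print(focus_score(sharp) > focus_score(blurred))  # True
```

In an auto-focus loop, the lens position maximizing such a score is taken as in focus; the paper combines this with a facial-width cue from the WVC image.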

  20. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  1. Active hyperspectral imaging using a quantum cascade laser (QCL) array and digital-pixel focal plane array (DFPA) camera.

    PubMed

    Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico

    2014-06-16

    We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.
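Spectral reflectance is recovered per laser wavelength by subtracting the passive thermal background and normalizing against a reference surface (the diffuse gold mentioned above). The sketch below assumes this simple normalization; the DFPA's actual on-chip arithmetic and the variable names are illustrative.

```python
import numpy as np

def reflectance_spectrum(sample, reference, background):
    """Per-wavelength reflectance estimate: background-subtracted sample
    counts normalized by background-subtracted counts from a reference
    surface. Arrays hold mean detector counts per laser wavelength;
    the exact on-chip normalization of the DFPA is not public, so this
    formula is an illustrative assumption."""
    return (sample - background) / np.maximum(reference - background, 1e-12)

# Hypothetical mean counts at the 15 QCL wavelengths:
background = np.full(15, 100.0)                    # passive thermal floor
reference = background + 1000.0                    # bright gold return
sample = background + 1000.0 * np.linspace(0.2, 0.9, 15)

R = reflectance_spectrum(sample, reference, background)
print(R[0], R[-1])  # 0.2 ... 0.9
```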

  2. In-Situ Observation of Horizontal Centrifugal Casting using a High-Speed Camera

    NASA Astrophysics Data System (ADS)

    Esaka, Hisao; Kawai, Kohsuke; Kaneko, Hiroshi; Shinozuka, Kei

    2012-07-01

    In order to understand the solidification process of horizontal centrifugal casting, experimental equipment for in-situ observation using a transparent organic substance has been constructed. A succinonitrile-1 mass% water alloy was filled into a round glass cell, and the glass cell was completely sealed. To observe the movement of equiaxed grains more clearly and to understand the effect of the movement of the free surface, a high-speed camera was installed on the equipment. The most advantageous feature of this equipment is that the camera rotates with the mold, so that one can observe the same location of the glass cell throughout. Because the recording rate could be increased up to 250 frames per second, the quality of the movie was dramatically improved, which made it easier and more precise to track a given equiaxed grain. The amplitude of oscillation of an equiaxed grain ( = At) decreased as solidification proceeded.

  3. A design of a high speed dual spectrometer by single line scan camera

    NASA Astrophysics Data System (ADS)

    Palawong, Kunakorn; Meemon, Panomsak

    2018-03-01

    A spectrometer that can capture two orthogonal polarization components of a light beam is in demand for polarization-sensitive imaging systems. Here, we describe the design and implementation of a high-speed spectrometer for simultaneous capture of two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflective blazed grating. The two beam paths were aligned to be symmetrically incident on the blaze side and the reverse blaze side of the reflection grating, respectively. The two diffracted beams were passed through the same focusing lens and focused on the single line-scan sensor of a CMOS camera. The two spectra of orthogonal polarization were imaged at 1000 pixels per spectrum. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for the optical alignment of the spectrometer is presented and discussed. The two orthogonal polarization spectra can be simultaneously captured at a speed of 70,000 spectra per second. The high-speed dual spectrometer can simultaneously detect two orthogonal polarizations, an important component for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.

  4. North American AJ-2 Savage used for Microgravity Flights

    NASA Image and Video Library

    1960-09-21

    The National Aeronautics and Space Administration (NASA) Lewis Research Center acquired two North American AJ-2 Savages in the early 1960s to fly microgravity-inducing parabolic flight patterns. Lewis was in the midst of an extensive study to determine the behavior of liquid hydrogen in microgravity so that proper fuel systems could be designed. Jack Enders was the primary pilot for the program, and future astronaut Fred Haise worked with the cameras and instrumentation in the rear of the aircraft. North American developed the AJ-2 for the Navy in the mid-1940s as a carrier-based bomber. By the 1960s the Savage was no longer considered a modern aircraft, but its performance capabilities made it appealing to the Lewis researchers. The AJ-2's power, speed, response time, structural robustness, and large interior space suited the microgravity flights, and the aircraft could accommodate a pilot, flight engineer, and two observers. Lewis engineers installed a 100-litre liquid hydrogen dewar, a cryogenic cooling system, and cameras in the bomb bay. The AJ-2 was flown on a level course over western Lake Erie and then went into a 20-degree dip to reach 375 knots. At 13,000 feet the pilot pulled the nose up by 40 degrees; the speed decreased and both lateral and longitudinal accelerations were nullified. Upon reaching 17,000 feet, the pilot turned the aircraft into a 45-degree dive, and as the speed reached 390 knots he pulled the aircraft up again. Each maneuver produced approximately 27 seconds of microgravity.

  5. Who cares about a camera if you are not speeding?

    DOT National Transportation Integrated Search

    1999-06-19

    Speeding is a hazard on both busy highways and city streets, but regular police enforcement does not work very well since dense and fast moving traffic makes it both difficult and dangerous for officers to make traditional traffic stops. The paper di...

  6. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    NASA Astrophysics Data System (ADS)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

    Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which can inevitably decrease both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new as compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data of the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
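The Z-distance estimate in the second point follows from the pinhole model: the real human iris diameter is roughly constant across people, so the iris size in the WFOV image fixes the distance. A minimal sketch, where the 11.7 mm anthropometric average and the example focal length are illustrative values (the paper's exact constants are not given here):

```python
def estimate_z_distance(iris_px, focal_px, iris_mm=11.7):
    """Pinhole-model estimate of the user-to-camera Z distance from the
    iris diameter detected in the WFOV image: Z = f * D_real / d_image.
    iris_mm = 11.7 is a typical anthropometric average, and focal_px is
    the WFOV focal length expressed in pixels; both are assumptions."""
    return focal_px * iris_mm / iris_px

# An iris imaged 20 px wide by a WFOV lens with f = 1200 px:
z_mm = estimate_z_distance(iris_px=20.0, focal_px=1200.0)
print(round(z_mm))  # about 702 mm, i.e. roughly 0.7 m
```

The estimated Z then selects which geometric-transformation matrix maps the WFOV iris region into the NFOV search region.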

  7. Underwater Test Diagnostics Using Explosively Excited Argon And Laser Light Photography Techniques

    NASA Astrophysics Data System (ADS)

    Wisotski, John

    1990-01-01

    This paper presents results of photographic methods employed in underwater tests used to study high-velocity fragment deceleration, deformation and fracture during the perforation of water-backed plates. These methods employed overlapping ultra-high and very high speed camera recordings using explosively excited argon and ruby-laser light sources that gave ample light to penetrate across a 2.3-meter (7.54-foot) diameter tank of water with enough intensity to photograph displacement-time histories of steel cubes with impact speeds of 1000 to 1500 m/s (3280 to 4920 ft/s) at camera framing rates of 250,000 and 17,000 fr/s, respectively.

  8. The threshold of vapor channel formation in water induced by pulsed CO2 laser

    NASA Astrophysics Data System (ADS)

    Guo, Wenqing; Zhang, Xianzeng; Zhan, Zhenlin; Xie, Shusen

    2012-12-01

    Water plays an important role in laser ablation. There are two main interpretations of the laser-water interaction: the hydrokinetic effect and the vapor phenomenon. Both explanations are reasonable in some ways, but neither explains the mechanism of laser-water interaction completely. In this study, the dynamic process of vapor channel formation induced by a pulsed CO2 laser in a static water layer was monitored by a high-speed camera. The wavelength of the pulsed CO2 laser is 10.64 μm, and the pulse repetition rate is 60 Hz. The laser power ranged from 1 to 7 W in steps of 0.5 W. The frame rate of the high-speed camera used in the experiment was 80025 fps. Based on the high-speed camera pictures, the dynamic process of vapor channel formation was examined, and the threshold of vapor channel formation, the pulsation period, the volume, and the maximum depth and corresponding width of the vapor channel were determined. The results showed that the threshold of vapor channel formation was about 2.5 W. Moreover, the pulsation period and the maximum depth and corresponding width of the vapor channel increased with increasing laser power.

  9. High speed Infrared imaging method for observation of the fast varying temperature phenomena

    NASA Astrophysics Data System (ADS)

    Moghadam, Reza; Alavi, Kambiz; Yuan, Baohong

    With recent improvements in high-end commercial R&D camera technologies, many challenges in high-speed IR imaging have been overcome. The core benefits of this technology are the ability to capture fast-varying phenomena without image blur, to acquire enough data to properly characterize dynamic energy, and to increase the dynamic range without compromising the number of frames per second. This study presents a noninvasive method for determining the intensity field of a high-intensity focused ultrasound (HIFU) beam using infrared imaging. A high-speed infrared camera was placed above the tissue-mimicking material that was heated by HIFU, with no other sensors present in the HIFU axial beam. A MATLAB simulation code was used to perform a finite-element solution of the pressure-wave propagation and heat equations within the phantom, and the temperature rise in the phantom was computed. Three different power levels of HIFU transducers were tested, and the predicted temperature increases were within about 25% of the IR measurements. The fundamental theory and methods developed in this research can be used to detect fast-varying temperature phenomena in combination with infrared filters.

  10. The Sydney University PAPA camera

    NASA Astrophysics Data System (ADS)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.

  11. Speed cameras in Sweden and Victoria, Australia--a case study.

    PubMed

    Belin, Matts-Ake; Tillgren, Per; Vedung, Evert; Cameron, Max; Tingvall, Claes

    2010-11-01

    In this article, the ideas behind two different speed camera systems, in Victoria, Australia and in Sweden, are explored and compared. The study shows that even though both systems technically have the same aim--to reduce speeding--the ideas of how that should be achieved differ substantially. The approach adopted in Victoria is based on the concept that speeding is a deliberate offence in which a rational individual wants to drive as fast as possible and is prepared to calculate the costs and benefits of his behaviour. Therefore, the underlying aim of the intervention is to increase the perceived cost of committing an offence whilst at the same time decreasing the perceived benefits, so that the former outweigh the latter. The Swedish approach, on the other hand, appears to be based on a belief that road safety is an important priority for road users and that one of the reasons why road users drive too fast is a lack of information and social support. In order to evaluate road safety interventions and how their effects are created, together with the ambition to transfer technology, there is a need for a comprehensive understanding of the systems and their modi operandi in their specific contexts. This study has shown that there are major differences between the ideas behind the two speed camera programs in Victoria, Australia and Sweden, and that these ideas have an impact on the actual design of the different systems and how they are intended to create road safety effects.

  12. Single Pixel Black Phosphorus Photodetector for Near-Infrared Imaging.

    PubMed

    Miao, Jinshui; Song, Bo; Xu, Zhihao; Cai, Le; Zhang, Suoming; Dong, Lixin; Wang, Chuan

    2018-01-01

    Infrared imaging systems have a wide range of military and civil applications, and 2D nanomaterials have recently emerged as potential sensing materials that may outperform conventional ones such as HgCdTe, InGaAs, and InSb. As an example, 2D black phosphorus (BP) thin film has a thickness-dependent direct bandgap with low shot noise and noncryogenic operation for visible to mid-infrared photodetection. In this paper, the use of a single-pixel photodetector made with few-layer BP thin film for near-infrared imaging applications is demonstrated. The imaging is achieved by combining the photodetector with a digital micromirror device to encode and subsequently reconstruct the image based on a compressive sensing algorithm. Stationary images of a near-infrared laser spot (λ = 830 nm) with up to 64 × 64 pixels are captured using this single-pixel BP camera with 2,000 measurements, which is only about half the total number of pixels. The imaging platform demonstrated in this work circumvents the grand challenge of scalable BP material growth for photodetector array fabrication and shows the efficacy of utilizing the outstanding performance of the BP photodetector for future high-speed infrared camera applications.
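The single-pixel measurement model can be sketched in a few lines: each micromirror pattern projects the scene onto one number, an inner product recorded by the lone BP detector. For a dependency-free illustration the sketch below displays a full Hadamard basis and inverts it exactly; the paper instead uses roughly half that many patterns plus a compressive-sensing solver, so treat this as a simplified toy, not the authors' reconstruction.

```python
import numpy as np

def hadamard(n):
    """Sylvester Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Toy 4x4 scene flattened to a 16-vector:
x = np.arange(16, dtype=float)

# Each DMD pattern is one Hadamard row; the single pixel records one
# inner product <pattern, scene> per displayed pattern.
H = hadamard(16)
y = H @ x            # 16 single-pixel measurements

# With the full orthogonal basis, recovery is exact: x = H^T y / n.
x_rec = (H.T @ y) / 16.0
print(np.allclose(x_rec, x))  # True
```

Compressive sensing exploits image sparsity to get a faithful reconstruction from fewer measurements than pixels, which is what lets the paper stop at about half the pixel count.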

  13. Railway clearance intrusion detection method with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    During railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To overcome the shortcomings of single-image methods, namely depth insensitivity and shadow interference, an intrusion detection method based on binocular stereo vision is proposed to reconstruct the 3D scene, locate objects, and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. To improve the 3D reconstruction speed, a suspicious region is first determined by a background-difference method applied to a single camera's image sequence; image rectification, stereo matching, and 3D reconstruction are executed only when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS; the 3D point clouds are then used to calculate the object position and intrusion in the TCS. Experiments in a railway scene show that the position precision is better than 10 mm. The method is an effective way to detect clearance intrusion and satisfies the requirements of railway applications.
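Applying the CCS-to-TCS transformation to reconstructed points is a standard rigid-body mapping. A minimal sketch follows; the rotation, the half-gauge offset of the standard 1435 mm gauge, and the sample point are made-up illustrative values, since the paper derives its actual matrix from calibration against the gauge constant.

```python
import numpy as np

def to_track_coords(points_ccs, R, t):
    """Rigid transform from the Camera Coordinate System (CCS) to the
    Track Coordinate System (TCS): p_tcs = R @ p_ccs + t, applied to an
    (n, 3) array of reconstructed 3D points."""
    return points_ccs @ R.T + t

# Illustrative calibration: a 90-degree rotation about the vertical axis
# plus a lateral offset of half the 1435 mm standard gauge (made up).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1435.0 / 2.0, 0.0, 0.0])

p_tcs = to_track_coords(np.array([[1000.0, 0.0, 500.0]]), R, t)
print(p_tcs)  # one point at (717.5, 1000.0, 500.0) mm in the TCS
```

Once points are expressed in the TCS, the clearance-intrusion test reduces to comparing their lateral and vertical coordinates against the clearance envelope.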

  14. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  15. Vision-Based People Detection System for Heavy Machine Applications.

    PubMed

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-20

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  16. Real-time image processing for particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Kreizer, Mark; Ratner, David; Liberzon, Alex

    2010-01-01

We present a novel high-speed particle tracking velocimetry (PTV) experimental system. Its novelty lies in FPGA-based, real-time image processing performed "on camera": instead of transferring full images, the camera sends the computer only the relevant information about the identified flow tracers over a network card. The system is therefore well suited to remote particle tracking in research and industrial applications, since the camera can be controlled, and data transferred, over any high-bandwidth network. We present the hardware and open-source software aspects of the PTV experiments. The tracking results of the new experimental system have been compared to flow visualization and particle image velocimetry (PIV) measurements. The canonical flow in the central cross-section of a cubic cavity (1:1:1 aspect ratio) in our lid-driven cavity apparatus is used for validation purposes. The downstream secondary eddy (DSE) is the most sensitive portion of this flow, and its size was measured as the Reynolds number was increased (via increasing belt velocity). The size of the DSE estimated from flow visualization, PIV and compressed PTV is shown to agree within the experimental uncertainty of the methods applied.
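
The on-camera reduction described above replaces image transfer with tracer coordinates. As an illustrative software sketch only (not the authors' FPGA pipeline), the two core steps are blob-centroid detection and frame-to-frame nearest-neighbour linking; thresholds and window sizes here are invented:

```python
import numpy as np

def detect_centroids(frame, threshold):
    """Return (row, col) centroids of bright tracer blobs.

    Simple flood-fill labelling; real FPGA pipelines use streaming
    connected-component logic, but the output is the same idea:
    per-blob centroids instead of the full image.
    """
    mask = frame > threshold
    labels = np.zeros(frame.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                    and mask[r, c] and not labels[r, c]):
                labels[r, c] = current
                stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return [np.argwhere(labels == i).mean(axis=0)
            for i in range(1, current + 1)]

def link_nearest(prev_pts, next_pts, max_disp):
    """Greedy nearest-neighbour linking of tracers between two frames."""
    pairs, used = [], set()
    for i, p in enumerate(prev_pts):
        d = [np.linalg.norm(p - q) for q in next_pts]
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs
```

The per-frame output is then a handful of coordinate pairs rather than a megapixel image, which is what makes transfer over an ordinary network feasible.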

  17. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

A wide variety of applications, from industrial to entertainment, needs reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is fast, as in vehicle motion, sports biomechanics or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation, owing to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for various tasks requiring 3D motion analysis. It is based on photogrammetric techniques for 3D measurement and provides high-speed image acquisition, high 3D measurement accuracy and highly automated processing of captured data. Depending on the application, the system can easily be reconfigured for working areas from 100 mm to 10 m. The developed motion capture system uses two to four machine vision cameras to acquire video sequences of object motion. All cameras operate synchronously at frame rates of up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system has been used in a range of application fields and demonstrated high accuracy and a high level of automation.

  18. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

Human Motion Analysis (HMA) has been one of the major interests of researchers in computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to its wide and promising biomedical applications: bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, covering camera types, camera calibration and camera configuration. The review focuses on evaluating camera-system considerations for HMA systems aimed specifically at biomedical applications. It is important because it provides guidelines and recommendations for researchers and practitioners selecting a camera system for a biomedical HMA system.

  19. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    PubMed Central

    Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.

    2016-01-01

    ABSTRACT Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791

  20. A Real-Time Method to Estimate Speed of Object Based on Object Detection and Optical Flow Calculation

    NASA Astrophysics Data System (ADS)

    Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan

    2018-04-01

In recent years the Convolutional Neural Network (CNN) has been widely used in computer vision and has made great progress in tasks such as object detection and classification. Moreover, combining CNNs, i.e. running multiple CNN frameworks synchronously and sharing their outputs, can yield useful information that none of them provides on its own. Here we introduce a method for real-time estimation of object speed that combines two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, location and type, while FlowNet provides the optical flow of the whole image. On one hand, object size and location are used to select the object's region of the optical-flow image and thus compute the average optical flow of each object. On the other hand, object type and size help establish the relationship between optical flow and true speed by means of optical theory and prior knowledge. With these two pieces of information, the speed of each object can be estimated. The method estimates the speed of multiple objects in real time using only an ordinary camera, even when the camera itself is moving, with an error that is acceptable in most application fields such as autonomous driving or robot vision.
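
The scale-recovery step in the abstract can be sketched as follows. This is a hypothetical illustration of the idea, not the paper's implementation: the assumed physical width of a detected object class (prior knowledge) and its pixel width in the frame give a metres-per-pixel scale, which converts the object's mean optical flow into a metric speed; all numbers below are invented:

```python
def estimate_speed(mean_flow_px, bbox_width_px, real_width_m, fps):
    """Speed in m/s from per-frame optical flow averaged over a detection box.

    mean_flow_px : mean flow magnitude inside the box (pixels/frame)
    bbox_width_px: detected box width in pixels (from the detector)
    real_width_m : assumed physical width of this object class (metres)
    fps          : camera frame rate (frames/second)
    """
    metres_per_px = real_width_m / bbox_width_px
    return mean_flow_px * metres_per_px * fps

# Example: a car assumed ~1.8 m wide, detected 90 px wide, with a mean
# flow of 5 px/frame at 30 fps -> 5 * (1.8 / 90) * 30 = 3.0 m/s
```

This simple model ignores perspective and camera ego-motion; the paper's use of object type is precisely to pick the right physical-size prior per class.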

  1. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... acceptable. (b) Game play standards. (1) The functions of seller and payout verifier shall be segregated... selected in the bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera...

  2. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... section, as approved by the Tribal gaming regulatory authority, will be acceptable. (b) Game play... bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera equipment, each ball...

  3. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Game play standards. (1) The functions of seller and payout verifier shall be segregated. Employees who... selected in the bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera...

  4. Apollo 8 Mission image, Farside of Moon

    NASA Image and Video Library

    1968-12-21

    Apollo 8, Farside of Moon. Image taken on Revolution 4. Camera Tilt Mode: Vertical Stereo. Sun Angle: 13. Original Film Magazine was labeled D. Camera Data: 70mm Hasselblad. Lens: 80mm; F-Stop: F/2.8; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White, ASA 40. Flight Date: December 21-27, 1968.

  5. Catchment-Scale Terrain Modelling with Structure-from-Motion Photogrammetry: a replacement for airborne lidar?

    NASA Astrophysics Data System (ADS)

    Brasington, J.

    2015-12-01

    Over the last five years, Structure-from-Motion photogrammetry has dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, have led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales where relaxed logistics permit the use of dense ground control and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km², where it could offer a competitive alternative to landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of the sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection.
Results indicate that camera network design is a key determinant of model quality, and that standard aerial networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high quality, large-scale terrain products that are suitable for precision fluvial change detection.
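
The a posteriori georeferencing step mentioned above, mapping arbitrary SfM model coordinates onto surveyed ground-control coordinates, is a seven-parameter similarity transform (scale, rotation, translation). A minimal sketch using the closed-form Umeyama/Horn least-squares solution, offered as an illustration rather than the authors' processing chain:

```python
import numpy as np

def fit_similarity(model_pts, control_pts):
    """Least-squares similarity transform (s, R, t) such that
    control ~= s * R @ model + t, fitted from matched 3D point sets
    (Umeyama/Horn closed-form solution via SVD)."""
    mu_m = model_pts.mean(axis=0)
    mu_c = control_pts.mean(axis=0)
    X = model_pts - mu_m            # centred model coordinates
    Y = control_pts - mu_c          # centred control coordinates
    U, S, Vt = np.linalg.svd(Y.T @ X / len(X))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / X.var(axis=0).sum()
    t = mu_c - s * R @ mu_m
    return s, R, t
```

At least three well-distributed, non-collinear control points are needed; in practice many more are used, and their density is exactly the compromise discussed above.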

  6. Catchment-Scale Terrain Modelling with Structure-from-Motion Photogrammetry: a replacement for airborne lidar?

    NASA Astrophysics Data System (ADS)

    Brasington, James; James, Joe; Cook, Simon; Cox, Simon; Lotsari, Eliisa; McColl, Sam; Lehane, Niall; Williams, Richard; Vericat, Damia

    2016-04-01

    In recent years, 3D terrain reconstructions based on Structure-from-Motion photogrammetry have dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, have led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales (0.1-5 ha), where relaxed logistics permit the use of dense ground control networks and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km², where it could offer a competitive alternative to established landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of the sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of the quality of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection.
Results indicate that camera network design is a key determinant of model quality, and that standard aerial photogrammetric networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high quality, large-scale terrain products that are suitable for precision fluvial change detection.

  7. Application of acoustic imaging techniques on snowmobile pass-by noise.

    PubMed

    Padois, Thomas; Berry, Alain

    2017-02-01

    Snowmobile manufacturers invest considerable effort in reducing the noise emission of their products. The noise sources of a snowmobile are multiple and closely spaced, making source separation difficult in practice. In this study, source imaging results for snowmobile pass-by noise are discussed. The experiments involve a 193-microphone Underbrink array, with synchronization of acoustic and video data provided by a high-speed camera. Both conventional beamforming and Clean-SC deconvolution are implemented to produce noise source maps of the snowmobile. The results clearly reveal noise emission from the engine, exhaust, and track, depending on the frequency range considered.
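
Conventional beamforming, the baseline method above, is at its core delay-and-sum: each microphone channel is time-aligned on its propagation delay from a hypothesised source position and the channels are averaged, so sound from that position adds coherently. A minimal time-domain sketch under idealised assumptions (the actual 193-microphone Underbrink processing is far more elaborate):

```python
import numpy as np

def delay_and_sum(signals, mic_xyz, focus_xyz, fs, c=343.0):
    """Time-domain delay-and-sum beamforming toward one focus point.

    signals : (n_mics, n_samples) array of microphone channels
    mic_xyz : (n_mics, 3) microphone positions (m)
    focus_xyz : (3,) assumed source position (m)
    fs : sample rate (Hz); c : speed of sound (m/s)
    """
    dists = np.linalg.norm(mic_xyz - focus_xyz, axis=1)
    # Integer-sample delays relative to the closest microphone.
    shifts = np.round((dists - dists.min()) / c * fs).astype(int)
    n = signals.shape[1] - shifts.max()      # common valid length
    out = np.zeros(n)
    for sig, s in zip(signals, shifts):
        out += sig[s:s + n]                  # advance by its extra delay
    return out / len(signals)
```

Scanning the focus point over a grid and plotting the output power at each point yields the source map; deconvolution methods such as Clean-SC then sharpen that map.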

  8. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  9. Camera Installation on a Beach AT-11

    NASA Image and Video Library

    1950-02-21

    Researchers at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory conducted an extensive investigation into the composition of clouds and their effect on aircraft icing. The researcher in this photograph is installing cameras on a Beach AT-11 Kansan in order to photograph water droplets during flights through clouds. The twin engine AT-11 was the primary training aircraft for World War II bomber crews. The NACA acquired this aircraft in January 1946, shortly after the end of the war. The NACA Lewis’ icing research during the war focused on the resolution of icing problems for specific military aircraft. In 1947 the laboratory broadened its program and began systematically measuring and categorizing clouds and water droplets. The three main thrusts of the Lewis icing flight research were the development of better instrumentation, the accumulation of data on ice buildup during flight, and the measurement of droplet sizes in clouds. The NACA researchers developed several types of measurement devices for the icing flights, including modified cameras. The National Research Council of Canada experimented with high-speed cameras with a large magnification lens to photograph the droplets suspended in the air. In 1951 NACA Lewis developed and flight tested their own camera with a magnification of 32. The camera, mounted to an external strut, could be used every five seconds as the aircraft reached speeds up to 150 miles per hour. The initial flight tests through cumulus clouds demonstrated that droplet size distribution could be studied.

  10. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sport, to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that feature detection performance is above 92% and event detection about 90%.

  11. Engineer's drawing of Skylab 4 Far Ultraviolet Electronographic camera

    NASA Image and Video Library

    1973-11-19

    S73-36910 (November 1973) --- An engineer's drawing of the Skylab 4 Far Ultraviolet Electronographic camera (Experiment S201). Arrows point to various features and components of the camera. As the Comet Kohoutek streams through space at speeds of 100,000 miles per hour, the Skylab 4 crewmen will use the S201 UV camera to photograph features of the comet not visible from the Earth's surface. While the comet is some distance from the sun, the camera will be pointed through the scientific airlock in the wall of the Skylab space station Orbital Workshop (OWS). By using a movable mirror system built for the Ultraviolet Stellar Astronomy (S019) Experiment and rotating the space station, the S201 camera will be able to photograph the comet around the side of the space station. Photo credit: NASA

  12. Application of High Speed Digital Image Correlation in Rocket Engine Hot Fire Testing

    NASA Technical Reports Server (NTRS)

    Gradl, Paul R.; Schmidt, Tim

    2016-01-01

    Hot-fire testing of rocket engine components and systems is a critical part of the development process for understanding performance, reliability and system interactions. Ground testing provides the opportunity for highly instrumented development testing to validate analytical model predictions and determine necessary design changes and process improvements. To properly obtain discrete measurements for model validation, instrumentation must survive the highly dynamic, extreme-temperature environment of hot-fire testing. Digital Image Correlation has been investigated and is being evaluated as a technique to augment traditional instrumentation during component and engine testing, providing further data for additional performance improvements and cost savings. The feasibility of digital image correlation techniques was demonstrated in subscale and full-scale hot-fire testing. A pair of high-speed cameras, installed and operated under the extreme environments present on the test stand, measured three-dimensional, real-time displacements and strains. The development process, setup and calibration, hot-fire test data collection, and post-test analysis and results are presented in this paper.

  13. High-speed spectral domain polarization-sensitive OCT using a single InGaAs line-scan camera and an optical switch

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Won; Jeong, Hyun-Woo; Kim, Beop-Min

    2010-02-01

    We demonstrate high-speed spectral domain polarization-sensitive optical coherence tomography (SD-PSOCT) using a single InGaAs line-scan camera and an optical switch in the 1.3-μm region. The polarization-sensitive low-coherence interferometer in the system is based on the original free-space PS-OCT system published by Hee et al. The horizontal and vertical polarization light rays split by a polarization beam splitter are delivered via an optical switch to a single spectrometer in turn, instead of to dual spectrometers. The SD-PSOCT system has an axial resolution of 8.2 μm, a sensitivity of 101.5 dB, and an acquisition speed of 23,496 A-lines/s. We obtained the intensity, phase retardation, and fast axis orientation images of a biological tissue. In addition, we calculated the averaged axial profiles of the phase retardation in human skin.

  14. Cavitation effect of holmium laser pulse applied to ablation of hard tissue underwater.

    PubMed

    Lü, Tao; Xiao, Qing; Xia, Danqing; Ruan, Kai; Li, Zhengjia

    2010-01-01

    To overcome the non-consecutive nature of shadow and schlieren photography, the complete dynamics of cavitation bubble oscillation and of the ablation products induced by a single holmium laser pulse [2.12 μm, 300 μs (FWHM)] transmitted through fibers of different core diameters (200, 400, and 600 μm) are recorded by means of high-speed photography. Consecutive images from high-speed cameras capture the true and complete process of laser-water and laser-tissue interaction. Both laser pulse energy and fiber diameter determine the cavitation bubble size, which in turn determines the acoustic transient amplitudes. Based on the pictures taken by the high-speed camera and scans from an optical coherence microscopy (OCM) system, it is evident that the liquid layer at the distal end of the fiber plays an important role in laser-tissue interaction: it can increase ablation efficiency, decrease thermal side effects, and reduce cost.

  15. High-speed spectral domain polarization- sensitive optical coherence tomography using a single camera and an optical switch at 1.3 microm.

    PubMed

    Lee, Sang-Won; Jeong, Hyun-Woo; Kim, Beop-Min

    2010-01-01

    We propose high-speed spectral domain polarization-sensitive optical coherence tomography (SD-PS-OCT) using a single camera and a 1x2 optical switch at the 1.3-microm region. The PS-low coherence interferometer used in the system is constructed using free-space optics. The reflected horizontal and vertical polarization light rays are delivered via an optical switch to a single spectrometer by turns. Therefore, our system costs less to build than those that use dual spectrometers, and the processes of timing and triggering are simpler from the viewpoints of both hardware and software. Our SD-PS-OCT has a sensitivity of 101.5 dB, an axial resolution of 8.2 microm, and an acquisition speed of 23,496 A-scans per second. We obtain the intensity, phase retardation, and fast axis orientation images of a rat tail tendon ex vivo.

  16. Temperature grid sensor for the measurement of spatial temperature distributions at object surfaces.

    PubMed

    Schäfer, Thomas; Schubert, Markus; Hampel, Uwe

    2013-01-25

    This paper presents results of the development and application of a new temperature grid sensor based on the wire-mesh sensor principle. The grid sensor consists of a matrix of 256 Pt1000 platinum chip resistors and an associated electronics that measures the grid resistances with a multiplexing scheme at high speed. The individual sensor elements can be spatially distributed on an object surface and measure transient temperature distributions in real time. The advantage compared with other temperature field measurement approaches such as infrared cameras is that the object under investigation can be thermally insulated and the radiation properties of the surface do not affect the measurement accuracy. The sensor principle is therefore suited for various industrial monitoring applications. Its applicability for surface temperature monitoring has been demonstrated through heating and mixing experiments in a vessel.
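
Each Pt1000 element maps its measured resistance to temperature through the standard IEC 60751 platinum characteristic. As a sketch of that conversion step only (the sensor's multiplexing electronics and calibration are not shown), inverting the Callendar-Van Dusen equation for temperatures at or above 0 °C:

```python
import math

def pt1000_temperature(resistance_ohm, r0=1000.0):
    """Convert a measured Pt1000 resistance to temperature in degrees C,
    valid for T >= 0 degrees C, by inverting the Callendar-Van Dusen
    equation R(T) = R0 * (1 + A*T + B*T^2) with the standard
    IEC 60751 coefficients."""
    A = 3.9083e-3
    B = -5.775e-7
    # Solve B*T^2 + A*T + (1 - R/R0) = 0 and take the physical root.
    c = 1.0 - resistance_ohm / r0
    return (-A + math.sqrt(A * A - 4 * B * c)) / (2 * B)

# A reading of about 1385.1 ohm corresponds to roughly 100 degrees C.
```

Below 0 °C the characteristic gains a cubic term and a lookup table or iterative inversion is normally used instead.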

  17. Fuzzy logic control of an AGV

    NASA Astrophysics Data System (ADS)

    Kelkar, Nikhal; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic approach for steering and speed control, a neuro-fuzzy approach for ultrasound sensing (not discussed in this paper) and an overall expert system. The advantages of a modular system are portability and transportability, i.e. any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed from a golf cart base. The cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The fuzzy logic speed and steering controller is supervised by a 486 computer through a multi-axis motion controller. The obstacle avoidance system is based on a microcontroller interfaced with six ultrasonic transducers. This microcontroller independently handles all timing and distance calculations and sends a steering-angle correction back to the computer via the serial line. This design yields a portable, independent system in which high-speed computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data are collected by a vision tracking device that transmits the X, Y coordinates of the lane marker to the control computer. Simulation and testing of these systems yielded promising results. This design, in its modularity, creates a portable autonomous fuzzy logic controller applicable to any mobile vehicle with only minor adaptations.
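
As an illustration of the fuzzy steering idea, a toy rule base can map lane-offset error to a steering angle via triangular membership functions and weighted-mean (centroid) defuzzification. The membership shapes, ranges and gains below are invented for the sketch, not those of the golf-cart controller:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(error_m):
    """Toy fuzzy rule base: lateral offset from the lane marker (m,
    negative = vehicle left of lane) -> steering angle (deg)."""
    rules = [
        (tri(error_m, -2.0, -1.0, 0.0), 15.0),   # far left  -> steer right
        (tri(error_m, -1.0,  0.0, 1.0),  0.0),   # centred   -> hold course
        (tri(error_m,  0.0,  1.0, 2.0), -15.0),  # far right -> steer left
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

The appeal, as in the paper, is that overlapping rules blend smoothly: an offset of -0.5 m fires two rules at half strength and yields an intermediate correction.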

  18. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied in many fields, from households to industrial sites, and user interfaces with simple on-screen displays are increasingly common. User demands are growing and, given the high penetration rate of the Internet, the range of applicable fields keeps widening, so the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera attached to the embedded Linux system, real-time broadcasting of video images over the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and successive frames from the web camera are compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast operation. The displacement vector then drives the pan/tilt motors through an RS232 serial cable. The embedded board uses the S3C2410 MPU, built around Samsung's ARM920T core. An embedded Linux kernel was ported to the board and a root file system mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program built on TCP/IP.
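
The displacement-vector step above is classically done with exhaustive block matching: a reference block from the previous frame is slid over a search window in the current frame and the offset minimising the sum of absolute differences (SAD) wins. A minimal sketch with illustrative parameters (not the S3C2410 implementation):

```python
import numpy as np

def block_match(prev, curr, top, left, block=8, search=4):
    """Exhaustive-search block matching: return the (dy, dx) displacement
    of the block at (top, left) in `prev` that minimises SAD within a
    +/-`search` window of `curr`."""
    ref = prev[top:top + block, left:left + block].astype(int)
    best, best_dv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if (r < 0 or c < 0
                    or r + block > curr.shape[0]
                    or c + block > curr.shape[1]):
                continue  # candidate block falls outside the frame
            sad = np.abs(curr[r:r + block, c:c + block].astype(int) - ref).sum()
            if best is None or sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv
```

The resulting (dy, dx) is exactly the kind of correction that can be sent over a serial line as pan/tilt commands.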

  19. Automation of a high-speed imaging setup for differential viscosity measurements

    NASA Astrophysics Data System (ADS)

    Hurth, C.; Duane, B.; Whitfield, D.; Smith, S.; Nordquist, A.; Zenhausern, F.

    2013-12-01

    We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic printed circuit board (PCB) incorporating a microcontroller and its synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation effort, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup through calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an "unknown" solution of hydroxyethyl cellulose.
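
The classic link between a small sphere's motion in a fluid and the fluid's viscosity is Stokes' drag law. This is offered only as an illustrative aside, with invented numbers; the paper's own analysis of the high-speed image data may use a different model:

```python
def stokes_viscosity(radius_m, v_terminal_m_s, rho_sphere, rho_fluid, g=9.81):
    """Dynamic viscosity (Pa*s) from the terminal settling velocity of a
    small sphere, via Stokes' drag law (valid only at low Reynolds number):
        eta = 2 * r^2 * g * (rho_s - rho_f) / (9 * v)
    """
    return 2 * radius_m**2 * g * (rho_sphere - rho_fluid) / (9 * v_terminal_m_s)

# Example: a 1 mm-radius glass sphere (2500 kg/m^3) settling at about
# 1.93 mm/s in glycerol (1260 kg/m^3) implies eta of roughly 1.4 Pa*s.
```

At higher Reynolds numbers the linear-drag assumption breaks down and empirical drag corrections are required, which is one reason high-speed imaging of the full trajectory is more informative than a single velocity reading.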

  20. Automation of a high-speed imaging setup for differential viscosity measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurth, C.; Duane, B.; Whitfield, D.

    We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic printed circuit board (PCB) incorporating a microcontroller and its synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation effort, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup through calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an “unknown” solution of hydroxyethyl cellulose.

  1. fastSIM: a practical implementation of fast structured illumination microscopy.

    PubMed

    Lu-Walther, Hui-Wen; Kielhorn, Martin; Förster, Ronny; Jost, Aurélie; Wicker, Kai; Heintzmann, Rainer

    2015-01-16

    A significant improvement in the acquisition speed of structured illumination microscopy (SIM) opens this already well-established super-resolution method to a new field of applications: real-time 3D imaging of living cells. We demonstrate a method of increased acquisition speed on a two-beam SIM fluorescence microscope with a lateral resolution of ~100 nm at a maximum raw-data acquisition rate of 162 frames per second (fps) with a region of interest of 16.5 × 16.5 µm², free of mechanically moving components. We use a programmable spatial light modulator (ferroelectric LCOS), which promises precise and rapid control of the excitation pattern in the sample plane. A passive Fourier filter and a segmented, azimuthally patterned polarizer are used to perform structured illumination with maximum contrast. Furthermore, the free-running mode of a modern sCMOS camera helps to achieve faster data acquisition.
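
    For context on what the raw rate implies (a back-of-envelope calculation, not a figure from the record): conventional two-beam 2D SIM reconstructs one super-resolved image from 3 pattern phases × 3 orientations = 9 raw frames.

```python
# Back-of-envelope context, not a figure from the paper: two-beam 2D SIM
# conventionally acquires 3 pattern phases x 3 orientations = 9 raw
# frames per reconstructed super-resolved image.

raw_fps = 162                  # raw acquisition rate quoted in the abstract
frames_per_recon = 3 * 3       # phases x orientations (standard 2D SIM)
recon_fps = raw_fps / frames_per_recon
print(recon_fps)               # 18.0
```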

  2. fastSIM: a practical implementation of fast structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Lu-Walther, Hui-Wen; Kielhorn, Martin; Förster, Ronny; Jost, Aurélie; Wicker, Kai; Heintzmann, Rainer

    2015-03-01

    A significant improvement in the acquisition speed of structured illumination microscopy (SIM) opens this already well-established super-resolution method to a new field of applications: real-time 3D imaging of living cells. We demonstrate a method of increased acquisition speed on a two-beam SIM fluorescence microscope with a lateral resolution of ~100 nm at a maximum raw-data acquisition rate of 162 frames per second (fps) with a region of interest of 16.5 × 16.5 µm², free of mechanically moving components. We use a programmable spatial light modulator (ferroelectric LCOS), which promises precise and rapid control of the excitation pattern in the sample plane. A passive Fourier filter and a segmented, azimuthally patterned polarizer are used to perform structured illumination with maximum contrast. Furthermore, the free-running mode of a modern sCMOS camera helps to achieve faster data acquisition.

  3. Holographic digital microscopy in on-line process control

    NASA Astrophysics Data System (ADS)

    Osanlou, Ardeshir

    2011-09-01

    This article investigates the feasibility of real-time three-dimensional imaging of microscopic objects within various emulsions while being produced in specialized production vessels. The study is particularly relevant to on-line process monitoring and control in chemical, pharmaceutical, food, cleaning, and personal hygiene industries. Such processes are often dynamic and the materials cannot be measured once removed from the production vessel. The technique reported here is applicable to three-dimensional characterization analyses on stirred fluids in small reaction vessels. Relatively expensive pulsed lasers have been avoided through the careful control of the speed of the moving fluid in relation to the speed of the camera exposure and the wavelength of the continuous wave laser used. The ultimate aim of the project is to introduce a fully robust and compact digital holographic microscope as a process control tool in a full size specialized production vessel.
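
    The speed/exposure/wavelength trade-off mentioned above can be bounded with a standard holography rule of thumb (a hedged back-of-envelope, assuming a λ/10 motion budget and illustrative wavelength and exposure values, not the paper's parameters):

```python
# Hedged back-of-envelope for the speed/exposure trade-off the abstract
# alludes to: a common holography rule of thumb is that object motion
# during the exposure should stay below ~lambda/10. The wavelength and
# exposure values below are assumptions, not the paper's parameters.

wavelength = 532e-9     # CW laser wavelength in m (assumed)
exposure = 1e-6         # camera exposure time in s (assumed)
max_speed = (wavelength / 10) / exposure   # tolerable fluid speed, m/s
print(max_speed)        # ~0.053 m/s
```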

  4. The Accuracy of Conventional 2D Video for Quantifying Upper Limb Kinematics in Repetitive Motion Occupational Tasks

    PubMed Central

    Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.

    2015-01-01

    Objective: Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background: Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods: Ten participants performed a paced load-transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground-truth measurements using 3D infrared motion capture. Results: The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Conclusion: Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary: This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion, compared against 3D motion capture for a simulated repetitive-motion task. PMID:25978764
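
    The cross-correlation template-matching idea can be sketched as follows (hypothetical code, not the authors' implementation; a 1-D sum-of-absolute-differences match stands in for 2-D cross correlation, and the frame rate and mm-per-pixel calibration are assumed):

```python
# Illustrative sketch, not the authors' code: track a small template
# across 1-D "frames" by minimizing sum-of-absolute-differences, then
# convert the pixel displacement to a speed using the frame rate and a
# hypothetical mm-per-pixel calibration.

def best_offset(frame, template):
    """Return the offset where the template matches best (minimum SAD)."""
    scores = [(sum(abs(f - t) for f, t in zip(frame[i:], template)), i)
              for i in range(len(frame) - len(template) + 1)]
    return min(scores)[1]

fps, mm_per_px = 30.0, 0.5          # assumed calibration values
template = [9, 9, 9]
frame_a = [0, 9, 9, 9, 0, 0, 0]     # tracked region at frame n
frame_b = [0, 0, 0, 9, 9, 9, 0]     # same region at frame n + 1
dx = best_offset(frame_b, template) - best_offset(frame_a, template)
speed = abs(dx) * mm_per_px * fps   # mm/s between consecutive frames
print(speed)                        # 30.0
```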

  5. Dynamics identification of a piezoelectric vibrational energy harvester by image analysis with a high speed camera

    NASA Astrophysics Data System (ADS)

    Wolszczak, Piotr; Łygas, Krystian; Litak, Grzegorz

    2018-07-01

    This study investigates dynamic responses of a nonlinear vibration energy harvester. The nonlinear mechanical resonator consists of a flexible beam moving like an inverted pendulum between amplitude limiters. It is coupled with a piezoelectric converter, and excited kinematically. Consequently, the mechanical energy input is converted into the electrical power output on the loading resistor included in an electric circuit attached to the piezoelectric electrodes. The curvature of beam mode shapes as well as deflection of the whole beam are examined using a high speed camera. The visual identification results are compared with the voltage output generated by the piezoelectric element for corresponding frequency sweeps and analyzed by the Hilbert transform.

  6. Exploring the Universe with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A general overview is given of the operations, engineering challenges, and components of the Hubble Space Telescope. Deployment, checkout and servicing in space are discussed. The optical telescope assembly, focal plane scientific instruments, wide field/planetary camera, faint object spectrograph, faint object camera, Goddard high resolution spectrograph, high speed photometer, fine guidance sensors, second generation technology, and support systems and services are reviewed.

  7. Image processing analysis on the air-water slug two-phase flow in a horizontal pipe

    NASA Astrophysics Data System (ADS)

    Dinaryanto, Okto; Widyatama, Arif; Majid, Akmal Irfan; Deendarlianto, Indarto

    2016-06-01

    Slug flow is an intermittent flow regime that is avoided in industrial applications because of its irregularity and high pressure fluctuations. These characteristics cause problems such as internal corrosion and damage to pipeline constructions. To understand slug characteristics, measurement techniques such as wire-mesh sensors, CECM, and high-speed cameras can be applied. The present study aimed to determine slug characteristics using image processing techniques. Experiments were carried out in a 26 mm i.d., 9 m long horizontal acrylic pipe. The air-water flow was recorded 5 m from the air-water mixer using a high-speed video camera. Each image sequence was processed using MATLAB. The algorithm comprises several steps, including image complementing, background subtraction, and image filtering, to produce binary images. Special treatment was also applied to reduce the disturbance from dispersed bubbles around the main bubble. The binary images were then used to trace the bubble contour and calculate slug parameters such as gas slug length, gas slug velocity, and slug frequency. As a result, the effect of superficial gas velocity and superficial liquid velocity on these fundamental parameters can be understood. Comparison with previous experimental results shows that image processing is a useful and promising technique for explaining slug characteristics.
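
    The binarization steps described above can be sketched in a few lines (illustrative pure Python, not the authors' MATLAB code; the frame values and threshold are assumptions):

```python
# Illustrative pure-Python sketch of the binarization pipeline described
# above (background subtraction + thresholding); the MATLAB original is
# not available, so the frame values and the threshold are assumptions.

def binarize(frame, background, threshold=50):
    """Subtract a static background frame, then threshold to binary."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def slug_length_px(binary_row):
    """Length (in pixels) of the longest gas region in one image row."""
    best = run = 0
    for v in binary_row:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

background = [[10, 10, 10, 10, 10, 10]]
frame      = [[10, 200, 210, 205, 10, 10]]   # bright gas slug mid-row
binary = binarize(frame, background)
print(binary[0])                  # [0, 1, 1, 1, 0, 0]
print(slug_length_px(binary[0]))  # 3
```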

  8. Multiple-frame IR photo-recorder KIT-3M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E; Wilkins, P; Nebeker, N

    2006-05-15

    This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the spectral range of 1-10 micrometers into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 °C to 2000 °C with an exposure time of 1-20 µs at a frame frequency of up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR-laser radiation.

  9. Application of X-ray micro-computed tomography on high-speed cavitating diesel fuel flows

    NASA Astrophysics Data System (ADS)

    Mitroglou, N.; Lorenzi, M.; Santini, M.; Gavaises, M.

    2016-11-01

    The flow inside a purpose-built, enlarged single-orifice nozzle replica is quantified using time-averaged X-ray micro-computed tomography (micro-CT) and high-speed shadowgraphy. Results have been obtained at Reynolds and cavitation numbers similar to those of real-size injectors. Good agreement for the cavitation extent inside the orifice is found between the micro-CT and the corresponding temporal-mean 2D cavitation image, as captured by the high-speed camera. However, the internal 3D structure of the developing cavitation cloud reveals a hollow vapour cloud ring formed at the hole entrance and extending only at the lower part of the hole due to the asymmetric flow entry. Moreover, the cavitation volume fraction exhibits a significant gradient along the orifice volume. The cavitation number and the needle valve lift seem to be the most influential operating parameters, while the Reynolds number seems to have only a small effect for the range of values tested. Overall, the study demonstrates that micro-CT can be a reliable tool for characterizing cavitation in nozzle orifices operating under nominal steady-state conditions.

  10. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    NASA Astrophysics Data System (ADS)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve existing features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state-of-the-art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun).
Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto-focusing approach can be successfully used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
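
    The search loop at the heart of passive AF can be illustrated with a toy hill-climb (a sketch only, not the dissertation's Filter-Switching algorithm; the quadratic sharpness function and step sizes are stand-ins for a real band-pass sharpness measure):

```python
# Toy sketch of the search-based passive-AF loop described above: step a
# focus motor, score each lens position, reverse and refine on overshoot.
# The quadratic "sharpness" function is a stand-in for a real band-pass
# energy measure; it is NOT the dissertation's Filter-Switching method.

def sharpness(pos, in_focus=42):
    return -(pos - in_focus) ** 2      # peaks at the in-focus position

def hill_climb_af(start=0, step=8, lo=0, hi=100):
    """Coarse-to-fine search: reverse direction and halve the step on overshoot."""
    pos, best = start, sharpness(start)
    direction, iterations = 1, 0
    while step >= 1:
        nxt = min(hi, max(lo, pos + direction * step))
        iterations += 1
        s = sharpness(nxt)
        if s > best:
            pos, best = nxt, s         # still climbing: accept the move
        else:
            direction = -direction     # overran the peak: reverse...
            step //= 2                 # ...and refine with a smaller step
    return pos, iterations

pos, its = hill_climb_af()
print(pos)   # 42
```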

  11. Solid state replacement of rotating mirror cameras

    NASA Astrophysics Data System (ADS)

    Frank, Alan M.; Bartolick, Joseph M.

    2007-01-01

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution, and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed 'In-situ Storage Image Sensor' or 'ISIS', by Prof. Goji Etoh has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, though there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  12. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  13. Faster than "g", Revisited with High-Speed Imaging

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2012-01-01

    The introduction of modern high-speed cameras in physics teaching provides a tool not only for easy visualization, but also for quantitative analysis of many simple though fast-occurring phenomena. As an example, we present a very well-known demonstration experiment--sometimes also discussed in the context of falling chimneys--which is commonly…

  14. Sequence of the Essex-Lopresti lesion—a high-speed video documentation and kinematic analysis

    PubMed Central

    2014-01-01

    Background and purpose: The pathomechanics of the Essex-Lopresti lesion are not fully understood. We used human cadavers and documented the genesis of the injury with high-speed cameras. Methods: 4 formalin-fixed cadaveric specimens of human upper extremities were tested in a prototype, custom-made, drop-weight test bench. An axial high-energy impulse was applied and the development of the lesion was documented with 3 high-speed cameras. Results: The high-speed images showed a transversal movement of the radius and ulna, which moved away from each other in the transversal plane during the impact. This resulted in a transversal rupture of the interosseous membrane, starting in its central portion; only then did the radius migrate proximally and fracture. The lesion proceeded to the dislocation of the distal radio-ulnar joint and then to a full-blown Essex-Lopresti lesion. Interpretation: Our findings indicate that fracture of the radial head may be preceded by at least partial lesions of the interosseous membrane in the course of high-energy axial trauma. PMID:24479620

  15. Application of image converter camera to measure flame propagation in S. I. engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, A.; Ishii, K.; Sasaki, T.

    1989-01-01

    A combustion flame visualization system, for use as an engine diagnostics tool, was developed in order to evaluate combustion chamber shapes in the development stage of mass-produced spark ignition (S.I.) engines. The system consists of an image converter camera and a computer-aided image processing system. The system is capable of high-speed photography (10,000 fps) at low light intensity (1,000 cd/m²), and of real-time display of the raw images of combustion flames. Using this system, the flame structure estimated from the brightness level on a photograph and the direction of flame propagation in a mass-produced 4-valve engine were measured. Differences in the structure and propagation of the flame between the 4-valve and quasi-2-valve combustion chambers, which produce the same pressure diagram, were detected. The quasi-2-valve configuration was adopted in order to improve swirl intensity.

  16. Simultaneous planar measurements of soot structure and velocity fields in a turbulent lifted jet flame at 3 kHz

    NASA Astrophysics Data System (ADS)

    Köhler, M.; Boxx, I.; Geigle, K. P.; Meier, W.

    2011-05-01

    We describe a newly developed combustion diagnostic for the simultaneous planar imaging of soot structure and velocity fields in a highly sooting, lifted turbulent jet flame at 3000 frames per second, or two orders of magnitude faster than "conventional" laser imaging systems. This diagnostic uses short-pulse-duration (8 ns), frequency-doubled, diode-pumped solid state (DPSS) lasers to excite laser-induced incandescence (LII) at 3 kHz, which is then imaged onto a high-framerate CMOS camera. A second (dual-cavity) DPSS laser and CMOS camera form the basis of a particle image velocimetry (PIV) system used to acquire two-component velocity fields in the flame. The LII response curve (measured in a laminar propane diffusion flame) is presented and the combined diagnostics are then applied in a heavily sooting, lifted turbulent jet flame. The potential challenges and rewards of applying this combined imaging technique at high speeds are discussed.
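
    The PIV principle can be illustrated in one dimension (a hypothetical sketch; real PIV cross-correlates 2-D interrogation windows, typically via FFT, and the magnification value here is assumed):

```python
# Rough 1-D illustration of the PIV principle: cross-correlate two
# interrogation windows taken dt apart and convert the peak shift into a
# velocity. Real PIV uses 2-D windows and FFT-based correlation; the
# magnification (m_per_px) below is an assumed value.

def xcorr_shift(a, b):
    """Shift of b relative to a that maximizes their cross-correlation."""
    n = len(a)
    best_s, best_c = 0, float("-inf")
    for s in range(-n + 1, n):
        c = sum(a[i] * b[i + s] for i in range(n) if 0 <= i + s < n)
        if c > best_c:
            best_s, best_c = s, c
    return best_s

dt = 1 / 3000.0                      # 3 kHz repetition rate (from abstract)
m_per_px = 1e-4                      # assumed magnification, m per pixel
a = [0, 0, 5, 9, 5, 0, 0, 0]         # particle image at time t
b = [0, 0, 0, 0, 5, 9, 5, 0]         # same particles at t + dt
shift = xcorr_shift(a, b)            # peak displacement in pixels
velocity = shift * m_per_px / dt     # m/s
print(velocity)
```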

  17. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP. Telemedicine is a perfect application for this camera system, exploiting both of its advantages: portability as well as digital imaging.

  18. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, and have inherently better time resolution, higher accuracy, and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which would otherwise hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40° × 20° field-of-view. The whole system is very rugged and compact, a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each.
We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in a real traffic scenario. The achieved long range (up to 45 m), high dynamic range (118 dB), high speed (over 200 fps), and high precision (better than 90 cm at 45 m) of the 3D depth measurements highlight the excellent performance of this CMOS SPAD camera for automotive applications.
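
    The iTOF principle mentioned above can be sketched as follows (an illustrative four-sample phase recovery; the 25 MHz modulation figure comes from the record, while the correlation-sample values are invented):

```python
# Sketch of indirect time-of-flight (iTOF) depth recovery from four
# phase-shifted correlation samples; the 25 MHz modulation frequency is
# from the abstract, the sample values are invented for illustration.
import math

C = 299_792_458.0      # speed of light, m/s
f_mod = 25e6           # modulation frequency, Hz

def itof_depth(c0, c90, c180, c270):
    """Depth from four correlation samples at 0/90/180/270 degrees."""
    phase = math.atan2(c90 - c270, c0 - c180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# A quarter-cycle phase shift maps to C / (8 * f_mod) ~ 1.5 m:
print(round(itof_depth(0.5, 1.0, 0.5, 0.0), 3))   # 1.499

# Unambiguous range at 25 MHz is C / (2 * f_mod) ~ 6 m:
print(round(C / (2 * f_mod), 3))                  # 5.996
```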

  19. Effect of twist on transverse impact response of ballistic fiber yarns

    DOE PAGES

    Song, Bo; Lu, Wei -Yang

    2015-06-15

    A Hopkinson bar was employed to conduct transverse impact testing of twisted Kevlar KM2 fiber yarns at the same impact speed. The speed of Euler transverse wave generated by the impact was measured utilizing a high speed digital camera. The study included fiber yarns twisted by different amounts. The Euler transverse wave speed was observed to increase with increasing amount of twist of the fiber yarn, within the range of this investigation. As a result, the higher transverse wave speeds in the more twisted fiber yarns indicate better ballistic performance in soft body armors for personal protection.
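
    Extracting the Euler transverse-wave speed from high-speed footage reduces to fitting the tracked front position against frame time; a sketch with invented positions and frame rate (not the paper's data):

```python
# Generic sketch of how a transverse-wave speed is extracted from
# high-speed footage: track the wave-front position in successive frames
# and fit position against time. The positions and frame rate below are
# invented for illustration, not the paper's data.

def wave_speed(front_mm, fps):
    """Least-squares slope of front position vs. time -> speed in m/s."""
    n = len(front_mm)
    t = [i / fps for i in range(n)]
    tbar = sum(t) / n
    xbar = sum(front_mm) / n
    num = sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, front_mm))
    den = sum((ti - tbar) ** 2 for ti in t)
    return (num / den) / 1000.0        # mm/s -> m/s

positions = [0.0, 6.0, 12.0, 18.0]     # front position (mm) per frame
print(wave_speed(positions, 100_000))  # 600 m/s at 100 kfps
```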

  20. Direct measurement of erythrocyte deformability in diabetes mellitus with a transparent microchannel capillary model and high-speed video camera system.

    PubMed

    Tsukada, K; Sekizuka, E; Oshio, C; Minamitani, H

    2001-05-01

    To measure erythrocyte deformability in vitro, we made transparent microchannels on a crystal substrate as a capillary model. We observed axisymmetrically deformed erythrocytes and defined a deformation index directly from individual flowing erythrocytes. By appropriate choice of channel width and erythrocyte velocity, we could observe erythrocytes deforming to a parachute-like shape similar to that occurring in capillaries. The flowing erythrocytes magnified 200-fold through microscopy were recorded with an image-intensified high-speed video camera system. The sensitivity of deformability measurement was confirmed by comparing the deformation index in healthy controls with erythrocytes whose membranes were hardened by glutaraldehyde. We confirmed that the crystal microchannel system is a valuable tool for erythrocyte deformability measurement. Microangiopathy is a characteristic complication of diabetes mellitus. A decrease in erythrocyte deformability may be part of the cause of this complication. In order to identify the difference in erythrocyte deformability between control and diabetic erythrocytes, we measured erythrocyte deformability using transparent crystal microchannels and a high-speed video camera system. The deformability of diabetic erythrocytes was indeed measurably lower than that of erythrocytes in healthy controls. This result suggests that impaired deformability in diabetic erythrocytes can cause altered viscosity and increase the shear stress on the microvessel wall. Copyright 2001 Academic Press.
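
    A deformation index defined from individual flowing erythrocytes is typically a shape ratio of the deformed cell; the form below is a plausible sketch, not necessarily the authors' exact definition:

```python
# Plausible form of a "deformation index" for a parachute-shaped cell,
# computed from its length L and width W. This is an illustrative
# definition, not necessarily the one used by the authors.

def deformation_index(length_um, width_um):
    """0 for a circular outline, approaching 1 for a highly elongated cell."""
    return (length_um - width_um) / (length_um + width_um)

print(deformation_index(10.0, 6.0))   # 0.25
print(deformation_index(8.0, 8.0))    # 0.0 (undeformed, circular outline)
```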

  1. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role that the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  2. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  3. TIME-SEQUENCED X-RAY OBSERVATION OF A THERMAL EXPLOSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tringe, J. W.; Molitoris, J. D.; Kercher, J. R.

    The evolution of a thermally-initiated explosion is studied using a multiple-image x-ray system. HMX-based PBX 9501 is used in this work, enabling direct comparison to recently-published data obtained with proton radiography [1]. Multiple x-ray images of the explosion are obtained with an image spacing of ten microseconds or more. The explosion is simultaneously characterized with a high-speed camera using an interframe spacing of 11 µs. X-ray and camera images were both initiated passively by signals from an embedded thermocouple array, as opposed to being actively triggered by a laser pulse or other external source. X-ray images show an accelerating reacting front within the explosive, and also show unreacted explosive at the time the containment vessel bursts. High-speed camera images show debris ejected from the vessel expanding at 800-2100 m/s in the first tens of µs after the container wall failure. The effective center of the initiation volume is about 6 mm from the geometric center of the explosive.

  4. Darwin's bee-trap: The kinetics of Catasetum, a new world orchid.

    PubMed

    Nicholson, Charles C; Bales, James W; Palmer-Fortune, Joyce E; Nicholson, Robert G

    2008-01-01

    The orchid genus Catasetum employs a hair-trigger-activated pollen release mechanism, which forcibly attaches pollen sacs onto foraging insects in the New World tropics. This remarkable adaptation was studied extensively by Charles Darwin, who termed this rapid response "sensitiveness." Using high-speed video cameras with a frame rate of 1000 fps, this rapid release was filmed, and velocity, speed, acceleration, force, and kinetic energy were computed from the subsequent footage.
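
    Recovering kinematics from 1000 fps footage reduces to finite differences of tracked positions (a generic sketch; the position series and the pollinarium mass below are invented, not the paper's measurements):

```python
# Finite-difference kinematics from 1000 fps footage (a generic sketch;
# the position series and the mass below are invented values, not the
# paper's measurements).

fps = 1000.0
dt = 1.0 / fps

def central_diff(series, dt):
    """Central finite difference of a uniformly sampled series."""
    return [(series[i + 1] - series[i - 1]) / (2 * dt)
            for i in range(1, len(series) - 1)]

x = [0.0, 0.001, 0.004, 0.009, 0.016]   # tracked position (m) per frame
v = central_diff(x, dt)                 # velocities, ~[2, 4, 6] m/s
a = central_diff(v, dt)                 # acceleration, ~[2000] m/s^2
mass = 2e-5                             # assumed mass (kg)
ke = 0.5 * mass * v[-1] ** 2            # kinetic energy (J)
print(v, a, ke)
```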

  5. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

    2011-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
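
    For flavor, a drastically simplified Monte-Carlo sketch of the photon-tracking-and-readout idea (a toy light-sharing model with three PMTs and Anger-logic centroiding; this is not SCOUT's actual physics model or geometry):

```python
# Drastically simplified, hypothetical Monte-Carlo sketch in the same
# spirit as a scintillation-camera readout simulator: each photon is
# assigned to one of three PMTs by a toy light-sharing model, and the
# event position is recovered with Anger-logic centroiding. This is NOT
# SCOUT's actual model or geometry.
import random

random.seed(1)
PMT_X = [-1.0, 0.0, 1.0]    # PMT positions, arbitrary units

def expected_light(event_x, pmt_x, width=0.8):
    """Toy light-sharing model: Lorentzian fall-off with distance."""
    return 1.0 / (1.0 + ((event_x - pmt_x) / width) ** 2)

def simulate_event(event_x, n_photons=2000):
    """Track each photon to a PMT, then Anger-centroid the counts."""
    weights = [expected_light(event_x, x) for x in PMT_X]
    total_w = sum(weights)
    counts = [0] * len(PMT_X)
    for _ in range(n_photons):
        r = random.random() * total_w   # pick a PMT with probability ~weight
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                counts[i] += 1
                break
    return sum(c * x for c, x in zip(counts, PMT_X)) / sum(counts)

est = simulate_event(0.3)
print(round(est, 2))   # centroid estimate, biased toward the center
```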

  6. Compact streak camera for the shock study of solids by using the high-pressure gas gun

    NASA Astrophysics Data System (ADS)

    Nagayama, Kunihito; Mori, Yasuhito

    1993-01-01

    For the precise observation of high-speed impact phenomena, a compact high-speed streak camera recording system has been developed. The system consists of a high-pressure gas gun, a streak camera, and a long-pulse dye laser. The gas gun installed in our laboratory has a muzzle 40 mm in diameter and a launch tube 2 m long. Projectile velocity is measured by the laser-beam-cut method. The gun is capable of accelerating a 27 g projectile up to 500 m/s if helium gas is used as the driver. The system has been designed on the principle that precise optical measurement methods developed in other areas of research can be applied to the gun study. The streak camera is 300 mm in diameter, with a rectangular rotating mirror driven by an air-turbine spindle. The attainable streak velocity is 3 mm/µs. The camera is rather small, aiming at portability and economy; its streak velocity is therefore slower than that of the fastest cameras, but it is possible to use low-sensitivity, high-resolution film as the recording medium. We have also constructed a pulsed dye laser of 25-30 µs in duration. The laser can be used as the light source for observation. The advantages of using the laser are multi-fold, i.e., good directivity, nearly single-frequency output, and so on. The feasibility of the system has been demonstrated by performing several experiments.
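
    The quoted streak velocity is consistent with the optical-lever relation for a rotating mirror, v = 2ωR (the reflected beam sweeps at twice the mirror's angular rate); the spin rates and the 150 mm throw below are assumptions for illustration, not the paper's values:

```python
# The writing speed of a rotating-mirror streak camera follows from the
# optical lever: the reflected beam sweeps at twice the mirror's angular
# velocity, so v = 2 * omega * R over a throw distance R. The rpm and
# throw values below are illustrative assumptions, not the paper's data.
import math

def streak_velocity_mm_per_us(rpm, throw_mm):
    omega = rpm * 2 * math.pi / 60.0      # mirror angular velocity, rad/s
    return 2 * omega * throw_mm / 1e6     # writing speed, mm/us

# What spin rate reaches the quoted 3 mm/us at an assumed 150 mm throw?
for rpm in (10_000, 50_000, 100_000):
    print(rpm, round(streak_velocity_mm_per_us(rpm, 150), 2))
```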

  7. Minimum Requirements for Taxicab Security Cameras.

    PubMed

    Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene

    2014-07-01

    The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras to support effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs, under various lighting and cab-seat conditions. Thirteen volunteer evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case photographic image-quality thresholds were suggested: XGA-format resolution, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective security cameras, and help camera manufacturers improve facial-identification capability.

  8. Plastification of polymers in twin-screw extruders: New visualization technique using high-speed imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knieper, A., E-mail: Alexander.Knieper@lbf.fraunhofer.de, E-mail: Christian.Beinert@lbf.fraunhofer.de; Beinert, C., E-mail: Alexander.Knieper@lbf.fraunhofer.de, E-mail: Christian.Beinert@lbf.fraunhofer.de

    The initial melting of the first granules through plastic energy dissipation (PED) at the beginning of the melting zone in a co-rotating twin-screw extruder is visualized in this work. The visualization was achieved with a high-speed camera viewing the cross section of the melting zone. The parameters screw speed, granule temperature, temperature profile, type of polymer, and back pressure were examined. It was shown that the screw speed and the temperature profile have a significant influence on the rate of initial melting.

  9. Apollo 8 Mission image

    NASA Image and Video Library

    1968-12-21

    Apollo 8, Moon, Latitude 15 degrees South, Longitude 170 degrees West. Camera Tilt Mode: High Oblique. Direction: Southeast. Sun Angle: 17 degrees. Original Film Magazine was labeled E. Camera Data: 70mm Hasselblad; F-Stop: F-5.6; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White, ASA 40. Other Photographic Coverage: Lunar Orbiter 1 (LO I) S-3. Flight Date: December 21-27, 1968.

  10. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array, which is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to give a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
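    The backward-tracing idea can be sketched in a few lines. The code below is a deliberately simplified, hypothetical linearisation (single object plane, toy radiance function, made-up geometry), not the paper's raytracing package: for one pixel, rays are sampled across the entrance pupil, intersected with an object plane, and their radiances averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

def scene_radiance(x, y):
    # Toy scene: radiance varying smoothly across a plane (stand-in for objects).
    return 0.5 + 0.5 * np.cos(x) * np.sin(y)

def render_pixel(chief_xy, pupil_radius, z_plane, n_rays=64):
    """Backward-trace n_rays for one pixel: sample points on the entrance
    pupil disc, intersect each ray with the object plane at depth z_plane,
    and average the radiance hit -- one pixel of the simulated image."""
    # Uniform random samples on the pupil disc
    r = pupil_radius * np.sqrt(rng.random(n_rays))
    t = 2 * np.pi * rng.random(n_rays)
    px, py = r * np.cos(t), r * np.sin(t)
    # Hypothetical linearised geometry: offset on the plane grows with depth
    x = chief_xy[0] + px * z_plane
    y = chief_xy[1] + py * z_plane
    return scene_radiance(x, y).mean()

# Simulate a small 8x8 patch of the sensor
img = np.array([[render_pixel((i * 0.1, j * 0.1), 0.05, 2.0)
                 for j in range(8)] for i in range(8)])
```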

  11. On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro

    2013-07-01

    This work addresses the critical aspects of cost reduction in a Tomo PIV setup and the bias errors introduced into velocity measurements by the coherent motion of ghost particles. The proposed solution uses two independent imaging systems, each composed of three (or more) low-speed single-frame cameras, which can be up to ten times cheaper than double-shutter cameras of the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, unlike standard tomographic PIV, the ghost-particle distributions of the two exposures are uncorrelated, since their spatial distribution depends on camera orientation. For this reason, the proposed solution promises more accurate results, without the bias effect of coherent ghost-particle motion. Guidelines for the implementation and application of the method are proposed. Its performance is assessed with a parametric study on synthetic experiments. The proposed low-cost system produces much lower modulation than an equivalent three-camera system. Furthermore, the potential accuracy improvement when using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than with the standard implementation of tomographic PIV.
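    The cross-correlation interrogation step mentioned above can be illustrated with a minimal sketch. This is our own toy version, correlating two full volumes via FFT rather than the windowed interrogation of a real Tomo PIV code, and the names and sizes are ours:

```python
import numpy as np

def displacement_3d(vol1, vol2):
    """Peak of the circular 3D cross-correlation between two reconstructed
    particle volumes, interpreted as a signed voxel displacement."""
    f = np.fft.fftn(vol1) * np.conj(np.fft.fftn(vol2))
    corr = np.real(np.fft.ifftn(f))
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Map peak indices above N/2 to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, vol1.shape))

# Synthetic check: the second exposure is the first shifted by (1, 2, 3) voxels
rng = np.random.default_rng(1)
vol2 = rng.random((32, 32, 32))
vol1 = np.roll(vol2, shift=(1, 2, 3), axis=(0, 1, 2))
d = displacement_3d(vol1, vol2)  # (1, 2, 3)
```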

  12. Measuring SO2 ship emissions with an ultraviolet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2014-05-01

    Over the last few years, fast-sampling ultraviolet (UV) imaging cameras have been developed for measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2, with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases, SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously with the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases by the presence of particulates in some ship emissions and by the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) with a single camera. Despite the ease of use and the ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system can only be used under rather ideal circumstances, and that the technology currently needs further development to serve as a method for monitoring ship emissions for regulatory purposes. A dual-camera system, or a single dual-filter camera, is required in order to properly correct for the effects of particulates in ship plumes.
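    The emission-rate retrieval described above reduces to a simple integral: the rate is the plume speed times the SO2 column density integrated across a transect perpendicular to the plume. The sketch below is our own illustration with made-up numbers chosen to fall in the record's stated 0.01-0.1 kg/s range; it assumes the column densities and plume speed are already retrieved.

```python
import numpy as np

def emission_rate_kg_s(column_density_kg_m2, dx_m, plume_speed_m_s):
    """Q = v * integral of SO2 column density across a plume transect,
    using the trapezoidal rule over pixels spaced dx_m apart."""
    cd = np.asarray(column_density_kg_m2)
    integral_kg_m = np.sum(0.5 * (cd[1:] + cd[:-1])) * dx_m
    return plume_speed_m_s * integral_kg_m

# Hypothetical Gaussian plume cross-section, 5 m/s plume speed
x = np.linspace(-50.0, 50.0, 101)        # metres across the plume
cd = 2e-4 * np.exp(-(x / 15.0) ** 2)     # column densities, kg/m^2
q = emission_rate_kg_s(cd, x[1] - x[0], 5.0)  # ~0.027 kg/s
```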

  13. High Speed Photographic Analysis Of Railgun Plasmas

    NASA Astrophysics Data System (ADS)

    Macintyre, I. B.

    1985-02-01

    Various experiments are underway at the Materials Research Laboratories, Australian Department of Defence, to develop a theory for the behaviour and propulsion action of plasmas in railguns. Optical recording and imaging devices, with their low vulnerability to the magnetic and electric fields present in the vicinity of electromagnetic launchers, have proven useful as diagnostic tools. This paper describes photoinstrumentation systems developed to provide visual qualitative assessment of the behaviour of plasma travelling along the bore of railgun launchers. In addition, a quantitative system is incorporated, providing continuous data (on a microsecond time scale) on (a) the length of the plasma during flight along the launcher bore, (b) the velocity of the plasma, (c) the distribution of the plasma with respect to time after creation, and (d) the plasma intensity profile as it travels along the launcher bore. The evolution of the techniques used is discussed. Two systems were employed. The first utilized a modified high-speed streak camera to record the light emitted from the plasma through specially prepared fibre-optic cables; the fibre faces external to the bore were then imaged onto moving film. The technique involved inserting fibres through the launcher body to enable the plasma to be viewed at discrete positions as it travelled along the bore. Camera configuration, fibre-optic preparation, and experimental results are outlined. The second system utilized high-speed streak and framing photography in conjunction with accurate sensitometric control procedures on the recording film. The two cameras recorded the plasma travelling along the bore of a specially designed transparent launcher. The streak camera, fitted with a precise slit size, recorded a streak image of the upper brightness range of the plasma as it travelled along the launcher's bore. 
The framing camera recorded an overall view of the launcher and the plasma path, to the maximum possible, governed by the film's ability to reproduce the plasma's brightness range. The instrumentation configuration, calibration, and film measurement using microdensitometer scanning techniques to evaluate inbore plasma behaviour, are also presented.

  14. Colour-based Object Detection and Tracking for Autonomous Quadrotor UAV

    NASA Astrophysics Data System (ADS)

    Kadouf, Hani Hunud A.; Mohd Mustafah, Yasir

    2013-12-01

    With robotics becoming a fundamental aspect of modern society, further research and consequent application are ever increasing. Aerial robotics, in particular, covers applications such as surveillance in hostile military zones or search and rescue operations in disaster-stricken areas, where ground navigation is impossible. The increased visual capacity of UAVs (Unmanned Air Vehicles) is also applicable in the support of ground vehicles, to provide supplies for emergency assistance, for scouting purposes, or to extend communication beyond insurmountable land or water barriers. The quadrotor, a small UAV, has its lift generated by four rotors and can be controlled by altering the speeds of its motors relative to each other. The four rotors allow for a higher payload than single- or dual-rotor UAVs, which makes it safer and more suitable for carrying camera and transmitter equipment. An onboard camera captures images of the quadrotor's First Person View (FPV) in flight and transmits them wirelessly, in real time, to a base station. The aim of this research is to develop an autonomous quadrotor platform capable of transmitting real-time video signals to a base station for processing, with the result of the image analysis used as feedback in the quadrotor's positioning control. To validate the system, the algorithm should enable the quadrotor to identify, track, or hover above stationary or moving objects.

  15. ARINC 818 adds capabilities for high-speed sensors and systems

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Grunwald, Paul

    2014-06-01

    ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A, and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, due to its high bandwidth and high reliability. The ARINC 818 specification, initially released in 2006, has recently undergone a major update that enhances its applicability as a high-speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions include: video switching, stereo and 3-D provisions, color sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates to 32 Gbps, synchronization signals, options for high-speed coax interfaces, and optical interface details. The additions are especially appealing for high-bandwidth, multi-sensor systems that face throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber-optic high-speed physical layers, and allows time-multiplexing of multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.

  16. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    PubMed Central

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-01-01

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption. PMID:26978365

  17. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles.

    PubMed

    Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger

    2016-03-11

    Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.

  18. Do Speed Cameras Produce Net Benefits? Evidence from British Columbia, Canada

    ERIC Educational Resources Information Center

    Chen, Greg; Warburton, Rebecca N.

    2006-01-01

    Traffic collisions kill about 43,000 Americans a year. Worldwide, road traffic injuries are the leading cause of death by injury and the ninth leading cause of all deaths. Photo Radar speed enforcement has been implemented in the United States and many other industrialized countries, yet its cost-effectiveness from a societal viewpoint, taking all…

  19. Lunar Roving Vehicle gets speed workout by Astronaut John Young

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Lunar Roving Vehicle (LRV) gets a speed workout by Astronaut John W. Young in the 'Grand Prix' run during the third Apollo 16 extravehicular activity (EVA-3) at the Descartes landing site. This view is a frame from motion picture film exposed by a 16mm Maurer camera held by Astronaut Charles M. Duke Jr.

  20. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    ERIC Educational Resources Information Center

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  1. Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.; Sertel, Kubilay

    2015-07-01

    We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.

  2. Strategies for Pre-Emptive Mid-Air Collision Avoidance in Budgerigars

    PubMed Central

    Schiffner, Ingo; Srinivasan, Mandyam V.

    2016-01-01

    We have investigated how birds avoid mid-air collisions during head-on encounters. Trajectories of birds flying towards each other in a tunnel were recorded using high speed video cameras. Analysis and modelling of the data suggest two simple strategies for collision avoidance: (a) each bird veers to its right and (b) each bird changes its altitude relative to the other bird according to a preset preference. Both strategies suggest simple rules by which collisions can be avoided in head-on encounters by two agents, be they animals or machines. The findings are potentially applicable to the design of guidance algorithms for automated collision avoidance on aircraft. PMID:27680488
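    The two rules can be encoded almost directly. The function below is our own toy formalisation of the strategies stated in the abstract (veer right; adjust altitude by a preset preference), not code from the study; all names are ours.

```python
# Rule (a): each agent always veers to its right.
# Rule (b): each agent changes altitude relative to the other according to a
# preset preference ("high" fliers climb above the other, "low" fliers drop below).
def avoidance_command(own_altitude, other_altitude, preference):
    lateral = "veer_right"
    if preference == "high":
        vertical = "climb" if own_altitude <= other_altitude else "hold"
    else:
        vertical = "descend" if own_altitude >= other_altitude else "hold"
    return lateral, vertical

# A "high"-preference agent meeting a higher-flying agent climbs while veering right
cmd = avoidance_command(10.0, 12.0, "high")  # ("veer_right", "climb")
```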

  3. Traffic Monitor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Mestech's X-15 "Eye in the Sky," a traffic monitoring system, incorporates NASA imaging and robotic vision technology. A camera or "sensor box" is mounted in a housing. The sensor detects vehicles approaching an intersection and sends the information to a computer, which controls the traffic light according to the traffic rate. Jet Propulsion Laboratory technical support packages aided in the company's development of the system. The X-15's "smart highway" can also be used to count vehicles on a highway and compute the number in each lane and their speeds, important information for freeway control engineers. Additional applications are in airport and railroad operations. The system is intended to replace loop-type traffic detectors.

  4. Data Acquisition System for Silicon Ultra Fast Cameras for Electron and Gamma Sources in Medical Applications (sucima Imager)

    NASA Astrophysics Data System (ADS)

    Czermak, A.; Zalewska, A.; Dulny, B.; Sowicki, B.; Jastrząb, M.; Nowak, L.

    2004-07-01

    The need for real-time monitoring of hadrontherapy beam intensity and profile, as well as the requirements for fast dosimetry using Monolithic Active Pixel Sensors (MAPS), led the SUCIMA collaboration to design a unique data acquisition system (DAQ SUCIMA Imager). The DAQ system has been developed on one of the most advanced XILINX Field Programmable Gate Array chips, the VERTEX II. A dedicated multifunctional electronic board for capturing the detector's analogue signals, processing them digitally in parallel, compressing the resulting data, and transmitting it through a high-speed USB 2.0 port has been prototyped and tested.

  5. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
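    For the time-of-flight cameras mentioned above, depth follows from the round-trip travel time of light. The snippet below is a generic illustration of that principle (our own, not the author's design); it also hints at why sub-100-micron resolution is demanding, since it implies resolving sub-picosecond time differences.

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def tof_depth_m(round_trip_time_s: float) -> float:
    """Depth is half the round-trip path length travelled at the speed of light."""
    return 0.5 * C_M_PER_S * round_trip_time_s

# A ~6.67 ns round trip corresponds to roughly 1 m of depth
depth = tof_depth_m(6.67e-9)
```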

  6. A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.

    1989-01-01

    Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X Ray Burst Spectrometer, with x ray instruments which will be available on the Gamma Ray Observatory and eventually with the Gamma Ray Imaging Device (GRID), and with the High Resolution Gamma-Ray and Hard X Ray Spectrometer (HIREGS) which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway as are analyses of the campaign data.

  7. Visualization of Projectile Flying at High Speed in Dusty Atmosphere

    NASA Astrophysics Data System (ADS)

    Masaki, Chihiro; Watanabe, Yasumasa; Suzuki, Kojiro

    2017-10-01

    Considering a spacecraft that encounters a particle-laden environment, such as dust particles flying up over the regolith from the jet of a landing thruster, high-speed flight of a projectile in such an environment was experimentally simulated using a ballistic range. In high-speed collisions of particles with the projectile surface, the particles may be reflected and crack into smaller pieces, while the projectile surface is damaged by the collisions. To obtain the fundamental characteristics of these complicated phenomena, a projectile was launched at velocities up to 400 m/s and the collective behaviour of particles around the projectile was observed with a high-speed camera. To eliminate the effect of gas-particle interaction and to focus only on the interaction between the particles and the projectile's surface, the test chamber was evacuated to 30 Pa. Particles of about 400 μm diameter were scattered to form a sheet of particles in the test chamber by means of a two-dimensional funnel with a narrow slit. The projectile was launched into the particle sheet in the tangential direction, and the high-speed camera captured both projectile and particle motions. From the footage, the interaction between the projectile and the particle sheet was clarified.

  8. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  9. Measurement of instantaneous rotational speed using double-sine-varying-density fringe pattern

    NASA Astrophysics Data System (ADS)

    Zhong, Jianfeng; Zhong, Shuncong; Zhang, Qiukun; Peng, Zhike

    2018-03-01

    Fast and accurate rotational speed measurement is required both for condition monitoring and fault diagnosis of rotating machinery. A vision- and fringe-pattern-based rotational speed measurement system is proposed to measure the instantaneous rotational speed (IRS) with high accuracy and reliability. A special double-sine-varying-density fringe pattern (DSVD-FP) was designed, pasted completely around the shaft surface, and used as the primary angular sensor. The rotational angle can be correctly obtained from the left and right fringe period densities (FPDs) of the DSVD-FP image sequence recorded by a high-speed camera. The instantaneous angular speed (IAS) between two adjacent frames can be calculated from the real-time rotational angle curves; thus, the IRS is also obtained accurately and efficiently. Both the measurement principle and the system design of the novel method are presented. The factors influencing the sensing characteristics and measurement accuracy of the system, including the spectral centrobaric correction method (SCCM) for the FPD calculation, the noise sources introduced by the image sensor, the exposure time, and the vibration of the shaft, were investigated through simulations and experiments. The sampling rate of the high-speed camera can be up to 5000 Hz; thus, the measurement is very fast, and changes in rotational speed are sensed within 0.2 ms. The experimental results for different IRS measurements and for characterizing the response of a servo motor demonstrate the high accuracy and fast measurement of the proposed technique, making it attractive for condition monitoring and fault diagnosis of rotating machinery.
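    The angle-to-speed step described above can be sketched directly: the speed between adjacent frames is the angle increment divided by the frame interval. The code below is our own minimal illustration; the decoded angles are assumed given, since the FPD decoding itself is the paper's contribution, and the unwrapping step and names are ours.

```python
import numpy as np

def instantaneous_speed_rpm(angles_rad, fps):
    """Instantaneous rotational speed between adjacent frames, in rev/min,
    from a sequence of decoded shaft angles sampled at fps frames per second."""
    theta = np.unwrap(angles_rad)        # remove 2*pi jumps between frames
    omega = np.diff(theta) * fps         # rad/s between adjacent frames
    return omega * 60.0 / (2.0 * np.pi)  # convert rad/s to rev/min

fps = 5000.0                             # camera sampling rate from the record
t = np.arange(0.0, 0.01, 1.0 / fps)
angles = 2 * np.pi * 25.0 * t            # shaft spinning steadily at 25 rev/s
rpm = instantaneous_speed_rpm(angles, fps)  # ~1500 rpm throughout
```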

  10. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses are more prone to error; a trade-off between accuracy and speed is therefore required. This paper aims to satisfy both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that automatically tracks a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed to estimate the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which combines a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour for the tracked target; the target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras, each with an image averaging filter, cooperate to obtain clear and steady images. This paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. 
The results obtained show the applicability of the proposed approach for tracking the moving robot, with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
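The circle-detection step at the heart of this record can be illustrated with a minimal Circular Hough Transform. The sketch below is not the paper's enhanced CHT or its CRCHT search window: it only votes for circle centres of a single known radius on a synthetic edge image, using plain NumPy.

```python
import numpy as np

def hough_circle(edges, radius):
    """Vote for circle centres of a fixed radius (minimal CHT sketch)."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for y, x in zip(ys, xs):
        # Each edge pixel votes for all centres one radius away from it
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)  # best-voted (row, col)

# Synthetic edge image: a circle of radius 10 centred at (24, 30)
edges = np.zeros((50, 60), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(24 + 10 * np.sin(t)).astype(int),
      np.round(30 + 10 * np.cos(t)).astype(int)] = True

print(hough_circle(edges, 10))  # centre estimate, close to (24, 30)
```

Restricting `edges` to a predicted region of interest before voting, as the paper's CRCHT window does, is what buys the reported speedup.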

  11. Application of high-speed photography to the study of high-strain-rate materials testing

    NASA Astrophysics Data System (ADS)

    Ruiz, D.; Harding, John; Noble, J. P.; Hillsdon, Graham K.

    1991-04-01

There is a growing interest in material behaviour at strain rates greater than 10⁴ s⁻¹, for instance in the design of aero-engine turbine blades. It is therefore necessary to develop material testing techniques that give well-defined information on mechanical behaviour in this very high strain-rate regime. A number of techniques are available, including the expanding ring test [1], a miniaturised compression Hopkinson bar technique using direct impact, and the double-notch shear test [3], which has been described by Nicholas [4] as "one of the most promising for future studies in dynamic plasticity". However, although it is believed to be a good test for determining the flow stress at shear strain rates of 10⁴ s⁻¹ and above, the design of the specimen used makes an accurate determination of strain extremely difficult, while in the later stages of the test the deformation mode involves rotation as well as shear. If this technique is to be used, therefore, it is necessary to examine in detail the progressive deformation and state of stress within the specimen during the impact process. An attempt can then be made to assess how far the data obtained are a reliable measure of the specimen material response, and the test can be calibrated. An extensive three-stage analysis has been undertaken. In the first stage, reported in a previous paper [5], the initial elastic behaviour was studied. Dynamic photoelastic experiments were used to support linear elastic numerical results derived by the finite element method. Good qualitative agreement was obtained between the photoelastic experiment and the numerical model, and the principal source of error in the elastic region of the double-notch shear test was identified as the assumption that all deformation of the specimen is concentrated in the two shear zones. For the epoxy (photoelastic) specimen a calibration factor of 5.3 was determined. 
This factor represents the ratio between the defined (nominal) gauge length and the effective gauge length. The second stage of the analysis of the double-notch shear (DNS) specimen is described in this paper. This consists of the use of ultra-high-speed photography to provide information on the plastic deformation behaviour of the specimen. Two different high-speed cine cameras were used for this work, a Hadland "Imacon" 792 electronic image converter camera and a Cordin 377 rotating mirror-drum optical camera. The implementation of the two cameras and the photographic results are briefly compared and contrasted here. Stage three of this work consists of an advanced numerical analysis of the elasto-plastic, strain-rate-dependent behaviour of the DNS specimen. The principal intention of the authors was to use the physical data collected from the high-speed photographs for correlation with this work. Full details of the numerical work are presented elsewhere [6], but some salient results will be given here for completeness.

  12. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography, and RANS (RIKEN) and RADEN (J-PARC/JAEA) are, respectively, small and large accelerator-driven pulsed-neutron facilities for such imaging. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used together with a super-resolution technique, and the spatial resolution was found to be further improved.
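The center-of-gravity processing mentioned above reduces each detected photon spot to a sub-pixel position, which is how it beats the native pixel pitch. A minimal sketch on an invented symmetric 5×5 spot:

```python
import numpy as np

def centre_of_gravity(patch):
    """Sub-pixel spot position as the intensity-weighted mean of pixel coords."""
    patch = patch.astype(float)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return (ys * patch).sum() / total, (xs * patch).sum() / total

# A toy 5x5 scintillation spot (values invented), symmetric about pixel (2, 2)
spot = np.array([[0, 1,  2, 1, 0],
                 [1, 4,  8, 4, 1],
                 [2, 8, 16, 8, 2],
                 [1, 4,  8, 4, 1],
                 [0, 1,  2, 1, 0]])
cy, cx = centre_of_gravity(spot)
print(cy, cx)  # symmetric spot -> (2.0, 2.0)
```

In a real detector the same calculation runs over every thresholded spot in each frame, and the accumulated sub-pixel positions form the high-resolution image.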

  13. A reference Pelton turbine - High speed visualization in the rotating frame

    NASA Astrophysics Data System (ADS)

    Solemslie, Bjørn W.; Dahlhaug, Ole G.

    2016-11-01

To enable a detailed study of the flow mechanisms affecting the flow within the reference Pelton runner designed at the Waterpower Laboratory (NTNU), a flow visualization system has been developed. The system enables high-speed filming of the hydraulic surface of a single bucket in the rotating frame of reference. It is built with an angular borescope adapter entering the turbine along the rotational axis and a borescope embedded within a bucket. A stationary high-speed camera located outside the turbine housing is connected to the optical arrangement by a non-contact coupling. The viewpoint of the system covers the whole hydraulic surface of one half of a bucket. The system has been designed to minimize vibrations and to ensure that the vibrations felt by the borescope are the same as those affecting the camera. The preliminary results captured with the system are promising and enable a detailed study of the flow within the turbine.

  14. High-speed motion picture camera experiments of cavitation in dynamically loaded journal bearings

    NASA Technical Reports Server (NTRS)

    Jacobson, B. O.; Hamrock, B. J.

    1982-01-01

    A high-speed camera was used to investigate cavitation in dynamically loaded journal bearings. The length-diameter ratio of the bearing, the speeds of the shaft and bearing, the surface material of the shaft, and the static and dynamic eccentricity of the bearing were varied. The results reveal not only the appearance of gas cavitation, but also the development of previously unsuspected vapor cavitation. It was found that gas cavitation increases with time until, after many hundreds of pressure cycles, there is a constant amount of gas kept in the cavitation zone of the bearing. The gas can have pressures of many times the atmospheric pressure. Vapor cavitation bubbles, on the other hand, collapse at pressures lower than the atmospheric pressure and cannot be transported through a high-pressure zone, nor does the amount of vapor cavitation in a bearing increase with time. Analysis is given to support the experimental findings for both gas and vapor cavitation.

  15. Real-time color measurement using active illuminant

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji; Horiuchi, Takahiko; Yoshimura, Akihiko

    2010-01-01

This paper proposes a method for real-time color measurement using an active illuminant. A synchronous measurement system is constructed by combining a high-speed active spectral light source and a high-speed monochrome camera. The light source is a programmable spectral source capable of emitting an arbitrary spectrum at high speed. An essential advantage of this system is that it captures spectral images at high frame rates without using filters. The new method of real-time colorimetry differs from traditional methods based on colorimeters or spectrometers: we project the color-matching functions onto an object surface as spectral illuminants, so the CIE-XYZ tristimulus values can be obtained directly from the camera outputs at every point on the surface. We describe the principle of our colorimetric technique based on projection of the color-matching functions and the procedure for realizing a real-time measurement system for a moving object. In an experiment, we examine the performance of real-time color measurement for a static object and a moving object.
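The key idea, reading XYZ directly off a monochrome camera by using the color-matching functions themselves as illuminants, can be checked numerically. The Gaussian "color-matching functions" and reflectance below are toy stand-ins, not the CIE 1931 tables or the authors' hardware:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)  # wavelength grid, nm

def gauss(mu, s):
    return np.exp(-0.5 * ((wl - mu) / s) ** 2)

# Toy color-matching functions (Gaussian stand-ins, NOT the CIE tables)
cmf = np.stack([gauss(600, 40) + 0.35 * gauss(450, 20),   # x-bar (toy)
                gauss(550, 40),                           # y-bar (toy)
                gauss(450, 25)])                          # z-bar (toy)

reflectance = 0.2 + 0.6 * gauss(520, 60)  # toy surface reflectance

# Conventional colorimetry: integrate the reflected spectrum against each CMF
xyz_ref = cmf @ reflectance

# Active-illuminant scheme: project each CMF as a spectral illuminant and read
# a flat-response monochrome camera; the three frames ARE the XYZ values
camera_frames = np.array([(e * reflectance).sum() for e in cmf])

print(np.allclose(camera_frames, xyz_ref))  # True
```

The equivalence holds because the integral over wavelength is linear: it does not matter whether the color-matching function weights the light before reflection (as an illuminant) or after (as an analysis filter).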

  16. A Study by High-Speed Photography of Combustion and Knock in a Spark-Ignition Engine

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1942-01-01

    The study of combustion in a spark-ignition engine given in Technical Report no. 704 has been continued. The investigation was made with the NACA high-speed motion-picture camera and the NACA optical engine indicator. The camera operates at the rate of 40,000 photographs a second and makes possible the study of phenomena occurring in time intervals as short as 0.000025 second. Photographs are presented of combustion without knock and with both light and heavy knocks, the end zone of combustion being within the field of view. Time-pressure records covering the same conditions as the photographs are presented and their relations to the photographs are studied. Photographs with ignition at various advance angles are compared with a view to observing any possible relationship between pressure and flame depth. A tentative explanation of knock is suggested, which is designed to agree with the indications of the high-speed photographs and the time-pressure records.

  17. Development of a high-speed H-alpha camera system for the observation of rapid fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.; Chen, P. C.

    1988-01-01

A solid-state digital camera was developed for obtaining H alpha images of solar flares with 0.1 s time resolution. Beginning in the summer of 1988, this system will be operated in conjunction with SMM's hard X-ray burst spectrometer (HXRBS). Important electron time-of-flight effects that are crucial for determining the flare energy release processes should be detectable with these combined H alpha and hard X-ray observations. Charge-injection device (CID) cameras provide 128 x 128 pixel images simultaneously in the H alpha blue wing, line center, and red wing, or other wavelengths of interest. The data recording system employs a microprocessor-controlled electronic interface between each camera and a digital processor board that encodes the data into a serial bitstream for continuous recording by a standard video cassette recorder. Only a small fraction of the data will be permanently archived, through a direct memory access interface onto a VAX-750 computer. In addition to correlations with hard X-ray data, observations from the high-speed H alpha camera will also be correlated with optical and microwave data and with data from future MAX 1991 campaigns. Whether the recorded optical flashes are simultaneous with X-ray peaks to within 0.1 s, are delayed by tenths of seconds, or are even undetectable, the results will have implications for the validity of both thermal and nonthermal models of hard X-ray production.

  18. Camera Layout Design for the Upper Stage Thrust Cone

    NASA Technical Reports Server (NTRS)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

Engineers in the Integrated Design and Analysis Division (EV30) use a variety of tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer, a computer-aided design (CAD) software package that allows designers to create computer-generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stages of the Ares I vehicle. For Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with respect to other hardware present on the thrust cone.

  19. Overall Impact of Speed-Related Initiatives and Factors on Crash Outcomes

    PubMed Central

    D’Elia, A.; Newstead, S.; Cameron, M.

    2007-01-01

From December 2000 until July 2002, a package of speed-related initiatives and factors took place in Victoria, Australia. The broad aim of this study was to evaluate the overall impact of the package on crash outcomes. Monthly crash counts and injury severity proportions were assessed using Poisson and logistic regression models, respectively. The models measured the overall effect of the package after adjusting, as far as possible, for non-speed road safety initiatives and socio-economic factors. The speed-related package was associated with statistically significant estimated reductions in casualty crashes and suggested reductions in injury severity, with trends towards increased reductions over time. From December 2000 until July 2002, three new speed enforcement initiatives were implemented in Victoria, Australia. These initiatives were introduced in stages and involved the following key components: more covert operation of mobile speed cameras, including flashless operation; a 50% increase in speed camera operating hours; and a lowering of the cameras' speed detection threshold. In addition, during the period 2001 to 2002, the 50 km/h General Urban Speed Limit (GUSL) was introduced (January 2001), there was an increase in speed-related advertising including the "Wipe Off 5" campaign, media announcements were made relating to the above enforcement initiatives, and there was a speeding penalty restructure. These elements combine to make up a package of speed-related initiatives and factors. The package represents a broad, long-term program by Victorian government agencies to reduce speed, based on three linked strategies: more intensive Police enforcement of speed limits to deter potential offenders, i.e. 
the three new speed enforcement initiatives just described, supported by higher penalties; a reduction in the speed limit on local streets throughout Victoria from 60 km/h to 50 km/h; and provision of information using the mass media (television, radio and billboard) to reinforce the benefits of reducing low-level speeding, the central message of "Wipe Off 5". These strategies were implemented across the entire state of Victoria with the intention of covering as many road users as possible. PMID:18184508
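The Poisson-regression part of such an evaluation can be sketched on simulated data. The counts, intervention month, and effect size below are all invented; a minimal iteratively reweighted least squares (IRLS) fit of a log-linear model with a before/after indicator recovers the rate ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly casualty-crash counts: baseline 100/month, with a ~20%
# reduction (rate ratio exp(-0.223)) after the package starts at month 24.
# All numbers here are illustrative, not the Victorian data.
months = np.arange(48)
after = (months >= 24).astype(float)
X = np.column_stack([np.ones(48), after])
y = rng.poisson(100.0 * np.exp(-0.223 * after))

# Poisson regression with log link, fitted by IRLS
beta = np.array([np.log(y.mean()), 0.0])
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu           # working response
    W = mu                                 # working weights (Poisson variance)
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(np.exp(beta[1]))  # estimated rate ratio after vs before, near 0.80
```

A real evaluation like this record's would add covariates for the non-speed initiatives and socio-economic factors rather than a single step indicator.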

  20. Improving Spherical Photogrammetry Using 360° OMNI-CAMERAS: Use Cases and New Applications

    NASA Astrophysics Data System (ADS)

    Fangi, G.; Pierdicca, R.; Sturari, M.; Malinverni, E. S.

    2018-05-01

During the last few years, there has been growing exploitation of consumer-grade cameras that capture 360° images. Each device has different features, and the choice should be based on the intended use and the expected final output. The interest in this technology within the research community stems from its versatility, enabling the user to capture the world with an omnidirectional view in just one shot. The potential is huge, and the literature presents many use cases in several research domains, spanning from retail to construction, from tourism to immersive virtual reality solutions. However, the domain that could benefit the most is Cultural Heritage (CH), since these sensors are particularly suitable for documenting a real scene with architectural detail. Following previous research conducted by Fangi, who introduced his own methodology called Spherical Photogrammetry (SP), the aim of this paper is to present tests conducted with the Panono 360° omni-camera, which reaches a final resolution comparable with a traditional camera, and to validate, almost ten years after the first experiment, its reliability for architectural surveying purposes. Tests were conducted on the churches of Santa Maria della Piazza and San Francesco alle Scale in Ancona, Italy, chosen as study cases because they were previously surveyed and documented with the SP methodology. In this way, it has been possible to validate the accuracy of the new survey, performed by means of an omni-camera, against the previous one for both outdoor and indoor scenarios. The core idea behind this work is to verify whether this new sensor can replace the standard image collection phase, speeding up the process while assuring the final accuracy of the survey. The experiment conducted demonstrates that, w.r.t. 
the SP methodology developed so far, the main advantage of using 360° omni-directional cameras lies in the increased rapidity of the acquisition and panorama creation phases. Moreover, in order to foresee the implications that a wide adoption of fast and agile acquisition tools could have within the CH domain, point clouds have been generated from the same panoramas and visualized in a web application, to allow dissemination of the results among users.

  1. Direct imaging of explosives.

    PubMed

    Knapp, E A; Moler, R B; Saunders, A W; Trower, W P

    2000-01-01

Any technique that can detect nitrogen concentrations can screen for concealed explosives. However, such a technique would have to be insensitive to metal, both encasing and incidental. If images of the nitrogen concentrations could be captured, then, since form follows function, a robust screening technology could be developed. These images, however, would have to be sensitive to surface densities at or below those of the nitrogen contained in buried anti-personnel mines or in the SEMTEX that brought down Pan Am 103, approximately 200 g. Although the ability to image in three dimensions would somewhat reduce false positives, capturing collateral images of carbon and oxygen would virtually assure that nitrogenous non-explosive materials like fertilizer, Melmac dinnerware, and salami could be eliminated. We are developing such an instrument, the Nitrogen Camera, which has experimentally met these criteria with the exception of providing oxygen images, which awaits the availability of a sufficiently energetic light source. Our Nitrogen Camera technique uses an electron accelerator to produce photonuclear reactions whose unique decays it registers. Clearly, if our Nitrogen Camera were made mobile, it could be effective in detecting buried mines, either in an active battlefield situation or in the clearing of abandoned military munitions. Combat operations require that a swathe the width of an armored vehicle and 5 miles deep be screened in an hour, which is within our camera's scanning speed. Detecting abandoned munitions is technically easier, as it is free from this onerous speed requirement. We describe here our Nitrogen Camera and show its 180-pixel intensity images of elemental nitrogen in a 200 g mine simulant and in a 125 g stick of SEMTEX. We also report on our progress in creating a lorry-transportable 70 MeV electron racetrack microtron, the principal enabling technology that will allow our Nitrogen Camera to be deployed in the field.

  2. An evaluation of fish behavior upstream of the water temperature control tower at Cougar Dam, Oregon, using acoustic cameras, 2013

    USGS Publications Warehouse

    Adams, Noah S.; Smith, Collin; Plumb, John M.; Hansen, Gabriel S.; Beeman, John W.

    2015-07-06

    This report describes the initial year of a 2-year study to determine the feasibility of using acoustic cameras to monitor fish movements to help inform decisions about fish passage at Cougar Dam near Springfield, Oregon. Specifically, we used acoustic cameras to measure fish presence, travel speed, and direction adjacent to the water temperature control tower in the forebay of Cougar Dam during the spring (May, June, and July) and fall (September, October, and November) of 2013. Cougar Dam is a high-head flood-control dam, and the water temperature control tower enables depth-specific water withdrawals to facilitate adjustment of water temperatures released downstream of the dam. The acoustic cameras were positioned at the upstream entrance of the tower to monitor free-ranging subyearling and yearling-size juvenile Chinook salmon (Oncorhynchus tshawytscha). Because of the large size discrepancy, we could distinguish juvenile Chinook salmon from their predators, which enabled us to measure predators and prey in areas adjacent to the entrance of the tower. We used linear models to quantify and assess operational and environmental factors—such as time of day, discharge, and water temperature—that may influence juvenile Chinook salmon movements within the beam of the acoustic cameras. Although extensive milling behavior of fish near the structure may have masked directed movement of fish and added unpredictability to fish movement models, the acoustic-camera technology enabled us to ascertain the general behavior of discrete size classes of fish. Fish travel speed, direction of travel, and counts of fish moving toward the water temperature control tower primarily were influenced by the amount of water being discharged through the dam.

  3. Slow Speed--Fast Motion: Time-Lapse Recordings in Physics Education

    ERIC Educational Resources Information Center

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-01-01

    Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed-cameras has extended the capabilities of the tool for much smaller time scales to the 1 ms range, using frame rates of typically up to 1000 frames s[superscript -1], allowing us to study transient physics phenomena happening…

  4. Analysis of javelin throwing by high-speed photography

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yoshitaka; Matsuoka, Rutsu; Ishida, Yoshihisa; Seki, Kazuichi

    1999-06-01

A xenon multiple-exposure light source device was manufactured to record the trajectory of a flying javelin, and a wind tunnel experiment was performed with javelin models to analyze the flying characteristics of the javelin. Furthermore, the throwing form of athletes was recorded with high-speed cameras to estimate the characteristics of each athlete's form.

  5. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches may be needed, either because higher resolution is required or the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed by a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos, with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example for soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in a very fast field work. If an improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.
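The ground sampling distance quoted above follows from simple similar-triangles geometry: one pixel's footprint is the pixel pitch scaled by height over focal length. The focal length and pixel pitch below are illustrative values, not the GoPro's published specification:

```python
# Ground sampling distance (GSD) for a nadir-pointing camera on a pole.
# Parameter values are assumptions for illustration only.
def gsd_mm(height_m, focal_mm, pixel_pitch_um):
    """Ground footprint of one pixel, in millimetres."""
    return pixel_pitch_um * 1e-3 * height_m * 1e3 / focal_mm

# 4 m pole, ~3 mm focal length, ~1.55 um pixels -> about 2.07 mm per pixel,
# the same order as the 2.3 mm reported in the abstract
print(gsd_mm(height_m=4.0, focal_mm=3.0, pixel_pitch_um=1.55))
```

Because GSD scales linearly with height, halving the pole height halves the pixel footprint, at the cost of covering less ground per photo and needing more overlap.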

  6. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 megapixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war" in which the driving feature of a camera was its pixel count; even moderate-cost (~$120) DSCs would have 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  7. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

This paper proposes a novel infrared camera array guidance system capable of tracking and providing the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during landing. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.
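Recovering a marker's 3D position from two calibrated cameras, as any camera-array tracking module of this kind must, is commonly done by linear (DLT) triangulation. The projection matrices below are toy values, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel observations."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]   # null vector of A = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy calibration: identical intrinsics, cameras 1 m apart along x
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([2.0, 1.0, 10.0])   # marker 10 m in front of the pair
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
```

With noiseless observations the DLT recovers the point exactly; with real detections, accuracy degrades as the marker range grows relative to the baseline, which is why long-range guidance favours widely separated cameras.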

  8. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

    2008-11-01

Visualization of explosion phenomena is very important and essential for evaluating the performance of explosives. The phenomena, however, generate blast waves and fragments from the casings, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. This confirms that the system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualization at angles that would be unachievable under normal circumstances.

  9. Determining the frequency of open windows in motor vehicles: a pilot study using a video camera in Houston, Texas during high temperature conditions.

    PubMed

    Long, Tom; Johnson, Ted; Ollison, Will

    2002-05-01

Researchers have developed a variety of computer-based models to estimate population exposure to air pollution. These models typically estimate exposures by simulating the movement of specific population groups through defined microenvironments. Exposures in the motor vehicle microenvironment are significantly affected by air exchange rate, which in turn is affected by vehicle speed, window position, vent status, and air conditioning use. A pilot study was conducted in Houston, Texas, during September 2000 for a specific set of weather, vehicle speed, and road type conditions to determine whether useful information on the position of windows, sunroofs, and convertible tops could be obtained through the use of video cameras. Monitoring was conducted at three sites (two arterial roads and one interstate highway) on the perimeter of Harris County, located in or near areas not subject to mandated Inspection and Maintenance programs. Each site permitted an elevated view of vehicles as they proceeded through a turn, thereby exposing all windows to the stationary video camera. Five videotaping sessions were conducted over a two-day period in which the Heat Index (HI), a function of temperature and humidity, varied from 80 to 101 degrees F and vehicle speed varied from 30 to 74 mph. The resulting videotapes were processed to create a master database listing vehicle-specific data for site location, date, time, vehicle type (e.g., minivan), color, window configuration (e.g., four windows and sunroof), number of windows in each of three position categories (fully open, partially open, and closed), HI, and speed. Of the 758 vehicles included in the database, 140 (18.5 percent) were labeled as "open," indicating a window, sunroof, or convertible top was fully or partially open. 
The results of a series of stepwise linear regression analyses indicated that the probability of a vehicle in the master database being "open" was weakly affected by time of day, vehicle type, vehicle color, vehicle speed, and HI. In particular, open windows occurred more frequently when vehicle speed was less than 50 mph during periods when HI exceeded 99.9 degrees F and the vehicle was a minivan or passenger van. Overall, the pilot study demonstrated that data on factors affecting vehicle window position could be acquired through a relatively simple experimental protocol using a single video camera. Limitations of the study requiring further research include the inability to determine the status of the vehicle air conditioning system; lack of a wide range of weather, vehicle speed, and road type conditions; and the need to exclude some vehicles from statistical analyses due to ambiguous window positions.
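The regression step in such a study, modelling the probability that a vehicle is "open" from heat index and speed, can be sketched on simulated data. Note the swap: the paper used stepwise linear regression, while the sketch below uses logistic regression (the standard choice for a binary outcome), and all coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated vehicles: heat index (deg F) and speed (mph). Windows are made
# more likely to be open at high HI and low speed; coefficients are invented.
n = 758
hi = rng.uniform(80, 101, n)
speed = rng.uniform(30, 74, n)
eta = -1.0 + 0.05 * (hi - 90) - 0.04 * (speed - 50)
is_open = rng.random(n) < 1.0 / (1.0 + np.exp(-eta))

# Logistic regression fitted by Newton-Raphson (IRLS)
X = np.column_stack([np.ones(n), hi - 90, speed - 50])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (is_open - p))

print(beta)  # roughly recovers (-1.0, +0.05, -0.04)
```

The fitted signs, more "open" vehicles in hotter weather and at lower speeds, mirror the direction of the effects reported in the abstract.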

  10. High speed imaging, lightning mapping arrays and thermal imaging: a synergy for the monitoring of electrical discharges at the onset of volcanic explosions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Cimarelli, Corrado; Behnke, Sonja; Cigala, Valeria; Edens, Harald; McNutt, Stefen; Smith, Cassandra; Thomas, Ronald; Van Eaton, Alexa

    2017-04-01

    Volcanic lightning is increasingly studied because of its great potential for the detection and monitoring of ash plumes. It is observed in a large number of ash-rich volcanic eruptions, and it produces electromagnetic waves that can be detected remotely in all weather conditions. Electrical discharges in a volcanic plume can also significantly change the structural, chemical, and reactivity properties of the erupted material. Although electrical discharges are detected in various regions of the plume, those occurring at the onset of an explosion are of particular relevance for early warning and for the study of volcanic jet dynamics. In order to better constrain the electrical activity of young volcanic plumes, in 2015 we deployed at Sakurajima (Japan) a multiparametric setup including: i) a lightning mapping array (LMA) of 10 VHF antennas recording the electromagnetic waves produced by lightning at a sample rate of 25 Msps; ii) a visible-light high-speed camera (5000 frames per second, 0.5 m pixel size, 300 m field of view) shooting short movies (approx. duration 1 s) at different stages of the plume evolution, showing the location of discharges in relation to the plume; and iii) a thermal camera (25 fps, 1.5 m pixel size, 800 m field of view) continuously recording the plume and allowing the estimation of its main source parameters (volume, rise velocity, mass eruption rate). The complementarity of these three setups is demonstrated by comparing and aggregating the data at various stages of the plume development. In the earliest stages, the high-speed camera spots discrete small discharges, which appear in the LMA data as peaks superimposed on the continuous radio frequency (CRF) signal. At later stages, flashes happen less frequently and increase in length. The correspondence between high-speed camera and LMA data allows us to define a direct correlation between the length of a flash and the intensity of its electromagnetic signal. This correlation is used to estimate the evolution of the total discharges within a volcanic plume, while the superimposition of thermal and high-speed videos allows the flash locations to be contextualized with respect to the plume features and dynamics.
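The flash-length/signal-intensity calibration described above can be sketched as a simple least-squares fit; the calibration pairs below are illustrative values, not measured data from the campaign.

```python
import numpy as np

# Hypothetical calibration pairs from co-located high-speed video and
# LMA records: flash length (m) vs. peak electromagnetic amplitude
# (arbitrary units). Values are illustrative only.
flash_length_m = np.array([12.0, 25.0, 40.0, 80.0, 150.0])
signal_amplitude = np.array([3.1, 6.0, 9.8, 19.5, 37.0])

# Least-squares linear fit: length ~ a * amplitude + b
a, b = np.polyfit(signal_amplitude, flash_length_m, 1)

def estimate_length(amplitude):
    """Estimate flash length (m) from an LMA signal amplitude via the fit."""
    return a * amplitude + b

# Sum the estimated lengths of all flashes detected in a plume sequence
detected_amplitudes = [4.0, 11.2, 28.0]
total_length_m = sum(estimate_length(x) for x in detected_amplitudes)
```

Once calibrated, the fit lets the LMA record alone (which runs continuously) stand in for the high-speed camera (which records only short movies) when tracking the evolution of total discharge activity.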

  11. Research on an optoelectronic measurement system of dynamic envelope measurement for China Railway high-speed train

    NASA Astrophysics Data System (ADS)

    Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun

    2018-01-01

    Dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains, yet there has been no digital measurement system to solve this problem. This paper develops an optoelectronic measurement system based on monocular digital cameras and presents research on the measurement theory, visual target design, calibration algorithm design, software programming, and so on. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software, and a terminal computer. It has such advantages as a large measurement scale, a high degree of automation, strong anti-interference ability, noise rejection, and real-time measurement. In this paper, we resolve key technologies such as the transformation, storage, and processing of the multiple cameras' high-resolution digital images. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error is within 0.12 mm over the whole workspace. The experiment verifies the rationality of the system scheme and the correctness, precision, and effectiveness of the relevant methods.
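The two figures of merit quoted above (repeatability and scale-bar distance error) can be computed as in this minimal sketch; the measurement values are invented for illustration.

```python
import statistics

# Illustrative repeated measurements (mm) of the same envelope point;
# these numbers are made up for the sketch, not experimental data.
repeats = [512.413, 512.421, 512.418, 512.409, 512.416]

# Repeatability as the sample standard deviation of repeated measurements
repeatability_mm = statistics.stdev(repeats)

# Distance error against a calibrated scale bar of known length
scale_bar_nominal_mm = 1000.000
scale_bar_measured_mm = 1000.080
distance_error_mm = abs(scale_bar_measured_mm - scale_bar_nominal_mm)
```

With these sample values both metrics fall inside the paper's stated bounds (0.02 mm repeatability, 0.12 mm distance error).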

  12. Time-sequenced X-ray Observation of a Thermal Explosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tringe, J W; Molitoris, J D; Smilowitz, L

    The evolution of a thermally-initiated explosion is studied using a multiple-image x-ray system. HMX-based PBX 9501 is used in this work, enabling direct comparison to recently-published data obtained with proton radiography [1]. Multiple x-ray images of the explosion are obtained with image spacing of ten microseconds or more. The explosion is simultaneously characterized with a high-speed camera using an interframe spacing of 11 µs. X-ray and camera images were both initiated passively by signals from an embedded thermocouple array, as opposed to being actively triggered by a laser pulse or other external source. X-ray images show an accelerating reacting front within the explosive, and also show unreacted explosive at the time the containment vessel bursts. High-speed camera images show debris ejected from the vessel expanding at 800-2100 m/s in the first tens of µs after the container wall failure. The effective center of the initiation volume is about 6 mm from the geometric center of the explosive.

  13. In-line particle measurement in a recovery boiler using high-speed infrared imaging

    NASA Astrophysics Data System (ADS)

    Siikanen, Sami; Miikkulainen, Pasi; Kaarre, Marko; Juuti, Mikko

    2012-06-01

    Black liquor is the fuel of Kraft recovery boilers; it is sprayed into the furnace through splashplate nozzles. The operation of a recovery boiler is largely influenced by the particle size and size distribution of the black liquor. Small droplets entrained by the upward-flowing flue gas may form carry-over and foul the heat transfer surfaces, while large droplets hit the char bed and the furnace walls without being dried. In this study, particles in black liquor sprays were imaged using a high-speed infrared camera, with measurements made in an operating recovery boiler at a pulp mill. The objective was to find a suitable wavelength range and camera settings such as integration time, frame rate, and averaging.

  14. 03pd0676

    NASA Image and Video Library

    2003-03-07

    File name :DSC_0749.JPG File size :1.1MB(1174690Bytes) Date taken :2003/03/07 13:51:29 Image size :2000 x 1312 Resolution :300 x 300 dpi Number of bits :8bit/channel Protection attribute :Off Hide Attribute :Off Camera ID :N/A Camera :NIKON D1H Quality mode :FINE Metering mode :Matrix Exposure mode :Shutter priority Speed light :No Focal length :20 mm Shutter speed :1/500second Aperture :F11.0 Exposure compensation :0 EV White Balance :Auto Lens :20 mm F 2.8 Flash sync mode :N/A Exposure difference :0.0 EV Flexible program :No Sensitivity :ISO200 Sharpening :Normal Image Type :Color Color Mode :Mode II(Adobe RGB) Hue adjustment :3 Saturation Control :N/A Tone compensation :Normal Latitude(GPS) :N/A Longitude(GPS) :N/A Altitude(GPS) :N/A

  15. 03pd0517

    NASA Image and Video Library

    2002-02-19

    File name :DSC_0028.JPG File size :2.8MB(2950833Bytes) Date taken :2002/02/19 09:49:01 Image size :3008 x 2000 Resolution :300 x 300 dpi Number of bits :8bit/channel Protection attribute :Off Hide Attribute :Off Camera ID :N/A Camera :NIKON D100 Quality mode :N/A Metering mode :Matrix Exposure mode :Shutter priority Speed light :Yes Focal length :24 mm Shutter speed :1/60second Aperture :F3.5 Exposure compensation :0 EV White Balance :N/A Lens :N/A Flash sync mode :N/A Exposure difference :N/A Flexible program :N/A Sensitivity :N/A Sharpening :N/A Image Type :Color Color Mode :N/A Hue adjustment :N/A Saturation Control :N/A Tone compensation :N/A Latitude(GPS) :N/A Longitude(GPS) :N/A Altitude(GPS) :N/A

  16. 03pd0535

    NASA Image and Video Library

    2002-02-24

    File name :DSC_0047.JPG File size :2.8MB(2931574Bytes) Date taken :2002/02/24 10:06:57 Image size :3008 x 2000 Resolution :300 x 300 dpi Number of bits :8bit/channel Protection attribute :Off Hide Attribute :Off Camera ID :N/A Camera :NIKON D100 Quality mode :N/A Metering mode :Matrix Exposure mode :Shutter priority Speed light :Yes Focal length :24 mm Shutter speed :1/180second Aperture :F20.0 Exposure compensation :+0.3 EV White Balance :N/A Lens :N/A Flash sync mode :N/A Exposure difference :N/A Flexible program :N/A Sensitivity :N/A Sharpening :N/A Image Type :Color Color Mode :N/A Hue adjustment :N/A Saturation Control :N/A Tone compensation :N/A Latitude(GPS) :N/A Longitude(GPS) :N/A Altitude(GPS) :N/A

  17. Computer aided photographic engineering

    NASA Technical Reports Server (NTRS)

    Hixson, Jeffrey A.; Rieckhoff, Tom

    1988-01-01

    High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.
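Overlaying a CAD model on a film frame reduces, at its core, to projecting known 3D geometry into the image so pixel separations can be converted to surface distances. A minimal pinhole-camera sketch (the intrinsic values are assumptions for illustration, not from the paper):

```python
# Minimal pinhole-camera sketch of overlaying 3D CAD points on an image:
# project a point given in the camera frame to pixel coordinates.
fx, fy = 2000.0, 2000.0      # assumed focal lengths in pixels
cx, cy = 1024.0, 768.0       # assumed principal point

def project(point_cam):
    """Project a 3D point (camera frame, metres) to (u, v) pixels."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

# Two CAD vertices 50 mm apart at 2 m depth; their pixel separation
# gives the image-space scale for measuring surface distances.
u1, v1 = project((0.000, 0.0, 2.0))
u2, v2 = project((0.050, 0.0, 2.0))
pixels_per_mm = (u2 - u1) / 50.0
```

With a known image scale at the object's depth, relative motions between components can then be read directly off successive film frames.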

  18. Vision-Based Traffic Data Collection Sensor for Automotive Applications

    PubMed Central

    Llorca, David F.; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel. A.

    2010-01-01

    This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear-looking and a forward-looking camera, so a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions, and their relative velocities in a four-stage process: lane detection, candidate selection, vehicle classification, and tracking. Absolute velocities (average road speed) and global positions are obtained by combining the outputs of the vision sensor with data supplied by the CAN bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy for validation in automotive-industry applications. PMID:22315572

  19. Vision-based traffic data collection sensor for automotive applications.

    PubMed

    Llorca, David F; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel A

    2010-01-01

    This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear-looking and a forward-looking camera, so a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions, and their relative velocities in a four-stage process: lane detection, candidate selection, vehicle classification, and tracking. Absolute velocities (average road speed) and global positions are obtained by combining the outputs of the vision sensor with data supplied by the CAN bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy for validation in automotive-industry applications.
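The fusion step described in the abstract, turning vision-derived relative velocities into absolute road speeds using the host vehicle's CAN-bus speed, is arithmetically simple; a sketch with illustrative names and values:

```python
# Sketch of the sensor-fusion step: relative velocities from the vision
# sensor plus the host vehicle's own speed from the CAN bus give
# absolute speeds. Function name and values are illustrative.
def absolute_speeds(host_speed_kmh, relative_speeds_kmh):
    """Add the host speed (CAN bus) to vision-derived relative speeds."""
    return [host_speed_kmh + dv for dv in relative_speeds_kmh]

# Host travels at 100 km/h; one vehicle closes at +10 km/h relative,
# another falls back at -15 km/h relative.
speeds = absolute_speeds(100.0, [10.0, -15.0])
average_road_speed = sum(speeds) / len(speeds)
```

Global positions follow the same pattern, offsetting the vision sensor's relative positions from the GPS fix of the host vehicle.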

  20. Real-time determination of fringe pattern frequencies: An application to pressure measurement

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Piroozan, Parham

    2007-05-01

    Retrieving information from fringe patterns in real time is of great interest in scientific and engineering applications of optical methods. This paper presents a method for fringe-frequency determination based on the ability of neural networks to recognize signals that are similar, but not identical, to the signals used to train them. Sampled patterns are generated by calibration and stored in memory, and incoming patterns are analyzed by a back-propagation neural network at the speed of the recording device, a CCD camera. This method of information retrieval is used to measure pressures in a boundary-layer flow. The sensor combines optics and electronics to analyze dynamic pressure distributions and to feed information to a control system capable of preserving the stability of the flow.
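The calibrate-then-match idea can be illustrated without the paper's neural network: the sketch below stores sampled patterns at known frequencies and matches an incoming pattern by normalized correlation, a deliberately simpler stand-in for the back-propagation classifier.

```python
import numpy as np

# Calibration: store reference fringe patterns at known frequencies.
n = 256
x = np.arange(n)
calib_freqs = [4, 8, 12, 16]                     # cycles per record
calib = {f: np.cos(2 * np.pi * f * x / n) for f in calib_freqs}

def match_frequency(pattern):
    """Return the calibration frequency whose pattern correlates best."""
    def score(ref):
        a = pattern - pattern.mean()
        b = ref - ref.mean()
        return abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(calib, key=lambda f: score(calib[f]))

# A noisy, phase-shifted incoming pattern near 8 cycles still matches.
rng = np.random.default_rng(0)
incoming = np.cos(2 * np.pi * 8 * x / n + 0.3) + 0.2 * rng.standard_normal(n)
```

Like the trained network, the lookup tolerates patterns that are similar but not identical to the calibration set; the network additionally interpolates between calibration frequencies, which this nearest-match sketch does not.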

  1. Qualification Tests of Micro-camera Modules for Space Applications

    NASA Astrophysics Data System (ADS)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  2. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super-high-speed FARCO (S-FARCO) able to process several hundred thousand frames per second. The principal advantage of the new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on the required correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When the software correlation filter was applied to a high-security cellular-phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. The filtering correlation therefore works effectively on low-resolution images such as web-based images or faces captured by a monitoring camera.

  3. Teaching high-speed photography and photo-instrumentation

    NASA Astrophysics Data System (ADS)

    Davidhazy, Andrew

    2005-03-01

    As the tools available to the high-speed photographer have become more powerful, the underlying technology has increased in complexity, often putting in-the-field troubleshooting or adaptation beyond the reach of most practitioners; this specialization has also driven many systems beyond the budgets of high schools, community colleges, and undergraduate, non-research-funded universities. In spite of this, and in the belief that fundamental techniques, reasoning, and approaches have not changed much over the years, several photo-instrumentation courses in the Imaging and Photographic Technology program at the Rochester Institute of Technology present to a couple dozen undergraduate students a year the principles associated with various imaging systems and techniques for the visualization and data analysis of high-speed or "invisible" phenomena. This paper reviews the objectives and philosophy of these courses in the context of a total imaging-technology education, and describes and illustrates the topics currently included in the program. In brief, undergraduate students are introduced to calibration and time-measurement concepts; instantaneous and repetitive time-sampling equipment; various visualization technologies; strip and streak cameras and applications using film and improvised digital recorders; and basic velocimetry techniques, including sensitometric velocimetry and synchro-ballistic photography, along with other related techniques.

  4. High speed imaging television system

    DOEpatents

    Wilkinson, William O.; Rabenhorst, David W.

    1984-01-01

    A television system for observing an event which provides a composite video output comprising the serially interlaced images from a plurality of individual television cameras, whereby the time resolution of the system is greater than the time resolution of any of the individual cameras.
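The serial-interlacing idea can be sketched as a timestamp merge: N cameras triggered with staggered phase offsets yield a composite stream whose effective frame interval is 1/N of a single camera's. The timestamps below are illustrative.

```python
# Merge per-camera (timestamp, frame) lists into one time-ordered stream.
def interlace(streams):
    """Serially interlace frames from several staggered cameras."""
    merged = [item for stream in streams for item in stream]
    return sorted(merged, key=lambda item: item[0])

period = 1.0  # single-camera frame period (arbitrary units)
cam_a = [(0.00, "A0"), (1.00, "A1")]
cam_b = [(0.25, "B0"), (1.25, "B1")]   # triggered 1/4 period later
cam_c = [(0.50, "C0"), (1.50, "C1")]
cam_d = [(0.75, "D0"), (1.75, "D1")]

composite = interlace([cam_a, cam_b, cam_c, cam_d])
# Composite frames arrive every period / 4, quadrupling time resolution.
```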

  5. New NASA Images of Irma's Towering Clouds

    NASA Image and Video Library

    2017-09-08

    On Sept. 7, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite passed over Hurricane Irma at approximately 11:20 a.m. local time. The MISR instrument comprises nine cameras that view the Earth at different angles, and since it takes roughly seven minutes for all nine cameras to capture the same location, the motion of the clouds between images allows scientists to calculate the wind speed at the cloud tops. The animated GIF shows Irma's motion over the seven minutes of the MISR imagery. North is toward the top of the image. This composite image shows Hurricane Irma as viewed by the central, downward-looking camera (left), as well as the wind speeds (right) superimposed on the image. The length of the arrows is proportional to the wind speed, while their color shows the altitude at which the winds were calculated. At the time the image was acquired, Irma's eye was located approximately 60 miles (100 kilometers) north of the Dominican Republic and 140 miles (230 kilometers) north of its capital, Santo Domingo. Irma was a powerful Category 5 hurricane, with wind speeds at the ocean surface up to 185 miles (300 kilometers) per hour, according to the National Oceanic and Atmospheric Administration. The MISR data show that at cloud top, winds near the eye wall (the most destructive part of the storm) were approximately 90 miles per hour (145 kilometers per hour), and the maximum cloud-top wind speed throughout the storm calculated by MISR was 135 miles per hour (220 kilometers per hour). While the hurricane's dominant rotation direction is counter-clockwise, winds near the eye wall are consistently pointing outward from it. This is an indication of outflow, the process by which a hurricane draws in warm, moist air at the surface and ejects cool, dry air at its cloud tops. These data were captured during Terra orbit 94267. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21946
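The cloud-motion wind estimate described above reduces to displacement over time; a back-of-the-envelope sketch, where the pixel size and displacement are assumed values for illustration (MISR's actual retrieval matches features across its nine view angles):

```python
# Wind speed from feature displacement between views separated in time.
def wind_speed_ms(displacement_px, pixel_size_m, dt_s):
    """Wind speed (m/s) from a pixel displacement over time dt."""
    return displacement_px * pixel_size_m / dt_s

# A cloud feature moving 60 pixels (assumed 275 m/pixel) over ~7 minutes
speed = wind_speed_ms(60, 275.0, 7 * 60)
speed_mph = speed * 2.23694
```

With these assumed numbers the result lands near 88 mph, comparable to the ~90 mph cloud-top winds MISR reported near Irma's eye wall.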

  6. Experiments with synchronized sCMOS cameras

    NASA Astrophysics Data System (ADS)

    Steele, Iain A.; Jermak, Helen; Copperwheat, Chris M.; Smith, Robert J.; Poshyachinda, Saran; Soonthorntham, Boonrucksar

    2016-07-01

    Scientific CMOS (sCMOS) cameras can combine low noise with high readout speeds, and they do not suffer the charge-multiplication noise that effectively reduces the quantum efficiency of electron-multiplying CCDs by a factor of 2. As such they have strong potential for fast photometry and polarimetry instrumentation. In this paper we describe the results of laboratory experiments using a pair of commercial off-the-shelf sCMOS cameras based on a 4-transistor-per-pixel architecture. In particular, using both stable and pulsed light sources, we evaluate the timing precision that may be obtained when the camera readouts are synchronized either in software or electronically. We find that software synchronization can introduce an error of 200 ms; with electronic synchronization, any error is below the limit (~50 ms) of our simple measurement technique.
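One simple way to bound the synchronization error with a pulsed source, in the spirit of the experiment above, is to compare the frame indices in which each camera records the flash; this sketch and its values are illustrative, not the authors' procedure.

```python
# Each camera records a brief flash; the offset between the frame
# indices in which the flash appears bounds the synchronization error.
def flash_frame(intensities):
    """Index of the brightest frame (where the pulse landed)."""
    return max(range(len(intensities)), key=lambda i: intensities[i])

frame_period_ms = 10.0                  # assumed 100 fps readout
cam1 = [5, 5, 90, 6, 5, 5]              # flash lands in frame 2
cam2 = [5, 5, 5, 88, 5, 5]              # flash lands in frame 3

offset_frames = flash_frame(cam2) - flash_frame(cam1)
sync_error_upper_bound_ms = abs(offset_frames) * frame_period_ms
```

The resolution of this check is one frame period, which is why a faster readout (or an electronic trigger) is needed to measure errors much below tens of milliseconds.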

  7. Hypervelocity impact studies using a rotating mirror framing laser shadowgraph camera

    NASA Technical Reports Server (NTRS)

    Parker, Vance C.; Crews, Jeanne Lee

    1988-01-01

    The need to study the effects of the impact of micrometeorites and orbital debris on various space-based systems has brought together the technologies of several companies and individuals in order to provide a successful instrumentation package. A light gas gun was employed to accelerate small projectiles to speeds in excess of 7 km/sec. Their impact on various targets is being studied with the help of a specially designed continuous-access rotating-mirror framing camera. The camera provides 80 frames of data at up to 1 × 10^6 frames/sec with exposure times of 20 nsec.

  8. KSC-04pd1226

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  9. KSC-04pd1220

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  10. KSC-04pd1219

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  11. KSC-04pd1227

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  12. Low, slow, small target recognition based on spatial vision network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

    Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitored area, this approach requires many cameras, which leads to large overlapping and repeated coverage, higher costs, and considerable waste. To reduce monitoring cost and address the difficult problem of finding, identifying, and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation results demonstrate that the proposed method performs well.

  13. Next-generation digital camera integration and software development issues

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

    This paper investigates the complexities associated with the development of next-generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality, and interoperability; this is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market: the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between image captures. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed, and the real-time operating system. This paper presents the LSI DCAM-101, a single-chip digital camera solution, with an overview of its architecture and of the hardware and software challenges in supporting streaming video on such a complex device. Issues presented include the development of the data-flow software architecture, and testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture is also presented.

  14. HDR ¹⁹²Ir source speed measurements using a high speed video camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, Gabriel P.; Viana, Rodrigo S. S.; Yoriyaz, Hélio

    Purpose: The dose delivered with an HDR ¹⁹²Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high-speed video camera was used to record the movement of a ¹⁹²Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s, with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell-time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which are within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
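The scale of the transit component follows from the measured average speed: the sketch below uses the ~33 cm/s figure from the abstract, while the dose rate is an assumed value for illustration only.

```python
# Rough sketch of the transit-dose idea: time spent moving between dwell
# positions, at the measured ~33 cm/s average speed, contributes dose
# that the programmed dwell times alone do not account for.
def transit_time_s(interdwell_cm, speed_cm_s=33.0):
    """Transit time between two dwell positions at constant average speed."""
    return interdwell_cm / speed_cm_s

def transit_dose_mgy(interdwell_cm, dose_rate_mgy_s, speed_cm_s=33.0):
    """Transit dose accumulated at a point while the source passes (crude)."""
    return dose_rate_mgy_s * transit_time_s(interdwell_cm, speed_cm_s)

# A 5 cm step takes ~0.15 s; at an assumed 100 mGy/s near-source dose
# rate this gives ~15 mGy, small next to prescribed doses of 3-10 Gy.
dose = transit_dose_mgy(5.0, 100.0)
```

This order-of-magnitude estimate is consistent with the abstract's conclusion that the transit component is much lower than prescribed treatment doses, while still being worth evaluating per clinical case.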

  15. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have advantages over traditional CCD sensors, and imaging systems based on CMOS have become a hot topic in research and development. To achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SL75T, and a CameraLink interface and the AM41V4 CMOS image sensor are used to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed, 500 frames-per-second CMOS image sensor with a global shutter and a 4/3" optical format; it uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is captured by the CMOS detector, which converts it to electronic signals and sends them to the FPGA. The FPGA processes the received data and transmits it through the CameraLink interface, configured in full mode, to an upper computer equipped with acquisition cards, where the images are stored, visualized, and processed. This paper explains the structure and principle of the system and presents its hardware and software design: the FPGA provides the drive clock for the CMOS sensor, and the sensor data are converted to LVDS signals and transmitted to the data acquisition cards. After simulation, the paper presents a row-transfer timing sequence for the CMOS sensor. The system achieves real-time image acquisition and external control.

  16. Time-resolved nanoseconds dynamics of ultrasound contrast agent microbubbles manipulated and controlled by optical tweezers

    NASA Astrophysics Data System (ADS)

    Garbin, Valeria; Cojoc, Dan; Ferrari, Enrico; Di Fabrizio, Enzo; Overvelde, Marlies L. J.; Versluis, Michel; van der Meer, Sander M.; de Jong, Nico; Lohse, Detlef

    2006-08-01

    Optical tweezers enable non-destructive, contact-free manipulation of ultrasound contrast agent (UCA) microbubbles, which are used in medical imaging for enhancing the echogenicity of the blood pool and to quantify organ perfusion. The understanding of the fundamental dynamics of ultrasound-driven contrast agent microbubbles is a first step for exploiting their acoustical properties and to develop new diagnostic and therapeutic applications. In this respect, optical tweezers can be used to study UCA microbubbles under controlled and repeatable conditions, by positioning them away from interfaces and from neighboring bubbles. In addition, a high-speed imaging system is required to record the dynamics of UCA microbubbles in ultrasound, as their oscillations occur on the nanoseconds timescale. In this work, we demonstrate the use of an optical tweezers system combined with a high-speed camera capable of 128-frame recordings at up to 25 million frames per second (Mfps), for the study of individual UCA microbubble dynamics as a function of the distance from solid interfaces.

  17. High-speed DNA-based rolling motors powered by RNase H

    PubMed Central

    Yehl, Kevin; Mugler, Andrew; Vivek, Skanda; Liu, Yang; Zhang, Yun; Fan, Mengzhen; Weeks, Eric R.

    2016-01-01

    DNA-based machines that walk by converting chemical energy into controlled motion could be of use in applications such as next generation sensors, drug delivery platforms, and biological computing. Despite their exquisite programmability, DNA-based walkers are, however, challenging to work with due to their low fidelity and slow rates (~1 nm/min). Here, we report DNA-based machines that roll rather than walk, and consequently have a maximum speed and processivity that is three-orders of magnitude greater than conventional DNA motors. The motors are made from DNA-coated spherical particles that hybridise to a surface modified with complementary RNA; motion is achieved through the addition of RNase H, which selectively hydrolyses hybridised RNA. Spherical motors move in a self-avoiding manner, whereas anisotropic particles, such as dimerised particles or rod-shaped particles travel linearly without a track or external force. Finally, we demonstrate detection of single nucleotide polymorphism by measuring particle displacement using a smartphone camera. PMID:26619152
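The displacement readout mentioned in the last sentence can be sketched as simple particle tracking: locate the particle in two frames by intensity-weighted centroid and report how far it moved. The synthetic frames below are illustrative, not smartphone data.

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (row, col) of a 2D frame."""
    total = frame.sum()
    rows, cols = np.indices(frame.shape)
    return (rows * frame).sum() / total, (cols * frame).sum() / total

# Synthetic frames with one bright particle that moves 12 pixels.
frame1 = np.zeros((32, 32)); frame1[10, 10] = 1.0
frame2 = np.zeros((32, 32)); frame2[10, 22] = 1.0

(r1, c1), (r2, c2) = centroid(frame1), centroid(frame2)
displacement_px = np.hypot(r2 - r1, c2 - c1)
```

Given the camera's pixel-to-micron scale and the elapsed time, the same displacement measurement yields the motor speed, and stalled particles (e.g., over a mismatch site) are distinguished by near-zero displacement.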

  18. Handheld hyperspectral imager system for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-08-01

    A small, handheld, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field-tested in early 2003. The Sherlock spectral imaging camera was designed for remote gas-leak detection; however, the architecture of the camera is versatile enough to be applied to numerous other applications, such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, and standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications. The Sherlock has an embedded PowerPC and performs real-time image-processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location; in addition, software upgrades can be performed over Ethernet, eliminating the need to send the camera back to the factory for a retrofit. With a mouse and keyboard connected through the USB port, the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.

  19. Hand-held hyperspectral imager for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-03-01

    A small, handheld, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field-tested in early 2003. The Sherlock spectral imaging camera was designed for remote gas-leak detection; however, the architecture of the camera is versatile enough to be applied to numerous other applications, such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, and standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications. The Sherlock has an embedded PowerPC and performs real-time image-processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location; in addition, software upgrades can be performed over Ethernet, eliminating the need to send the camera back to the factory for a retrofit. With a mouse and keyboard connected through the USB port, the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.

  20. A High-Speed Spectroscopy System for Observing Lightning and Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Boggs, L.; Liu, N.; Austin, M.; Aguirre, F.; Tilles, J.; Nag, A.; Lazarus, S. M.; Rassoul, H.

    2017-12-01

    Here we present a high-speed spectroscopy system that can be used to record atmospheric electrical discharges, including lightning and transient luminous events. The system consists of a Phantom V1210 high-speed camera, a Volume Phase Holographic (VPH) grism, an optional optical slit, and lenses. The spectrograph can record videos at speeds of 200,000 frames per second and has an effective wavelength band of 550-775 nm for the first-order spectra. When the slit is used, the system has a spectral resolution of about 0.25 nm per pixel. We have constructed a durable enclosure made of heavy-duty aluminum to house the high-speed spectrograph. It has two fans for continuous air flow and a removable tray to mount the spectrograph components. In addition, a Watec video camera (30 frames per second) is attached to the top of the enclosure to provide a scene view. A heavy-duty Pelco pan/tilt motor is used to position the enclosure and can be controlled remotely through a Raspberry Pi computer. An observation campaign was conducted during the summer and fall of 2017 at the Florida Institute of Technology. Several close cloud-to-ground discharges were recorded at 57,000 frames per second. The spectra of a downward stepped negative leader and a positive cloud-to-ground return stroke will be reported.
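    As a quick consistency check on the figures quoted above, the first-order band and the slit-limited spectral resolution imply a spectrum spanning roughly 900 detector pixels:

```python
# Consistency check of the spectrograph figures quoted above:
# a 550-775 nm band sampled at ~0.25 nm per pixel spans ~900 pixels.
band_nm = 775.0 - 550.0          # first-order wavelength band, nm
res_nm_per_px = 0.25             # quoted spectral resolution, nm/pixel
print(band_nm / res_nm_per_px)   # 900.0
```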

  1. Photo-Machining of Semiconductor Related Materials with Femtosecond Laser Ablation and Characterization of Its Properties

    NASA Astrophysics Data System (ADS)

    Yokotani, Atushi; Mizuno, Toshio; Mukumoto, Toru; Kawahara, Kousuke; Ninomiya, Takahumi; Sawada, Hiroshi; Kurosawa, Kou

    We have analyzed the femtosecond-laser drilling process on a silicon surface in order to investigate the degree of thermal effect during dicing of very thin silicon substrates. A regeneratively amplified Ti:Al2O3 laser (E = 30-500 μJ/pulse, τ = 200 fs, λ = 780 nm, f = 10 Hz) was used and focused onto a 50-μm-thick silicon sample. An ICCD (Intensified Charge-Coupled Device) camera with a high-speed gate of 5 ns was used to image the hole during processing. First, we investigated the dependence of the hole-formation speed on laser energy. We found that the larger the energy, the slower the formation speed at which the minimum hole was obtained. Furthermore, under defocused conditions, even when a smaller energy density was used, a very slow formation speed and much larger thermal effects were observed simultaneously. We can therefore say that the degree of thermal effect is not simply related to the energy density of the laser but is strongly related to the formation speed, which can be measured with the ICCD camera. A similar tendency was also obtained for other materials important for the fabrication of ICs (Al, Cu, SiO2, and acrylic resin).

  2. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical-robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, and precision and accuracy results in ideal and simulated surgical settings. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  3. Measurement of material mechanical properties in microforming

    NASA Astrophysics Data System (ADS)

    Yun, Wang; Xu, Zhenying; Hui, Huang; Zhou, Jianzhong

    2006-02-01

    As the rapidly growing market for micro-electro-mechanical systems drives development and application ranging from mobile phones to medical apparatus, the need for metal micro-parts is increasing steadily. Microforming technology challenges conventional plastic-forming technology. Previous findings have shown that, if the grain size of the specimen remains constant, the flow stress changes with increasing miniaturization, as do the necking elongation, the uniform elongation, etc. It is impossible to obtain the specimen's material properties on a conventional tensile-test machine, especially at high precision. Therefore, a new measurement method for obtaining the specimen's mechanical properties with high precision is introduced. With this method, combining the high speed of a Charge-Coupled Device (CCD) camera with the high precision of a Coordinate Measuring Machine (CMM), the elongation and tensile strain in the gauge length are obtained. The elongation, yield stress, and other mechanical properties can be calculated from the relationship between the images and the CCD camera movement. This measuring method can be extended to other experiments, such as the alignment of tool and specimen and the micro-drawing process.
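    The quantities obtained from the CCD images reduce to the standard engineering stress-strain relations; a minimal sketch (gauge length, load, and cross-section values are illustrative, not from the paper):

```python
# Hedged sketch: engineering strain and stress from a gauge-length change
# measured optically (e.g. from CCD frames); all numbers are illustrative.

def eng_strain(l0_mm: float, l_mm: float) -> float:
    """Engineering strain from initial and deformed gauge lengths."""
    return (l_mm - l0_mm) / l0_mm

def eng_stress_mpa(force_n: float, width_mm: float, thick_mm: float) -> float:
    """Engineering stress; N/mm^2 is numerically equal to MPa."""
    return force_n / (width_mm * thick_mm)

# 10 mm gauge stretched to 10.25 mm under 12 N on a 2.0 x 0.1 mm section
print(eng_strain(10.0, 10.25))         # 0.025
print(eng_stress_mpa(12.0, 2.0, 0.1))  # 60.0 MPa
```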

  4. Modeling of a microchannel plate working in pulsed mode

    NASA Astrophysics Data System (ADS)

    Secroun, Aurelia; Mens, Alain; Segre, Jacques; Assous, Franck; Piault, Emmanuel; Rebuffie, Jean-Claude

    1997-05-01

    Microchannel plates (MCPs) are used in high-speed cinematography systems such as MCP framing cameras and streak-camera readouts. In order to know the dynamic range or the signal-to-noise ratio available in these devices, a good knowledge of MCP performance is essential. Our simulation focuses on the pulsed-light working mode of the microchannel plate, in which the signal level is relatively high and its duration can be shorter than the time needed to replenish the channel wall, whereas earlier papers mainly studied night-vision applications with weak, continuous, nearly single-electron input signals. Our method also allows the simulation of saturation phenomena due to the large number of electrons involved, whereas the discrete models previously used for simulating pulsed mode might not be properly adapted. We present the choices made in modeling the microchannel, specifically regarding the physical laws, the secondary-emission parameters, and the 3D geometry. Finally, first results are shown.
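    The paper's pulsed-mode model is considerably more elaborate, but the zeroth-order textbook estimate of MCP gain that it refines can be sketched as follows (the secondary-emission coefficient and number of wall collisions are illustrative, not values from the paper):

```python
# Hedged toy model of microchannel-plate gain: if each of n wall
# collisions produces on average delta secondary electrons, the mean
# gain of the channel is delta**n. This ignores the saturation and
# wall-replenishment effects that the paper's model addresses.

def mcp_gain(delta: float, n_strikes: int) -> float:
    """First-order mean gain of a single microchannel."""
    return delta ** n_strikes

# e.g. delta = 2 secondaries per strike over 12 strikes
print(mcp_gain(2.0, 12))  # 4096.0
```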

  5. Integration of fringe projection and two-dimensional digital image correlation for three-dimensional displacements measurements

    NASA Astrophysics Data System (ADS)

    Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.

    2016-12-01

    A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques are employed simultaneously using an RGB camera and a color-encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed. The potential of the proposed methodology has been demonstrated in the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.

  6. A robust two-way switching control system for remote piloting and stabilization of low-cost quadrotor UAVs

    NASA Astrophysics Data System (ADS)

    Ripamonti, Francesco; Resta, Ferruccio; Vivani, Andrea

    2015-04-01

    The aim of this paper is to present two control logics and an attitude estimator for UAV stabilization and remote piloting that are as robust as possible to physical-parameter variation and to other external disturbances. Moreover, they need to be implementable on low-cost micro-controllers in order to be attractive for commercial drones. As an example, possible applications of the two switching control logics could be area surveillance and facial recognition by means of a camera mounted on the drone: the high-computational-speed logic is used to reach the target, after which the high-stability logic is activated in order to complete the recognition tasks.

  7. Real-time FPGA-based radar imaging for smart mobility systems

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Neri, Bruno

    2016-04-01

    The paper presents an X-band FMCW (Frequency Modulated Continuous Wave) radar imaging system, called X-FRI, for surveillance in smart mobility applications. X-FRI allows for detecting the presence of targets (e.g. obstacles in a railway or urban road crossing, or ships in a small harbor), as well as their speed and position. Unlike alternative solutions based on LIDAR or camera systems, X-FRI operates in real time day and night, even in bad lighting and weather conditions. The radio-frequency transceiver is realized on a single board with COTS (Commercial Off The Shelf) components. An FPGA-based baseband platform allows for real-time radar image processing.
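    For reference, range in an FMCW radar follows from the beat frequency between the transmitted and received chirps; a minimal sketch of the textbook relation (the parameters below are illustrative, not X-FRI's actual design values):

```python
# Hedged sketch of the basic FMCW range equation:
#   R = c * f_beat * T / (2 * B)
# where B is the chirp bandwidth (Hz), T the chirp duration (s), and
# f_beat the measured beat frequency (Hz). Values are illustrative.

C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat: float, bandwidth: float, chirp_time: float) -> float:
    """Target range (m) from the measured beat frequency."""
    return C * f_beat * chirp_time / (2.0 * bandwidth)

# 150 MHz bandwidth, 1 ms chirp, 100 kHz beat frequency -> 100 m
print(fmcw_range(100e3, 150e6, 1e-3))  # 100.0
```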

  8. How to study the Doppler effect with Audacity software

    NASA Astrophysics Data System (ADS)

    Adriano Dias, Marco; Simeão Carvalho, Paulo; Rodrigues Ventura, Daniel

    2016-05-01

    The Doppler effect is one of the recurring themes in college and high-school classes. In order to contextualize the topic and engage students in their own learning process, we propose a simple and easily accessible activity: having students analyze videos available on the internet. The sound of the engine of a vehicle passing the camera is recorded on the video; it is then analyzed with the free software Audacity by measuring the frequency of the sound as the vehicle approaches and recedes from the observer. The speed of the vehicle is determined by applying the Doppler-effect equations for acoustic waves.
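    The speed determination described above follows directly from the acoustic Doppler equations; a minimal sketch (the frequencies below are illustrative, not measurements from the paper's videos):

```python
# Hedged sketch: vehicle speed from the Doppler shift of its engine sound,
# as read off an Audacity spectrum. f_app and f_rec are the frequencies
# (Hz) measured while the vehicle approaches and recedes; c is the speed
# of sound in air (~343 m/s at 20 degrees C).
# From f_app = f0*c/(c - v) and f_rec = f0*c/(c + v), the unknown source
# frequency f0 cancels, giving v = c*(f_app - f_rec)/(f_app + f_rec).

def vehicle_speed(f_app: float, f_rec: float, c: float = 343.0) -> float:
    """Source speed (m/s) from approach/recede frequencies."""
    return c * (f_app - f_rec) / (f_app + f_rec)

# Example: 520 Hz approaching, 480 Hz receding
v = vehicle_speed(520.0, 480.0)
print(round(v, 1), "m/s")  # 13.7 m/s (about 49 km/h)
```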

  9. Development of high definition OCT system for clinical therapy of skin diseases

    NASA Astrophysics Data System (ADS)

    Baek, Daeyul; Seo, Young-Seok; Kim, Jung-Hyun

    2018-02-01

    OCT is a non-invasive imaging technique that can be applied to diagnose various skin diseases. Since its introduction in 1997, dermatology has used OCT technology to obtain high-quality images of human skin. To diagnose skin diseases accurately, it is essential to develop OCT equipment that can obtain high-quality images. We therefore developed a system that obtains high-quality images by using a 1300 nm light source with a wide bandwidth and deep penetration depth, high-resolution imaging, and a camera capable of high-sensitivity, high-speed processing. We introduce the performance of the developed system and clinical application data.

  10. Unsteady motion of laser ablation plume by vortex induced by the expansion of curved shock wave

    NASA Astrophysics Data System (ADS)

    Tran, D. T.; Mori, K.

    2017-02-01

    There are a number of industrial applications of laser ablation in a gas atmosphere. When an intense pulsed laser beam irradiates a solid surface in a gas atmosphere, the surface material is ablated and expands into the atmosphere. At the same time, a spherical shock wave is launched by the ablation jet, inducing unsteady flow around the target surface. The ablated materials, luminous and thus acting as tracers, exhibit unusual unsteady motions depending on the experimental conditions. Using a high-speed video camera (HPV-X2), the unsteady motion of the ablated materials is visualized at frame rates above 10^6 fps and qualitatively characterized.

  11. Study on the laser irradiation characteristics of NEPE propellant in different oxygen concentrations

    NASA Astrophysics Data System (ADS)

    Xiang, Hengsheng; Chen, Xiong; Zhou, Changsheng

    2016-01-01

    The ignition and combustion characteristics of nitrate ester plasticized polyether (NEPE) propellant in ambient gases of different oxygen concentrations were studied using a CO2 laser, an infrared thermometer, and a high-speed camera. Flame-intensity data of the propellant were collected by a photodiode; the propellant flame temperature was measured by the infrared thermometer. The experimental results show that the time the NEPE propellant takes to reach stable combustion shortens with increasing oxygen concentration, and that the flame peak temperature measured by the infrared thermometer increases with oxygen concentration when the oxygen concentration is below 30% by volume, then decreases as the oxygen concentration increases further.

  12. Best practices to optimize intraoperative photography.

    PubMed

    Gaujoux, Sébastien; Ceribelli, Cecilia; Goudard, Geoffrey; Khayat, Antoine; Leconte, Mahaut; Massault, Pierre-Philippe; Balagué, Julie; Dousset, Bertrand

    2016-04-01

    Intraoperative photography is used extensively for communication, research, and teaching. The objective of the present work was to define, using a standardized methodology and a literature review, the best technical conditions for intraoperative photography. Using either a smartphone camera, a bridge camera, or a single-lens reflex (SLR) camera, photographs were taken under various standard conditions by a professional photographer. All images were independently assessed, blinded to the technical conditions, to define the best shooting conditions and methods. For better photographs, an SLR camera with manual settings should be used. Photographs should be centered and taken vertically, orthogonal to the surgical field, with a linear scale to avoid errors in perspective. The shooting distance should be about 75 cm using an 80-100 mm focal lens. Flash should be avoided and low-powered scialytic light should be used without focus. The operative field should be clean, wet surfaces should be avoided, and metal instruments should be hidden to avoid reflections. For an SLR camera, the ISO speed should be as low as possible, the autofocus area-selection mode should be single-point AF, the shutter speed should be above 1/100 second, and the aperture should be as narrow as possible, above f/8. For a smartphone, use the high-dynamic-range setting if available; the use of flash, digital filters, effect apps, and digital zoom is not recommended. If a few basic technical rules are known and applied, high-quality photographs can be taken by amateur photographers and can meet the standards accepted in clinical practice, academic communication, and publications. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical, synchronized industrial or high-end consumer cameras. This paper considers an application with a pair of low-cost portable cameras, with differing parameters, of the kind found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration, and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart-phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration based on a disparity-space parameterisation and the single-cluster PHD filter.

  14. Performance of Backshort-Under-Grid Kilopixel TES Arrays for HAWC+

    NASA Technical Reports Server (NTRS)

    Staguhn, J. G.; Benford, D. J.; Dowell, C. D.; Fixsen, D. J.; Hilton, G. C.; Irwin, K. D.; Jhabvala, C. A.; Maher, S. F.; Miller, T. M.; Moseley, S. H.; hide

    2016-01-01

    We present results from laboratory detector characterizations of the first kilopixel BUG arrays for the High-resolution Wideband Camera Plus (HAWC+), which is the imaging far-infrared polarimeter camera for the Stratospheric Observatory for Infrared Astronomy (SOFIA). Our tests demonstrate that the array performance is consistent with the predicted properties. Here, we highlight results obtained for the thermal conductivity, noise performance, and detector speed, and first optical results demonstrating the pixel yield of the arrays.

  15. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  16. Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide

    DTIC Science & Technology

    2016-07-27

    a five year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in... installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely... assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and

  17. Efficient large-scale graph data optimization for intelligent video surveillance

    NASA Astrophysics Data System (ADS)

    Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming

    2017-08-01

    Society is rapidly adopting cameras in a wide variety of locations and applications: site traffic monitoring, parking-lot surveillance, in cars, and in smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart-camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart-home and office automation to large-area surveillance and traffic monitoring. Dense camera networks, in which most cameras have large overlapping fields of view, are well researched; we instead focus on sparse camera networks. A sparse camera network performs large-area surveillance with as few cameras as possible, so most cameras do not overlap each other's field of view. This task is challenging due to the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this review paper, we present a comprehensive survey of recent research results addressing topology learning, object-appearance modeling, and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.

  18. Measurement of surface shear stress vector beneath high-speed jet flow using liquid crystal coating

    NASA Astrophysics Data System (ADS)

    Wang, Cheng-Peng; Zhao, Ji-Song; Jiao, Yun; Cheng, Ke-Ming

    2018-05-01

    The shear-sensitive liquid crystal coating (SSLCC) technique is investigated in the high-speed jet flow of a micro-wind-tunnel. An approach to measure surface shear stress vector distribution using the SSLCC technique is established, where six synchronous cameras are used to record the coating color at different circumferential view angles. Spatial wall shear stress vector distributions on the test surface are obtained at different velocities. The results are encouraging and demonstrate the great potential of the SSLCC technique in high-speed wind-tunnel measurement.

  19. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion-correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to the head, which are then tracked by expensive and bulky motion-tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion-tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion-tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating unnecessary subdivision of frames.
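    The 6-degrees-of-freedom rigid transformation such a tracker estimates can be recovered from corresponding point sets (e.g. head-surface points seen by a depth camera) with the classical Kabsch/SVD method; a minimal sketch on synthetic data (the paper's actual pipeline is not specified here):

```python
# Hedged sketch of rigid 6-DOF registration between two 3xN point sets
# using the Kabsch/SVD method; the point data below is synthetic.
import numpy as np

def rigid_transform(P, Q):
    """Return R, t minimizing ||R @ P + t - Q|| for 3xN point sets."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# synthetic check: rotate + translate a point cloud, then recover the motion
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 50))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([[0.01], [0.02], [0.005]])
R_est, t_est = rigid_transform(P, R_true @ P + t_true)
print(np.allclose(R_est, R_true, atol=1e-8))  # True
```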

  20. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI

    PubMed Central

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-01-01

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning. PMID:27007379
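    The KWNN coarse-estimation step described above can be sketched as follows; the fingerprint database layout, the access-point ordering, and the choice of K are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a K Weighted Nearest Neighbor (KWNN) coarse position
# estimate from Wi-Fi RSSI fingerprints. The database maps known (x, y)
# positions to RSSI vectors (dBm) for a fixed ordering of access points.
import math

DB = {
    (0.0, 0.0): [-40, -70, -60],
    (5.0, 0.0): [-55, -50, -65],
    (0.0, 5.0): [-60, -72, -45],
    (5.0, 5.0): [-70, -55, -50],
}

def kwnn(rssi, k=3):
    """Inverse-distance-weighted average of the k nearest fingerprints."""
    # Euclidean distance in signal space to each stored fingerprint
    nearest = sorted((math.dist(rssi, v), pos) for pos, v in DB.items())[:k]
    # weight each neighbor by inverse signal distance (epsilon avoids 1/0)
    w = [1.0 / (d + 1e-6) for d, _ in nearest]
    s = sum(w)
    x = sum(wi * pos[0] for wi, (_, pos) in zip(w, nearest)) / s
    y = sum(wi * pos[1] for wi, (_, pos) in zip(w, nearest)) / s
    return x, y

# a measurement close to the fingerprint at the origin
print(kwnn([-42, -68, -58]))  # estimate lands near (0, 0)
```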

Top