Sample records for camera systems including

  1. Multi-color pyrometry imaging system and method of operating the same

    DOEpatents

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different from the first predetermined wavelength band.

  2. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been a major interest among researchers in computer vision, artificial intelligence, and the biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera-system considerations for HMA specifically in biomedical applications. It is important because it provides guidelines and recommendations for researchers and practitioners selecting a camera system for a biomedical HMA system.

  3. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  4. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
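    The abstract notes that the Geo-TimeCode repeats only after slightly more than 136 years. The encoding itself is not disclosed, but that figure happens to match an unsigned 32-bit seconds counter, an assumption the quick check below illustrates:

```python
# Rollover horizon of a b-bit seconds counter. The Geo-TimeCode encoding is
# not disclosed in the abstract; a 32-bit unsigned seconds field is an
# assumption that happens to match the quoted "slightly more than 136 years".
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

def rollover_years(bits: int) -> float:
    """Years until a `bits`-wide seconds counter wraps around to zero."""
    return (2 ** bits) / SECONDS_PER_YEAR

print(round(rollover_years(32), 2))  # 136.1 -- "slightly more than 136 years"
```

    By contrast, a 24-hour-repeating SMPTE-style code wraps daily, which is why the abstract argues this scheme suits long-term multi-camera deployments.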

  5. Integrated inertial stellar attitude sensor

    NASA Technical Reports Server (NTRS)

    Brady, Tye M. (Inventor); Kourepenis, Anthony S. (Inventor); Wyman, Jr., William F. (Inventor)

    2007-01-01

    An integrated inertial stellar attitude sensor for an aerospace vehicle includes a star camera system, a gyroscope system, a controller system for synchronously integrating an output of said star camera system and an output of said gyroscope system into a stream of data, and a flight computer responsive to said stream of data for determining from the star camera system output and the gyroscope system output the attitude of the aerospace vehicle.

  6. Enhanced technologies for unattended ground sensor systems

    NASA Astrophysics Data System (ADS)

    Hartup, David C.

    2010-04-01

    Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.

  7. Deep Space Positioning System

    NASA Technical Reports Server (NTRS)

    Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)

    2016-01-01

    A single, compact, low-power deep space positioning system (DPS) configured to determine the location of a spacecraft anywhere in the solar system and provide state information relative to the Earth, the Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine the state of a spacecraft in a solar system. The second camera is located behind, or adjacent to, a secondary reflector of the first camera in the body of a telescope.

  8. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George

    1986-01-07

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  9. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J. C.; Rowe, R. Wanda; Zubal, I. George

    1986-01-01

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  10. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
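    The background-elimination step described above (capture a frame before the laser fires, then remove pixels common to the illuminated frame so only the laser spot survives) can be sketched as follows. This is a minimal illustration using plain lists and a brightness threshold; the patent does not specify the actual comparison used, so the threshold and data are assumptions.

```python
def isolate_spot(before, after, threshold=50):
    """Return (row, col) pixels whose brightness rose by more than
    `threshold` between the pre-laser and laser-on frames.
    Frames are 2D lists of grayscale values; differencing removes the
    static background, leaving only the laser spot."""
    spots = []
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            if a - b > threshold:
                spots.append((r, c))
    return spots

# Illustrative 3x3 frames: identical except for the laser spot at (1, 1)
before = [[10, 12, 11], [9, 10, 12], [11, 10, 9]]
after  = [[10, 12, 11], [9, 250, 12], [11, 10, 9]]
print(isolate_spot(before, after))  # [(1, 1)]
```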

  11. Video systems for real-time oil-spill detection

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.

    1973-01-01

    Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint now limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offers potential for color 'flagging' oil on water.

  12. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, so both false positive and false negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying video from each camera in the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe this is the first work that uses a model of human visual attention for dynamic selection of the camera view in a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and a manually generated ground truth. Effectiveness was measured as the number of correct best views generated by the method compared with the camera views selected manually by a human operator.

  13. Automatic lightning detection and photographic system

    NASA Technical Reports Server (NTRS)

    Wojtasinski, R. J.; Holley, L. D.; Gray, J. L.; Hoover, R. B. (Inventor)

    1972-01-01

    A system is presented for monitoring and recording lightning strokes within a predetermined area with a camera having an electrically operated shutter with means for advancing the film in the camera after activating the shutter. The system includes an antenna for sensing lightning strikes which, in turn, generates a signal that is fed to an electronic circuit which generates signals for operating the shutter of the camera. Circuitry is provided for preventing activation of the shutter as the film in the camera is being advanced.

  14. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.

  15. Extreme-UV lithography system

    DOEpatents

    Replogle, William C.; Sweatt, William C.

    2001-01-01

    A photolithography system that employs a condenser that includes a series of aspheric mirrors on one side of a small, incoherent source of radiation producing a series of beams is provided. Each aspheric mirror images the quasi point source into a curved line segment. A relatively small arc of the ring image is needed by the camera; all of the beams are so manipulated that they all fall onto this same arc needed by the camera. Also, all of the beams are aimed through the camera's virtual entrance pupil. The condenser includes a correcting mirror for reshaping a beam segment which improves the overall system efficiency. The condenser efficiently fills the larger radius ringfield created by today's advanced camera designs. The system further includes (i) means for adjusting the intensity profile at the camera's entrance pupil or (ii) means for partially shielding the illumination imaging onto the mask or wafer. The adjusting means can, for example, change at least one of: (i) partial coherence of the photolithography system, (ii) mask image illumination uniformity on the wafer or (iii) centroid position of the illumination flux in the entrance pupil. A particularly preferred adjusting means includes at least one vignetting mask that covers at least a portion of the at least two substantially equal radial segments of the parent aspheric mirror.

  16. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
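    The stereometric ranging analysis the abstract describes reduces to classical triangulation: with the two cameras separated by a baseline B and sharing focal length f, a horizontal disparity d between the processed left and right images gives range Z = f·B/d. A minimal sketch with purely illustrative numbers (the patent gives no values):

```python
def range_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classical stereo triangulation: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal offset of the laser spot between the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (target in front of cameras)")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 12 cm baseline, 16 px disparity
print(range_from_disparity(800, 0.12, 16))  # 6.0 metres
```

    Note the inverse relationship: halving the disparity doubles the estimated range, which is why range precision degrades for distant targets.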

  17. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  18. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.
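    The ΔE figure quoted above is a CIE color-difference metric. The abstract does not say which variant the authors used; as a reference point, the simplest one (CIE76) is the Euclidean distance between two colors in CIELAB space, which illustrates how small ΔE < 0.01 is:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    A delta-E around 2.3 is often cited as a just-noticeable difference,
    so delta-E < 0.01 is far below what human observers can perceive."""
    return math.dist(lab1, lab2)

# Two illustrative CIELAB colors differing well below visual threshold
c1 = (50.0, 10.0, 10.0)
c2 = (50.0, 10.006, 10.008)
print(round(delta_e_cie76(c1, c2), 4))  # 0.01
```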

  19. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system capable of tracking and providing the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during landing. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that the infrared camera array system can guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of the system is more than 1000 m. The experimental results also demonstrate that the system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

  20. Control system for several rotating mirror camera synchronization operation

    NASA Astrophysics Data System (ADS)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (comprising the synchronization, precise measurement and time-delay parts), the shutter control unit, the motor driving unit and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating-mirror cameras (driven by a gas turbine). We have obtained films of the same object from different directions at the same or different speeds.

  1. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are broadly used in electro-optical tracking, electro-optical measurement, fire control and electro-optical countermeasures, but the output timing sequence of most IR cameras applied in practice is complex, and the sequence documents supplied by the manufacturer are not detailed. Because downstream systems for continuous image transmission and image processing need the detailed sequence of the IR camera, a sequence measurement system for the IR camera was designed and a detailed procedure for measuring the sequence of the applied IR camera was carried out. FPGA programming combined with online observation via the SignalTap tool is applied in the sequence measurement system; the precise sequence of the IR camera's output signal is obtained, and detailed documentation is supplied to the continuous image transmission system, image processing system, etc. The sequence measurement system includes a CameraLink input interface, an LVDS input interface, an FPGA and a CameraLink output interface, of which the FPGA is the key component. The system accepts video signals in both CameraLink and LVDS formats; because image processing and image memory cards commonly use CameraLink as their input interface, the output of the sequence measurement system is likewise a CameraLink interface. The system thus performs the sequence measurement and, for some cameras, interface conversion at the same time. Inside the FPGA, the sequence measurement program, pixel clock modification, SignalTap file configuration and SignalTap online observation are integrated to realize precise measurement of the IR camera.
The sequence measurement program, written in Verilog and combined with SignalTap online observation, can count the number of lines in one frame and the number of pixels in one line, and can also determine the line offset and row offset of the image. For the complex sequence of the IR camera's output signal, the system accurately measures the sequence of the camera applied in the project, supplies a detailed sequence document to downstream systems such as the image processing and image transmission systems, and gives concrete parameters for fval, lval, pixclk, line offset and row offset. Experiments show that the sequence measurement system obtains precise measurement results and works stably, laying a foundation for the downstream systems.

  2. Engineering study for pallet adapting the Apollo laser altimeter and photographic camera system for the Lidar Test Experiment on orbital flight tests 2 and 4

    NASA Technical Reports Server (NTRS)

    Kuebert, E. J.

    1977-01-01

    A Laser Altimeter and Mapping Camera System was included in the Apollo Lunar Orbital Experiment Missions. The backup system, never used in the Apollo Program, is available for use in the Lidar Test Experiments on the STS Orbital Flight Tests 2 and 4. Studies were performed to assess the problem associated with installation and operation of the Mapping Camera System in the STS. They were conducted on the photographic capabilities of the Mapping Camera System, its mechanical and electrical interface with the STS, documentation, operation and survivability in the expected environments, ground support equipment, test and field support.

  3. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprising a main aperture lens, a mechanical structure that holds an array of microlenses at the focal distance from the main lens, and a structure that mounts a CMOS imaging sensor at the correct distance from the microlenses. The demonstrator also featured embedded electronics for camera readout and a post-processor executing image-processing algorithms to provide ranging information.

  4. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  5. [Microeconomics of introduction of a PET system based on the revised Japanese National Insurance reimbursement system].

    PubMed

    Abe, Katsumi; Kosuda, Shigeru; Kusano, Shoichi; Nagata, Masayoshi

    2003-11-01

    It is crucial to evaluate the annual balance beforehand when an institution installs a PET system, because the revised Japanese national insurance reimbursement system sets the fee for an FDG PET study at 75,000 yen. Break-even points were calculated for 8-hour and 24-hour operation of a PET system, based on reported total costs. In an ordinary 8-hour operation, the break-even points were 13.4, 17.7 and 22.1 studies per day for systems with one cyclotron and one, two or three PET cameras, respectively. In a full 24-hour operation, the corresponding break-even points were 19.9, 25.5 and 31.2 studies per day. The results indicate that no profit would accrue in an ordinary 8-hour operation. With one cyclotron and three PET cameras operating 24 hours, the annual profit would be 530 million yen and the break-even point for the total cost, including the initial investment, would be reached in 2.8 years.
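    The break-even arithmetic is straightforward once a daily operating cost is assumed: at 75,000 yen reimbursed per study, break-even studies per day equal daily cost divided by 75,000. The cost figure below is reverse-engineered from the reported 13.4 studies/day for the one-cyclotron, one-camera configuration and is purely illustrative, not a figure from the paper:

```python
REIMBURSEMENT_YEN = 75_000  # national insurance fee per FDG PET study

def break_even_studies_per_day(daily_cost_yen: float) -> float:
    """Studies per day needed for reimbursement revenue to cover
    the daily operating cost of the PET system."""
    return daily_cost_yen / REIMBURSEMENT_YEN

# Hypothetical daily cost implied by the reported 13.4 studies/day
# for the 1 cyclotron / 1 PET camera configuration (an assumption):
print(round(break_even_studies_per_day(1_005_000), 1))  # 13.4
```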

  6. Wired and Wireless Camera Triggering with Arduino

    NASA Astrophysics Data System (ADS)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  7. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three-dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response. The movement of the camera pair is predetermined by a software map having a plurality of operation zones, each corresponding to unique camera movement parameters such as speed of movement; the speed may be constant, increasing or decreasing, and other parameters include panning, tilting, sliding, raising or lowering of the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment, including a pair of visual display glasses, a microphone and a remote actuator. The visual display glasses facilitate three-dimensional viewing, hence depth perception. The microphone affords hands-free camera movement via voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.

  8. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow-H10x8.5) and (2) implementation on an embedded platform. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, and the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern at a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
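The interpolation of calibration parameters between pre-defined zoom positions can be sketched as below. The paper does not specify its interpolation scheme; piecewise-linear interpolation, and all the zoom positions and focal-length values here, are assumptions for illustration only.

```python
# Illustrative sketch: interpolating a calibrated camera parameter (a focal
# length in pixels) between zoom positions calibrated offline. The scheme
# (piecewise-linear) and all numbers are assumptions, not the paper's values.

def interpolate_param(zoom, calibrated):
    """Piecewise-linear interpolation over (zoom_position, value) pairs."""
    pts = sorted(calibrated)
    if zoom <= pts[0][0]:
        return pts[0][1]                 # clamp below the calibrated range
    if zoom >= pts[-1][0]:
        return pts[-1][1]                # clamp above the calibrated range
    for (z0, v0), (z1, v1) in zip(pts, pts[1:]):
        if z0 <= zoom <= z1:
            t = (zoom - z0) / (z1 - z0)
            return v0 + t * (v1 - v0)

calib = [(1, 800.0), (2, 1500.0), (4, 2900.0), (8, 5600.0)]  # zoom -> focal px
print(interpolate_param(3, calib))       # midway between 1500 and 2900 -> 2200.0
```

The same lookup would be applied per parameter (focal length, principal point, distortion coefficients), with the caveat that some parameters vary nonlinearly with zoom and may need a denser set of calibrated positions.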

  9. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  10. FieldSAFE: Dataset for Obstacle Detection in Agriculture.

    PubMed

    Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm

    2017-11-09

In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.

  11. FieldSAFE: Dataset for Obstacle Detection in Agriculture

    PubMed Central

    Christiansen, Peter; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik

    2017-01-01

In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates. PMID:29120383

  12. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements and employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, a circuit description and changes to such circuits as a result of integration and test, a maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  13. On-line, continuous monitoring in solar cell and fuel cell manufacturing using spectral reflectance imaging

    DOEpatents

    Sopori, Bhushan; Rupnowski, Przemyslaw; Ulsh, Michael

    2016-01-12

    A monitoring system 100 comprising a material transport system 104 providing for the transportation of a substantially planar material 102, 107 through the monitoring zone 103 of the monitoring system 100. The system 100 also includes a line camera 106 positioned to obtain multiple line images across a width of the material 102, 107 as it is transported through the monitoring zone 103. The system 100 further includes an illumination source 108 providing for the illumination of the material 102, 107 transported through the monitoring zone 103 such that light reflected in a direction normal to the substantially planar surface of the material 102, 107 is detected by the line camera 106. A data processing system 110 is also provided in digital communication with the line camera 106. The data processing system 110 is configured to receive data output from the line camera 106 and further configured to calculate and provide substantially contemporaneous information relating to a quality parameter of the material 102, 107. Also disclosed are methods of monitoring a quality parameter of a material.

  14. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge-coupled device. The camera consists of an X-ray-sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  15. Variation in detection among passive infrared triggered-cameras used in wildlife research

    USGS Publications Warehouse

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.

  16. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.

  17. Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.

    PubMed

    Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A

    2001-01-01

    This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.

  18. OpenCV and TYZX : video surveillance for tracking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jim; Spencer, Andrew; Chu, Eric

    2008-08-01

As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.

  19. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of megapixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, prosumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
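Two of the sensor figures of merit this abstract mentions, dynamic range and full-well capacity scaled from a per-area value, reduce to short formulas. The sketch below is a back-of-envelope illustration; the pixel pitch, per-area capacity, and read noise are assumed values, not figures from the paper.

```python
# Back-of-envelope sketch of two sensor figures of merit: dynamic range from
# full-well capacity and read noise, and full-well electrons for a square
# pixel given an intrinsic capacity per cm^2. All numbers are assumptions.

import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB: 20*log10(full well / read noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

def full_well_electrons(pixel_pitch_um, capacity_e_per_cm2):
    """Full well for a square pixel from a per-area capacity."""
    area_cm2 = (pixel_pitch_um * 1e-4) ** 2    # convert um to cm, then square
    return capacity_e_per_cm2 * area_cm2

fw = full_well_electrons(5.0, 1.6e11)   # 5 um pixel, 1.6e11 e-/cm^2 (assumed)
print(round(fw))                        # -> 40000 electrons
print(round(dynamic_range_db(fw, 8.0), 1))   # -> 74.0 dB with 8 e- read noise
```

This is why pixel count alone is a poor proxy for image quality: shrinking pixels to raise the megapixel count reduces the per-pixel full well and, for fixed read noise, the dynamic range.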

  20. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2015-10-01

    the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external

  1. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card.
These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  2. Computerized digital dermoscopy.

    PubMed

    Gewirtzman, A J; Braun, R P

    2003-01-01

    Within the past 15 years, dermoscopy has become a widely used non-invasive technique for physicians to better visualize pigmented lesions. Dermoscopy has helped trained physicians to better diagnose pigmented lesions. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to get second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products begins to grow, so do the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments which can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated as well as the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.

  3. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera is controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser," which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera are sent to logic circuits and processed into corrections for motion along and across line of sight.

  4. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

A multispectral intensified CCD imager combined with a ring-laser-gyroscope-based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transportation System Flight No. 70, STS-70). The camera includes a six-position filter wheel, a third-generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.

  5. The California All-sky Meteor Surveillance (CAMS) System

    NASA Astrophysics Data System (ADS)

    Gural, P. S.

    2011-01-01

    A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.

  6. Characterization of lens based photoacoustic imaging system.

    PubMed

    Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2017-12-01

Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics in a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over a 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  7. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

Traditionally, scientific camera systems were partitioned with a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.

  8. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.

    PubMed

    Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying

    2018-01-15

From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
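The rigid-transformation baseline this abstract compares against can be made concrete with a few lines: map points from one camera frame into another with a rotation R and translation t, and score the extrinsics by mean 3D error against reference points. The rotation, translation, and point sets below are invented for illustration, and this sketch only evaluates a given transform rather than estimating one.

```python
# Minimal sketch of the rigid-transformation view model used in extrinsic
# calibration: y = R @ p + t, scored by mean 3D error. R (a rotation about
# the z-axis), t, and the point sets are illustrative assumptions.

import math

def apply_rigid(R, t, p):
    """Apply a 3x3 rotation (nested lists) and 3-vector translation to p."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def mean_3d_error(R, t, src, dst):
    """Mean Euclidean distance between transformed src and reference dst."""
    errs = [math.dist(apply_rigid(R, t, p), q) for p, q in zip(src, dst)]
    return sum(errs) / len(errs)

a = math.radians(90)
R = [[math.cos(a), -math.sin(a), 0],
     [math.sin(a),  math.cos(a), 0],
     [0,            0,           1]]
t = [1.0, 0.0, 0.5]
src = [[1, 0, 0], [0, 1, 0]]
dst = [apply_rigid(R, t, p) for p in src]   # perfectly consistent toy data
print(mean_3d_error(R, t, src, dst))        # 0.0 by construction
```

The paper's point is that for non-pinhole depth sensors this rigid model can be too restrictive, which is why it also evaluates polynomial and manifold-regression mappings under the same kind of 3D error metric.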

  9. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †

    PubMed Central

    Shen, Ju; Xu, Wanxin; Luo, Ying

    2018-01-01

From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968

  10. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
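The triangulation step in TSPI reduction can be sketched in two dimensions: each tracking mount at a known position reports an azimuth to the object, and the position estimate is the intersection of the bearing rays. The station positions and bearings below are invented; real reductions also use elevation angles and time alignment.

```python
# Hypothetical 2D sketch of TSPI triangulation from two tracking stations.
# Station positions and azimuths (measured from the +x axis) are made up.

import math

def triangulate(p1, az1, p2, az2):
    """Intersect rays from p1 and p2 with azimuths given in radians."""
    d1 = (math.cos(az1), math.sin(az1))
    d2 = (math.cos(az2), math.sin(az2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product of directions
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    s = (dx * d2[1] - dy * d2[0]) / denom   # distance along ray 1
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])

# Object at (0, 1000): station A at the origin sees it at 90 degrees,
# station B at (1000, 0) sees it at 135 degrees.
x, y = triangulate((0, 0), math.radians(90), (1000, 0), math.radians(135))
print(round(x, 3), round(y, 3))   # -> 0.0 1000.0
```

This also shows why the IRIG-B phase lock matters: if the two sightings are not simultaneous, a fast-moving object is at different positions in the two bearings and the intersection is biased, which is the "time-distance delta" the abstract quantifies.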

  11. Automation of the targeting and reflective alignment concept

    NASA Technical Reports Server (NTRS)

    Redfield, Robin C.

    1992-01-01

    The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.

  12. Processing the Viking lander camera data

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.

    1977-01-01

    Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.

  13. New generation of meteorology cameras

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. The new generation of weather-monitoring cameras responds to the demand for detecting sudden weather changes: the new WILLIAM cameras process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric-pressure sensors. In this paper, we present the architecture and image-data-processing algorithms of this monitoring camera, along with a spatially variant model of the imaging system's aberrations based on Zernike polynomials.

  14. Highly Portable Airborne Multispectral Imaging System

    NASA Technical Reports Server (NTRS)

    Lehnemann, Robert; Mcnamee, Todd

    2001-01-01

    A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited to remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.

  15. Issues in implementing services for a wireless web-enabled digital camera

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

    The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that, increasingly, it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-Appliance-based Communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services, as well as the need to download software upgrades to the camera, is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than a competency in product development. This paper evaluates the system implications of deploying recurring-revenue services and enterprise connectivity for a wireless, web-enabled digital camera, including an architectural design approach for services such as device management, synchronization, billing, connectivity, and security. We hope such an evaluation will assist anyone designing or connecting a digital camera to enterprise systems.

  16. Next-generation digital camera integration and software development issues

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

    This paper investigates the complexities associated with the development of next-generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality, and interoperability features, driven by advances in research, silicon, standards, and other areas. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market: the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed, and the real-time operating system. This paper presents the LSI DCAM-101, a single-chip digital camera solution, with an overview of its architecture and the hardware and software challenges of supporting streaming video on such a complex device. Issues presented include the development of the data-flow software architecture, and testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture is also presented.

  17. Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware

    NASA Astrophysics Data System (ADS)

    Kang, Y.-W.; Byun, Y. I.; Rhee, J. H.; Oh, S. H.; Kim, D. K.

    2007-12-01

    We designed and developed a multi-purpose CCD camera system for three kinds of CCDs made by the KODAK Co.: KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472). The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include the clock-pattern circuit, A/D conversion circuit, CCD data-flow control circuit, and CCD temperature-control unit. The CCD temperature can be controlled to an accuracy of approximately 0.4°C over a maximum temperature range of Δ33°C. The camera system has a readout noise of 6 e⁻ and a system gain of 5 e⁻/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them perform acceptably.
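
The reported system gain and readout noise together determine the per-pixel signal-to-noise ratio. A minimal sketch using the figures above, ignoring dark current and flat-field error (which the abstract does not quantify):

```python
import math

GAIN_E_PER_ADU = 5.0   # system gain reported for this camera
READ_NOISE_E = 6.0     # readout noise in electrons (rms)

def snr_from_adu(adu):
    """SNR for a pixel reading `adu` counts, combining photon shot noise
    (variance equals the signal in electrons) with readout noise."""
    signal_e = adu * GAIN_E_PER_ADU
    noise_e = math.sqrt(signal_e + READ_NOISE_E ** 2)
    return signal_e / noise_e
```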

  18. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  19. Distributed Sensing and Processing for Multi-Camera Networks

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  20. Taking the Observatory to the Astronomer

    NASA Astrophysics Data System (ADS)

    Bisque, T. M.

    1997-05-01

    Since 1992, Software Bisque's Remote Astronomy Software has been used by the Mt. Wilson Institute to allow interactive control of a 24" telescope and digital camera via modem. Software Bisque now introduces a comparable, relatively low-cost observatory system that allows powerful, yet "user-friendly" telescope and CCD camera control via the Internet. Utilizing software developed for the Windows 95/NT operating systems, the system offers point-and-click access to comprehensive celestial databases, extremely accurate telescope pointing, rapid download of digital CCD images by one or many users, and flexible image-processing software for data reduction and analysis. Our presentation will describe how the power of the personal computer has been leveraged to provide professional-level tools to the amateur astronomer, and include a description of this system's software and hardware components. The system software includes TheSky Astronomy Software™, CCDSoft CCD Astronomy Software™, TPoint Telescope Pointing Analysis System™ software, Orchestrate™ and, optionally, the RealSky CDs. The system hardware includes the Paramount GT-1100™ Robotic Telescope Mount, as well as third-party CCD cameras, focusers, and optical tube assemblies.

  1. Cameras for semiconductor process control

    NASA Technical Reports Server (NTRS)

    Porter, W. A.; Parker, D. L.

    1977-01-01

    The application of X-ray topography to semiconductor process control is described, considering the novel features of the high-speed camera and the difficulties associated with this technique. The most significant results on the effects of material defects on device performance are presented, including results obtained using wafers processed entirely within this institute. Defects were identified using the X-ray camera and correlated with probe data. Temperature-dependent effects of material defects are also included. Recent applications and improvements of X-ray topography of silicon-on-sapphire and gallium arsenide are presented, with a description of a real-time TV system prototype and of the most recent vacuum-chuck design. Our promotion of the camera's use among various semiconductor manufacturers is also discussed.

  2. Optical Meteor Systems Used by the NASA Meteoroid Environment Office

    NASA Technical Reports Server (NTRS)

    Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.

    2015-01-01

    The NASA Meteoroid Environment Office (MEO) uses two main meteor-camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system, to study cm- and mm-size meteors respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor-detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for determining speed distributions, characterizing meteor showers and sporadic sources, and informing the public about bright meteor events. The NASA Wide Field Meteor Network was established in December 2012 with two cameras and expanded to eight cameras in December 2014. The two-camera configuration saw 5470 meteors over two years of operation, and the eight-camera network detected 3423 meteors in its first five months of operation (Dec 12, 2014 - May 12, 2015). We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20-degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis, and every morning the server automatically generates an e-mail and a web page containing an analysis of the previous night's events. The current status of the networks will be described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, will be discussed.

  3. Enviropod handbook: A guide to preparation and use of the Environmental Protection Agency's light-weight aerial camera system. [Weber River, Utah

    NASA Technical Reports Server (NTRS)

    Brower, S. J.; Ridd, M. K.

    1984-01-01

    The use of the Environmental Protection Agency (EPA) Enviropod camera system is detailed in this handbook which contains a step-by-step guide for mission planning, flights, film processing, indexing, and documentation. Information regarding Enviropod equipment and specifications is included.

  4. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping.

    PubMed

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-12-31

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at the sensor level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher-level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable.
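
The ideal spherical model mentioned above maps a 3D point in the panorama frame to equirectangular pixel coordinates. The sketch below assumes a particular axis convention (x east, y forward, z up) and ignores the per-sensor offsets that the paper's rigorous panoramic model accounts for:

```python
import math

def spherical_project(pt, width, height):
    """Map a 3D point in the panoramic camera frame to equirectangular pixel
    coordinates under the ideal spherical model (single projection center)."""
    x, y, z = pt
    lon = math.atan2(x, y)                    # azimuth about the vertical axis
    lat = math.atan2(z, math.hypot(x, y))     # elevation above the horizon
    u = (lon / (2 * math.pi) + 0.5) * width   # column: -pi..pi spans the image
    v = (0.5 - lat / math.pi) * height        # row: +pi/2 (zenith) maps to 0
    return u, v
```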

  5. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping

    PubMed Central

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-01-01

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable. PMID:28042855

  6. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  7. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  8. Recent developments in space shuttle remote sensing, using hand-held film cameras

    NASA Technical Reports Server (NTRS)

    Amsbury, David L.; Bremer, Jeffrey M.

    1992-01-01

    The authors report on the advantages and disadvantages of a number of camera systems currently employed for space shuttle remote sensing operations. Systems discussed include the modified Hasselblad, the Rolleiflex 6008, the Linhof 5-inch-format system, and the Nikon F3/F4 systems. Film/filter combinations (color positive films, color infrared films, color negative films, and polarization filters) are presented.

  9. American Carrier Air Power at the Dawn of a New Century

    DTIC Science & Technology

    2005-01-01

    Systems, Office of the Secretary of Defense (Operational Test and Evaluation); then–Commander Calvin Craig, OPNAV N81; Captain Kenneth Neubauer and...TACP Tactical Air Control Party TARPS Tactical Air Reconnaissance Pod System TCS Television Camera System TLAM Tomahawk Land-Attack Missile TST Time...store any video imagery acquired by the aircraft's systems, including the TARPS pod, the pilot's head-up display (HUD), the Television Camera System (TCS)

  10. Photodetectors for the Advanced Gamma-ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Wagner, Robert G.; Advanced Gamma-ray Imaging System AGIS Collaboration

    2010-03-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation very-high-energy gamma-ray observatory. Design goals include an order of magnitude better sensitivity, better angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. Given the scale of AGIS, the camera must be reliable and cost-effective. The Schwarzschild-Couder optical design yields a smaller plate scale than present-day Cherenkov telescopes, enabling the use of more compact, multi-pixel devices, including multianode photomultipliers or Geiger avalanche photodiodes. We present the conceptual design of the focal plane for the camera and results from testing candidate focal-plane sensors.

  11. The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.

    PubMed

    Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco

    2015-01-01

    Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.

  12. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  13. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
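
The time-of-flight principle behind such cameras is simple to state: depth is half the round-trip distance of light, or, for continuous-wave sensors, is encoded in the phase shift of the modulation envelope. A generic sketch of both relations (not the author's specific camera designs):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s):
    """Pulsed time-of-flight: depth is half the round-trip distance."""
    return C * round_trip_s / 2

def cw_tof_depth_m(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: the phase shift of the modulation envelope encodes
    depth, unambiguous only within c / (2 * f_mod)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)
```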

  14. Servo-controlled intravital microscope system

    NASA Technical Reports Server (NTRS)

    Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)

    1975-01-01

    A microscope system is described for viewing an area of living body tissue that is rapidly moving, by maintaining the same area in the field of view and in focus. A focus-sensing portion of the system includes two video cameras onto which the viewed image is projected, one camera slightly in front of the image plane and the other slightly behind it. A focus-sensing circuit for each camera differentiates certain high-frequency components of the video signal, then detects them and passes them through a low-pass filter to provide a DC focus signal whose magnitude represents the degree of focus. An error signal, equal to the difference between the two focus signals, drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
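
The focus-sensing chain (differentiate, detect, low-pass, difference) can be mimicked digitally on a single scan line. In this hedged sketch, absolute first differences stand in for the analog differentiator and detector, their mean for the low-pass filter, and the error signal is the difference of the two cameras' focus signals; the names and gain are illustrative:

```python
def focus_signal(scanline):
    """Differentiate, detect (rectify), and low-pass (average) one scan line:
    sharper images have more high-frequency energy, hence a larger signal."""
    diffs = [abs(b - a) for a, b in zip(scanline, scanline[1:])]
    return sum(diffs) / len(diffs)

def servo_step(front_line, behind_line, gain=1.0):
    """Error signal driving the objective: zero when the image plane sits
    midway between the two sensing cameras, i.e. at the viewing camera."""
    return gain * (focus_signal(front_line) - focus_signal(behind_line))
```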

  15. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  16. Systems and methods for maintaining multiple objects within a camera field-of-view

    DOEpatents

    Gans, Nicholas R.; Dixon, Warren

    2016-03-15

    In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects; identifying a priority rank for the constraints such that more important constraints have a higher priority than less important constraints; and determining the set of solutions that satisfy the constraints relative to the order of their priority rank, such that solutions that satisfy lower-ranking constraints are only considered viable if they also satisfy any higher-ranking constraints, each solution providing an indication as to how to control the camera to maintain the objects within the camera field of view.
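
The priority-ranked filtering described in the claim can be sketched as a lexicographic filter over candidate camera motions. One detail below is an assumption not stated in the abstract: if a lower-ranked constraint would eliminate every remaining candidate, it is relaxed rather than allowed to override higher-ranked constraints.

```python
def viable_solutions(candidates, constraints):
    """Filter candidates by constraints in descending priority order: a solution
    satisfying a lower-ranked constraint counts only if it also satisfies every
    higher-ranked one. A constraint that would leave no viable solution is
    skipped (an assumed relaxation policy, not from the patent text)."""
    viable = list(candidates)
    for constraint in constraints:      # ordered highest priority first
        kept = [c for c in viable if constraint(c)]
        if kept:
            viable = kept
    return viable
```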

  17. Data Acquisition System of Nobeyama MKID Camera

    NASA Astrophysics Data System (ADS)

    Nagai, M.; Hisamatsu, S.; Zhai, G.; Nitta, T.; Nakai, N.; Kuno, N.; Murayama, Y.; Hattori, S.; Mandal, P.; Sekimoto, Y.; Kiuchi, H.; Noguchi, T.; Matsuo, H.; Dominjon, A.; Sekiguchi, S.; Naruse, M.; Maekawa, J.; Minamidani, T.; Saito, M.

    2018-05-01

    We are developing a superconducting camera based on microwave kinetic inductance detectors (MKIDs) to observe 100-GHz continuum with the Nobeyama 45-m telescope. A data acquisition (DAQ) system for the camera has been designed to operate the MKIDs with the telescope. This system is required to connect the telescope control system (COSMOS) to the readout system of the MKIDs (MKID DAQ), which employs the frequency-sweeping probe scheme. The DAQ system is also required to record the reference signal of the beam switching for demodulation by the analysis pipeline in order to suppress the sky fluctuation. The system has to be able to merge and save all data acquired both by the camera and by the telescope, including the cryostat temperature and pressure and the telescope pointing. A collection of software that implements these functions and works as a TCP/IP server on a workstation was developed. The server accepts commands and observation scripts from COSMOS and then issues commands to MKID DAQ to configure and start data acquisition. We commissioned the MKID camera on the Nobeyama 45-m telescope and obtained successful scan signals of the atmosphere and of the Moon.
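
The requirement to merge and save all camera and telescope data suggests time-ordered stream merging on the DAQ server; a minimal sketch with invented record formats (timestamp-first tuples), not the actual COSMOS or MKID DAQ data layout:

```python
import heapq

def merge_streams(camera_records, telescope_records):
    """Merge two timestamped record streams (each already time-ordered) into
    one time-ordered log, as the DAQ server must do before saving."""
    return list(heapq.merge(camera_records, telescope_records,
                            key=lambda rec: rec[0]))
```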

  18. Water Plume Temperature Measurements by an Unmanned Aerial System (UAS)

    PubMed Central

    DeMario, Anthony; Lopez, Pete; Plewka, Eli; Wix, Ryan; Xia, Hai; Zamora, Emily; Gessler, Dan; Yalin, Azer P.

    2017-01-01

    We report on the development and testing of a proof of principle water temperature measurement system deployed on an unmanned aerial system (UAS), for field measurements of thermal discharges into water. The primary elements of the system include a quad-copter UAS to which has been integrated, for the first time, both a thermal imaging infrared (IR) camera and an immersible probe that can be dipped below the water surface to obtain vertical water temperature profiles. The IR camera is used to take images of the overall water surface to geo-locate the plume, while the immersible probe provides quantitative temperature depth profiles at specific locations. The full system has been tested including the navigation of the UAS, its ability to safely carry the sensor payload, and the performance of both the IR camera and the temperature probe. Finally, the UAS sensor system was successfully deployed in a pilot field study at a coal burning power plant, and obtained images and temperature profiles of the thermal effluent. PMID:28178215

  19. Water Plume Temperature Measurements by an Unmanned Aerial System (UAS).

    PubMed

    DeMario, Anthony; Lopez, Pete; Plewka, Eli; Wix, Ryan; Xia, Hai; Zamora, Emily; Gessler, Dan; Yalin, Azer P

    2017-02-07

    We report on the development and testing of a proof of principle water temperature measurement system deployed on an unmanned aerial system (UAS), for field measurements of thermal discharges into water. The primary elements of the system include a quad-copter UAS to which has been integrated, for the first time, both a thermal imaging infrared (IR) camera and an immersible probe that can be dipped below the water surface to obtain vertical water temperature profiles. The IR camera is used to take images of the overall water surface to geo-locate the plume, while the immersible probe provides quantitative temperature depth profiles at specific locations. The full system has been tested including the navigation of the UAS, its ability to safely carry the sensor payload, and the performance of both the IR camera and the temperature probe. Finally, the UAS sensor system was successfully deployed in a pilot field study at a coal burning power plant, and obtained images and temperature profiles of the thermal effluent.

  20. Rugged Video System For Inspecting Animal Burrows

    NASA Technical Reports Server (NTRS)

    Triandafils, Dick; Maples, Art; Breininger, Dave

    1992-01-01

    Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.

  1. Lock-In Imaging System for Detecting Disturbances in Fluid

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); Choi, Sang Hyouk (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor); Dimarcantonio, Albert L. (Inventor)

    2014-01-01

    A lock-in imaging system is configured for detecting a disturbance in air. The system includes an airplane, an interferometer, and a telescopic imaging camera. The airplane includes a fuselage and a pair of wings. The airplane is configured for flight in air. The interferometer is operatively disposed on the airplane and configured for producing an interference pattern by splitting a beam of light into two beams along two paths and recombining the two beams at a junction point in a front flight path of the airplane during flight. The telescopic imaging camera is configured for capturing an image of the beams at the junction point. The telescopic imaging camera is configured for detecting the disturbance in air in an optical path, based on an index of refraction of the image, as detected at the junction point.

  2. Cable and Line Inspection Mechanism

    NASA Technical Reports Server (NTRS)

    Ross, Terence J. (Inventor)

    2003-01-01

    An automated cable and line inspection mechanism visually scans the entire surface of a cable as the mechanism travels along the cable's length. The mechanism includes a drive system, a video camera, a mirror assembly for providing the camera with a 360 degree view of the cable, and a laser micrometer for measuring the cable's diameter. The drive system includes an electric motor and a plurality of drive wheels and tension wheels for engaging the cable or line to be inspected, and driving the mechanism along the cable. The mirror assembly includes mirrors that are positioned to project multiple images of the cable on the camera lens, each of which is of a different portion of the cable. A data transceiver and a video transmitter are preferably employed for transmission of video images, data and commands between the mechanism and a remote control station.

  4. A Normal Incidence X-ray Telescope (NIXT) sounding rocket payload

    NASA Technical Reports Server (NTRS)

    Golub, Leon

    1989-01-01

    Work on the High Resolution X-ray (HRX) Detector Program is described. For the laboratory and flight programs, multiple copies of a general-purpose set of electronics that control the camera, signal processing, and data acquisition were constructed. A typical system consists of a phosphor converter, an image intensifier, a fiber-optics coupler, a charge-coupled device (CCD) readout, and a set of camera, signal-processing and memory electronics. An initial rocket detector prototype camera was tested in flight and performed perfectly. An advanced prototype detector system was incorporated on another rocket flight, in which a high-resolution heterojunction vidicon tube was used as the readout device for the H(alpha) telescope. The camera electronics for this tube were built in-house and included in the flight electronics. Performance of this detector system was 100 percent satisfactory. The laboratory X-ray system for operation on the ground is also described.

  5. Product Plan of New Generation System Camera "OLYMPUS PEN E-P1"

    NASA Astrophysics Data System (ADS)

    Ogawa, Haruo

    "OLYMPUS PEN E-P1", a new-generation system camera, is the first Olympus product built to the new "Micro Four-Thirds System" standard for high-resolution mirrorless cameras. It has sold well since its release on July 3, 2009, on the strength of its concept of "small and stylish design, easy operation, and SLR image quality". The half-size film camera "OLYMPUS PEN", popular for its concept of "small and stylish design and original mechanism" since the first model in 1959, recorded sales of more than 17 million units across 17 models. Thanks to the 50th-anniversary story and the emotional value of the Olympus Pen brand, the Olympus Pen E-P1 achieved strong sales. I explain the thinking behind a product plan that included not only simple functional value but also emotional value in planning the first "Micro Four-Thirds System" product.

  6. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e., a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  7. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full-custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. Each system includes the camera assembly, a driver interface assembly, and a frame grabber board with integrated decimator and Windows 2000 compatible software for real-time image display. An array size of 320x240 with 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing performance superior to CMOS, and cost and power consumption lower than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  8. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system having high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each highly transparent pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC connected to the camera via Gigabit Ethernet.

  9. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Acquisition data and treatments for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality-control parameters, this program includes 1) for gamma cameras: a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems: three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (from a Point Spread Function, PSF, acquisition) makes it possible to compute the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox that is a free ImageJ plugin and will soon be downloadable from the Internet. The program also offers the possibility of saving the uniformity quality-control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality-control program. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
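
    The MTF determination from a PSF acquisition described above can be sketched compactly. The snippet below is a hypothetical Python illustration only (the actual toolkit is an ImageJ plugin written in JAVA); it assumes a 1-D line profile through the measured PSF:

    ```python
    import numpy as np

    def mtf_from_psf(psf):
        """1-D Modulation Transfer Function from a line profile through
        a measured Point Spread Function (PSF)."""
        psf = np.asarray(psf, dtype=float)
        psf = psf / psf.sum()      # normalize total signal to 1
        otf = np.fft.fft(psf)      # optical transfer function
        mtf = np.abs(otf)          # modulus of the OTF gives the MTF
        return mtf / mtf[0]        # unity response at zero frequency

    # A broader PSF (worse spatial resolution) rolls off faster in frequency.
    x = np.arange(-32, 32)
    narrow_psf = np.exp(-x**2 / (2 * 2.0**2))
    broad_psf = np.exp(-x**2 / (2 * 6.0**2))
    ```

    The FFT phase is discarded by taking the modulus, so the profile need not be centered before transforming.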

  10. Synthetic depth data creation for sensor setup planning and evaluation of multi-camera multi-person trackers

    NASA Astrophysics Data System (ADS)

    Pattke, Marco; Martin, Manuel; Voit, Michael

    2017-05-01

    Tracking people with cameras in public areas is common today. However, with an increasing number of cameras it becomes ever harder to review the data manually. Especially in safety-critical areas, automatic image exploitation could help solve this problem. Setting up such a system can, however, be difficult because of its increased complexity. Sensor placement is critical to ensure that people are detected and tracked reliably. We approach this problem with a simulation framework that can simulate different camera setups in the desired environment, including animated characters. We combine this framework with our self-developed distributed and scalable system for people tracking to test its effectiveness, and we can show the results of the tracking system in real time in the simulated environment.

  11. Thermal infrared panoramic imaging sensor

    NASA Astrophysics Data System (ADS)

    Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-05-01

    Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases the reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8-14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets.
The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, an optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.

  12. Motionless active depth from defocus system using smart optics for camera autofocus applications

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.

  13. Refining the Workflow of UV Camera Measurements: Data Collection from Low Emission Rate Volcanoes under Variable Conditions

    NASA Astrophysics Data System (ADS)

    Brewer, I. D.; Werner, C. A.; Nadeau, P. A.

    2010-12-01

    UV camera systems are gaining popularity worldwide for quantifying SO2 column abundances and emission rates from volcanoes, which serve as primary measures of volcanic hazard and aid in eruption forecasting. To date, many investigations have focused on fairly active and routinely monitored volcanoes under optimal conditions. Some recent studies have begun to recommend protocols and procedures for data collection, but additional questions still need to be addressed. In this study we attempt to answer these questions and also present results from volcanoes that are rarely monitored. Conditions at these volcanoes are typically sub-optimal for UV camera measurements. Discussion of such data is essential in assessing the wider applicability of UV camera measurements for SO2 monitoring purposes. The data discussed herein consist of plume images from volcanoes with relatively low emission rates, under varying weather conditions and from various distances (2-12 km). These include Karangetang Volcano (Indonesia), Mount St. Helens (Washington, USA), and Augustine and Redoubt Volcanoes (Alaska, USA). High emission rate data were also collected at Kilauea Volcano (Hawaii, USA), and blue-sky test images with no plume were collected at Mammoth Mountain (California, USA). All data were collected between 2008 and 2010 using both single-filter (307 nm) and dual-filter (307 nm/326 nm) systems and were accompanied by FLYSPEC measurements. With the dual-filter systems, both a filter-wheel setup and a synchronous-imaging dual-camera setup were employed.
Data collection and processing questions included (1) what is the detection limit of the camera, (2) how large is the variability in raw camera output, (3) how do camera optics affect the measurements and how can this be corrected, (4) how much variability is observed in calibration under various conditions, (5) what is the optimal workflow for image collection and processing, and (6) what is the range of camera operating conditions? Besides emission rates from these infrequently monitored volcanoes, the results of this study include a recommended workflow and procedure for image collection and calibration, and a MATLAB-based algorithm for batch processing, thereby enabling accurate emission rates at 1 Hz when a synchronous-imaging dual-camera setup is used.
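
    The final emission-rate step of such a processing chain can be sketched simply. The snippet below is a hypothetical Python illustration (the study's batch algorithm is MATLAB-based); it assumes a calibrated SO2 column-density transect across the plume, a known pixel size, and an independently measured plume speed:

    ```python
    import numpy as np

    def emission_rate(column_density_kg_m2, pixel_size_m, plume_speed_m_s):
        """Estimate the SO2 emission rate (kg/s) from one transect of a
        calibrated column-density image crossing the plume."""
        # Integrate column density across the transect -> kg/m,
        # then multiply by the plume transport speed -> kg/s.
        integrated = np.sum(column_density_kg_m2) * pixel_size_m
        return integrated * plume_speed_m_s

    # Hypothetical transect: 100 pixels of 1 m at 5e-4 kg/m^2, 3 m/s wind.
    rate = emission_rate(np.full(100, 5e-4), 1.0, 3.0)  # 0.15 kg/s
    ```

    Repeating this per frame at 1 Hz yields the emission-rate time series; the plume speed remains the dominant uncertainty.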

  14. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  15. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
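
    The contrast between absolute scoring and relative ranking can be sketched minimally. The function below is a hypothetical Python illustration, not the system's actual matcher; it simply orders gallery feature vectors by distance to a probe rather than thresholding a score:

    ```python
    import numpy as np

    def rank_gallery(probe, gallery):
        """Return gallery indices ordered from best to worst match,
        ranking by Euclidean distance in feature space."""
        probe = np.asarray(probe, dtype=float)
        gallery = np.asarray(gallery, dtype=float)
        distances = np.linalg.norm(gallery - probe, axis=1)
        return np.argsort(distances)

    probe = [1.0, 0.0]
    gallery = [[0.0, 1.0], [0.9, 0.1], [1.0, 0.0]]
    order = rank_gallery(probe, gallery)  # best match first
    ```

    A human operator can then inspect the top-ranked candidates in order, sidestepping the need for an absolute matching threshold that lighting, viewpoint, or pose changes would invalidate.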

  16. Spinoff 1999

    NASA Technical Reports Server (NTRS)

    1999-01-01

    A survey is presented of NASA-developed technologies and systems that were reaching commercial application in the course of 1999. Attention is given to the contributions of each major NASA Research Center. Representative 'spinoff' technologies include the predictive AI engine monitoring system EMPAS, the GPS-based Wide Area Augmentation System for aircraft navigation, a CMOS-Active Pixel Sensor camera-on-a-chip, a marine spectroradiometer, portable fuel cells, hyperspectral camera technology, and a rapid-prototyping process for ceramic components.

  17. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512-pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity requiring fewer than 100 electrons, a high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all of the features most desirable in military imaging applications, to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g. the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system, and their forecast cost structure, is presented.

  18. Science observations with the IUE using the one-gyro mode

    NASA Technical Reports Server (NTRS)

    Imhoff, C.; Pitts, R.; Arquilla, R.; Shrader, Chris R.; Perez, M. R.; Webb, J.

    1990-01-01

    The International Ultraviolet Explorer (IUE) attitude control system originally included an inertial reference package containing six gyroscopes for three axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, four of the six gyroscopes have failed. The current attitude control system utilizes the remaining two gyros and a Fine Sun Sensor (FSS) for three axis stabilization. When the next gyro fails, a new attitude control system will be uplinked which will rely on the remaining gyro and the FSS for general three axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.

  19. Science observations with the IUE using the one-gyro mode

    NASA Technical Reports Server (NTRS)

    Imhoff, C.; Pitts, R.; Arquilla, R.; Shrader, C.; Perez, M.; Webb, J.

    1990-01-01

    The International Ultraviolet Explorer (IUE) attitude control system originally included an inertial reference package containing six gyroscopes for three-axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, four of the six gyroscopes have failed. The current attitude control system utilizes the remaining two gyros and a Fine Sun Sensor (FSS) for three-axis stabilization. When the next gyro fails, a new attitude control system will be uplinked, which will rely on the remaining gyro and the FSS for general three-axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.

  20. 75 FR 60495 - Notice of Intent To Rule on Request To Release Airport Property at Monroe Regional Airport...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... upgrade and expand the security camera system to include more of the Security Passenger Holding area as well as cameras for airfield surveillance. Any person may inspect the request in person at the FAA...

  1. Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Nepomuk Otte, Adam

    2009-05-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very-high-energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive, fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost-effective. We are investigating several directions, including innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched-capacitor arrays for digitization.

  2. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    PubMed

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  3. Multifunctional microcontrollable interface module

    NASA Astrophysics Data System (ADS)

    Spitzer, Mark B.; Zavracky, Paul M.; Rensing, Noa M.; Crawford, J.; Hockman, Angela H.; Aquilino, P. D.; Girolamo, Henry J.

    2001-08-01

This paper reports the development of a complete eyeglass-mounted computer interface system including display, camera and audio subsystems. The display system provides an SVGA image with a 20 degree horizontal field of view. The camera system has been optimized for face recognition and provides a 19 degree horizontal field of view. A microphone and built-in pre-amp optimized for voice recognition and a speaker on an articulated arm are included for audio. An important feature of the system is a high degree of adjustability and reconfigurability. The system has been developed for testing by the Military Police, in a complete system comprising the eyeglass-mounted interface, a wearable computer, and an RF link. Details of the design, construction, and performance of the eyeglass-based system are discussed.

  4. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    NASA Astrophysics Data System (ADS)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform real-time selection of the few most conspicuous locations in its visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression that have been documented in electrophysiology, psychophysics, and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity and novelty, and the activity of the tracked object in relation to sensitive features of the environment.
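The bottom-up attention step cited above builds conspicuity maps by comparing fine-scale and coarse-scale responses. The sketch below is a minimal single-channel analogue of that center-surround idea using box blurs, not the authors' multi-scale, multi-feature implementation; sizes and the test image are illustrative.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of odd width k (edge-padded)."""
    pad = k // 2
    out = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Running mean along rows, then columns; output matches input shape.
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def center_surround_saliency(intensity, center_k=3, surround_k=15):
    """Crude intensity-channel saliency: |fine response - coarse response|,
    normalized to [0, 1]. Real saliency models add color, orientation, motion."""
    sal = np.abs(box_blur(intensity, center_k) - box_blur(intensity, surround_k))
    return sal / sal.max() if sal.max() > 0 else sal

# A bright blob on a dark background should dominate the saliency map.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = center_surround_saliency(img)
```

A camera agent could then steer toward the maximum of `sal`; biasing between agents would correspond to reweighting channels before this combination step.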

  5. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
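The quoted detector format and pixel scale can be cross-checked against the stated field of view. This is a small-angle approximation that ignores the lens's tangent mapping and distortion, so it slightly underestimates the quoted 44° x 32°:

```python
import math

ifov_mrad = 0.28          # reported TAGCAMS pixel scale, mrad/pixel
cols, rows = 2592, 1944   # reported CMOS detector format

# Summing per-pixel IFOVs across the array approximates the full FOV.
fov_h = math.degrees(cols * ifov_mrad * 1e-3)   # ~41.6 deg
fov_v = math.degrees(rows * ifov_mrad * 1e-3)   # ~31.2 deg
```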

  6. Stereoscopic Configurations To Minimize Distortions

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.

    1991-01-01

    Proposed television system provides two stereoscopic displays. Two-camera, two-monitor system used in various camera configurations and with stereoscopic images on monitors magnified to various degrees. Designed to satisfy observer's need to perceive spatial relationships accurately throughout workspace or to perceive them at high resolution in small region of workspace. Potential applications include industrial, medical, and entertainment imaging and monitoring and control of telemanipulators, telerobots, and remotely piloted vehicles.

  7. An on-line calibration algorithm for the external parameters of a visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system is put to use, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure, for instance in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technique providing both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate them without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of fundamental matrix estimation. Finally, we run an experiment computing the relationship of a pair of stereo cameras to demonstrate the accuracy of the algorithm.
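Step (iii) above, recovering the external parameters from the essential matrix, follows the standard SVD construction (two candidate rotations, translation up to sign). The sketch below verifies it on a synthetic pose; it is textbook machinery, not the paper's weighted estimation method:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def decompose_essential(E):
    """Standard SVD factorization of an essential matrix into the two
    candidate rotations and the translation direction (up to sign)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:   # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    return U @ W @ Vt, U @ W.T @ Vt, U[:, 2]

# Synthetic check: build E = [t]_x R from a known pose and recover it.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, 0.2, 0.05])
t_true /= np.linalg.norm(t_true)
E = skew(t_true) @ R_true
R1, R2, t = decompose_essential(E)
```

In practice the correct (R, t) pair among the four candidates is chosen by the cheirality check, requiring triangulated points to lie in front of both cameras.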

  8. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
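The camera-control step described above, panning and tilting to keep the skater region near the image center and zooming to hold its scale, can be sketched as a simple proportional controller. The gains and target size fraction here are illustrative assumptions, not the paper's values:

```python
def ptz_command(bbox, frame_w, frame_h, target_frac=0.25, gain=0.1):
    """Proportional pan/tilt/zoom step toward centering a tracked region.
    bbox = (x, y, w, h) in pixels; positive pan means move right."""
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    # Normalized offset of the region centre from the image centre, in [-1, 1].
    pan = gain * (cx - frame_w / 2) / (frame_w / 2)
    tilt = gain * (cy - frame_h / 2) / (frame_h / 2)
    # Zoom in when the region is smaller than the target fraction of the frame.
    zoom = gain * (target_frac - max(w / frame_w, h / frame_h))
    return pan, tilt, zoom

# A skater region left of centre yields a leftward (negative) pan step.
pan, tilt, zoom = ptz_command((100, 300, 40, 120), 1280, 720)
```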

  9. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    PubMed Central

    Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.

    2017-01-01

    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) are often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985

  10. A Distributed Wireless Camera System for the Management of Parking Spaces.

    PubMed

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of a parking space based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient occupancy detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated on three different test sets which altogether contain over 700,000 samples of parking spaces.
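The HOG feature used above summarizes a patch as per-cell histograms of gradient orientations. The following is a minimal NumPy sketch of an unsigned-gradient HOG with per-cell normalization (real HOG normalizes over blocks of cells, and the paper additionally feeds vehicle orientation into an SVM); cell and bin counts are the common defaults, assumed here:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Per-cell histograms of unsigned gradient orientation, magnitude-weighted,
    L2-normalized per cell. Returns a flat feature vector."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    feat = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            feat[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                     minlength=bins)
    norm = np.linalg.norm(feat, axis=2, keepdims=True)
    return (feat / np.maximum(norm, 1e-12)).ravel()

# A 64x64 parking-space patch yields an 8x8 grid of 9-bin histograms.
patch = np.zeros((64, 64))
patch[:, 32:] = 1.0                  # vertical edge dominates the gradients
desc = hog_descriptor(patch)
```

The resulting vector would be the input to a linear SVM trained on occupied versus empty patches.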

  11. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also contend that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of higher-level photogrammetric products. PMID:25835187

  12. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-03-31

Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also contend that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of higher-level photogrammetric products.
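The relation between flight height and the reported 0.15 m spatial resolution is the usual ground-sample-distance formula. The pixel pitch and focal length below are hypothetical (the abstract does not give them) and are chosen only to show that plausible values reproduce the figure:

```python
def ground_sample_distance(height_m, pixel_pitch_m, focal_length_m):
    """GSD = flying height x pixel pitch / focal length (nadir, flat terrain)."""
    return height_m * pixel_pitch_m / focal_length_m

# Hypothetical values: a ~5.5 um pitch APS-C sensor behind a 28 mm lens
# flown at the paper's 800 m height gives a GSD of roughly 0.16 m.
gsd = ground_sample_distance(800.0, 5.5e-6, 28e-3)
```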

  13. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An ET Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40-degree field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank, opposite the orbiter side, were two blade S-band antennas, each about 2.5 inches long, that transmitted a 10-watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighed about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  14. Optical analysis of a compound quasi-microscope for planetary landers

    NASA Technical Reports Server (NTRS)

    Wall, S. D.; Burcher, E. E.; Huck, F. O.

    1974-01-01

A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept is primarily limited by a trade-off between resolution and object field; this approach leads to a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which includes a field lens between the camera and the auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement comes at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.
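The quoted numbers are consistent with simple small-angle geometry: a camera with 0.04 deg angular resolution resolves 20 micron features only when the (magnified) object is presented at an effective distance of roughly 29 mm. This is a back-of-envelope check, not the paper's full optical analysis:

```python
import math

def object_resolution(angular_res_deg, object_distance_m):
    """Smallest resolvable feature for a camera of the given angular
    resolution viewing an object at the given distance (small angles)."""
    return object_distance_m * math.radians(angular_res_deg)

# Effective distance implied by 20 um resolution at 0.04 deg: ~0.029 m.
d = 20e-6 / math.radians(0.04)
r = object_resolution(0.04, d)
```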

  15. A new high-speed IR camera system

    NASA Technical Reports Server (NTRS)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array, which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  16. Commercially available high-speed system for recording and monitoring vocal fold vibrations.

    PubMed

    Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya

    2009-12-01

    We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.

  17. Development of the geoCamera, a System for Mapping Ice from a Ship

    NASA Astrophysics Data System (ADS)

    Arsenault, R.; Clemente-Colon, P.

    2012-12-01

The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path, with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real-time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial-off-the-shelf components where additional hardware is necessary, automating the process to virtually eliminate adding to the workload of the ship's technicians, and making the software components modular and flexible enough to allow seamless integration with a ship's particular IT system.
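The dependence of map width on mounting height follows from where each pixel's line of sight meets the sea surface. The sketch below is a flat-surface approximation with hypothetical mast height and angles (the real system also folds in roll, pitch, heading, and GPS position):

```python
import math

def pixel_ground_range(camera_height_m, depression_deg):
    """Horizontal distance at which a ray depressed by depression_deg below
    the horizontal meets a flat ice surface, from a mast-mounted camera."""
    depression = math.radians(depression_deg)
    if depression <= 0:
        raise ValueError("ray does not intersect the surface")
    return camera_height_m / math.tan(depression)

# A camera 20 m up the mast: a ray 2 deg below the horizon images ice
# about 570 m out, while a 30 deg ray lands close to the hull.
far = pixel_ground_range(20.0, 2.0)
near = pixel_ground_range(20.0, 30.0)
```

Rays nearer the horizon stretch over long ground distances, which is why swath width grows rapidly with mounting height while resolution degrades at the far edge.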

  18. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  19. Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2001-01-01

The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion system, rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation to simulate on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.

  20. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  1. A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies

    NASA Astrophysics Data System (ADS)

    Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.

    2015-12-01

Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming and hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to data storage in situ (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data-logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera has resulted from several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and functions and have been field tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery and data from static and mobile platforms. Data are collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended time periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing it in a variety of formats including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems. Imagery can be viewed and analyzed in RGB, HSV, and L*a*b* color space. Users can select from spectral indices derived from the published literature and/or have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.
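One spectral index commonly derived from phenology-camera RGB imagery is the green chromatic coordinate (GCC). The sketch below is a generic implementation of that published index, not the system's proprietary software:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B), a standard greenness index in camera-based
    phenology; rgb is an (H, W, 3) array. Black pixels map to 0."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    return np.divide(rgb[..., 1], total,
                     out=np.zeros_like(total), where=total > 0)

# A pure-green pixel scores 1.0; a grey pixel scores 1/3.
img = np.array([[[0, 200, 0], [100, 100, 100]]], dtype=np.uint8)
gcc = green_chromatic_coordinate(img)
```

Time series of mean GCC over a vegetated region of interest are what such systems typically plot to track green-up and senescence.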

  2. Development of the Brican TD100 Small Uas and Payload Trials

    NASA Astrophysics Data System (ADS)

    Eggleston, B.; McLuckie, B.; Koski, W. R.; Bird, D.; Patterson, C.; Bohdanov, D.; Liu, H.; Mathews, T.; Gamage, G.

    2015-08-01

The Brican TD100 is a high-performance small UAS designed and made in Brampton, Ontario, Canada. The concept was defined in late 2009, and the aircraft is designed for a maximum weight of 25 kg, which is now the accepted cut-off defining small civil UASs. A very clean tractor-propeller layout is used with a lightweight composite structure and a high-aspect-ratio wing to obtain good range and endurance. The design features and performance of the initial electrically powered version are discussed, and progress in developing a multifuel engine version is described. The system includes features enabling operation beyond line of sight (BLOS), and the proving missions are described. The vehicle has been used for aerial photography and low-cost mapping using a professional-grade Nikon DSLR camera. For forest fire research a FLIR A65 IR camera was used, while for georeferenced mapping a new Applanix AP20 system was calibrated with the Nikon camera. The sorties described include forest fire research, wildlife photography of bowhead whales in the Arctic, and surveys of endangered caribou in a remote area of Labrador, all of which used the DSLR camera.

  3. Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research.

    PubMed

    Siena, Francesco Luke; Byrom, Bill; Watts, Paul; Breedon, Philip

    2018-02-05

    Applications utilising 3D camera technologies for the measurement of health outcomes in the health and wellness sector continue to expand. The Intel® RealSense™ is one of the leading 3D depth-sensing cameras currently available on the market and lends itself to many applications, including robotics, automation, and medical systems. One of the most prominent areas is the production of interactive solutions for rehabilitation, which include gait analysis and facial tracking. Advancements in depth camera technology have resulted in a noticeable increase in the integration of these technologies into portable platforms, suggesting significant future potential for pervasive in-clinic and field-based health assessment solutions. This paper reviews the Intel RealSense technology's technical capabilities, discusses its application to clinical research, and includes examples where the Intel RealSense camera range has been used for the measurement of health outcomes. This review supports the use of the technology to develop robust, objective movement- and mobility-based endpoints to enable accurate tracking of the effects of treatment interventions in clinical trials.

  4. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Although camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221
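    The image-to-ground homographies mentioned above can be estimated from a handful of point correspondences. Below is a minimal direct linear transform (DLT) sketch; the pixel/metre correspondences are hypothetical stand-ins for surveyed points, not the paper's data.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT algorithm."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right singular vector).
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical correspondences: image pixels -> ground-plane metres.
image_pts = [(100, 400), (500, 400), (120, 200), (480, 210)]
world_pts = [(0.0, 0.0), (4.0, 0.0), (0.5, 6.0), (3.8, 5.7)]
H = fit_homography(image_pts, world_pts)

# Map a detected image point onto the walking plane.
p = H @ np.array([100.0, 400.0, 1.0])
print(p[:2] / p[2])  # ≈ [0. 0.]
```

    With exactly four non-degenerate correspondences the fit is exact; with more, the same SVD gives a least-squares estimate.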

  5. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Although camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).

  6. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  7. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  8. Uncooled infrared sensors: rapid growth and future perspective

    NASA Astrophysics Data System (ADS)

    Balcerak, Raymond S.

    2000-07-01

    Uncooled infrared cameras are now available for both the military and commercial markets. Current camera technology incorporates the fruits of many years of development focused on the details of pixel design, novel material processing, and low-noise read-out electronics. The rapid insertion of cameras into systems is testimony to the successful completion of this 'first phase' of development. In the military market, the first uncooled infrared cameras will be used for weapon sights, drivers' viewers, and helmet-mounted cameras. Major commercial applications include night driving, security, police and firefighting, and thermography, primarily for preventive maintenance and process control. The technology for the next generation of cameras is even more demanding, but within reach. The paper outlines the technology program planned for the next generation of cameras and the approaches to further enhance performance, even to the radiation limit of thermal detectors.

  9. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image, and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms either an image including the regularly reflected component, via a polarizing filter in front of CCD-1, or an image excluding that component on CCD-2, which has no polarizing filter. Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding, and gravity-position calculation of the feature points is possible.
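    The subtraction, thresholding, and gravity-position steps can be sketched as follows. The synthetic 8x8 frames, pixel values, and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Bright-pupil frame (on-axis LED makes the pupil retro-reflect) and
# dark-pupil frame (off-axis LED); synthetic stand-ins for two CCD outputs.
bright = np.full((8, 8), 40.0)
dark = np.full((8, 8), 40.0)
bright[3:5, 3:5] = 200.0       # pupil region retro-reflects the on-axis LED

diff = bright - dark           # subtraction suppresses the background
pupil_mask = diff > 100.0      # simple threshold; S/N of the difference is high
ys, xs = np.nonzero(pupil_mask)
centroid = (xs.mean(), ys.mean())  # gravity-position of the pupil
print(centroid)                # → (3.5, 3.5)
```

    Because the background cancels in the difference image, a single fixed threshold suffices, which is what makes real-time processing straightforward.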

  10. A randomized comparison of laparoscopic, flexible endoscopic, and wired and wireless magnetic cameras on ex vivo and in vivo NOTES surgical performance.

    PubMed

    Chang, Victoria C; Tang, Shou-Jiang; Swain, C Paul; Bergs, Richard; Paramo, Juan; Hogg, Deborah C; Fernandez, Raul; Cadeddu, Jeffrey A; Scott, Daniel J

    2013-08-01

    The influence of endoscopic video camera (VC) image quality on surgical performance has not been studied. Flexible endoscopes are used as substitutes for laparoscopes in natural orifice translumenal endoscopic surgery (NOTES), but their optics are originally designed for intralumenal use. Manipulable wired or wireless independent VCs might offer advantages for NOTES but are still under development. To measure the optical characteristics of 4 VC systems and to compare their impact on the performance of surgical suturing tasks. VC systems included a laparoscope (Storz 10 mm), a flexible endoscope (Olympus GIF 160), and 2 prototype deployable cameras (magnetic anchoring and guidance system [MAGS] Camera and PillCam). In a randomized fashion, the 4 systems were evaluated regarding standardized optical characteristics and surgical manipulations of previously validated ex vivo (fundamentals of laparoscopic surgery model) and in vivo (live porcine Nissen model) tasks; objective metrics (time and errors/precision) and combined surgeon (n = 2) performance were recorded. Subtle differences were detected for color tests, and field of view was variable (65°-115°). Suitable resolution was detected up to 10 cm for the laparoscope and MAGS camera but only at closer distances for the endoscope and PillCam. Compared with the laparoscope, surgical suturing performances were modestly lower for the MAGS camera and significantly lower for the endoscope (ex vivo) and PillCam (ex vivo and in vivo). This study documented distinct differences in VC systems that may be used for NOTES in terms of both optical characteristics and surgical performance. Additional work is warranted to optimize cameras for NOTES. Deployable systems may be especially well suited for this purpose.

  11. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
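    A minimal sketch of the byte-coded dispatch scheme described above, assuming hypothetical command codes and switch names (the actual ROM table is not given in the record):

```python
# Prestored code table standing in for the controller's ROM: each recognised
# command byte maps to (switch name, close?). All codes here are hypothetical.
ROM = {
    0x10: ("pan_left", True),
    0x11: ("pan_left", False),
    0x20: ("zoom_in", True),
    0x21: ("zoom_in", False),
}

switches = {name: False for name, _ in ROM.values()}  # all switches open

def handle_byte(code):
    """Close or open the switch named by a recognised command byte."""
    if code in ROM:
        name, closed = ROM[code]
        switches[name] = closed
        return True
    return False  # unrecognised bytes are ignored

handle_byte(0x10)   # close pan_left
handle_byte(0x20)   # close zoom_in
handle_byte(0x11)   # reopen pan_left
print(switches)
```

    In the hardware, the comparison against ROM and the switch actuation happen per received byte on the RS-232 link; the table-lookup structure is the same.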

  12. Optical registration of spaceborne low light remote sensing camera

    NASA Astrophysics Data System (ADS)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high-precision optical registration requirement of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. A system-integration optical registration scheme and a registration-accuracy scheme for a spaceborne low-light remote sensing camera with short focal depth and wide field of view are proposed in this paper, including analysis of the parallel misalignment of the CCD and of the registration accuracy. Actual registration results show clear imaging; the MTF and registration accuracy meet requirements, providing an important guarantee for acquiring high-quality image data in orbit.

  13. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    NASA Technical Reports Server (NTRS)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  14. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
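    The geolocation step described above reduces to the spherical destination-point formula: given the observer's GPS position, the compass bearing, and the rangefinder distance, compute the target's coordinates. A sketch assuming a spherical Earth, with made-up observer coordinates:

```python
import math

def geolocate(lat, lon, bearing_deg, range_m, earth_radius=6371000.0):
    """Target lat/lon (degrees) from observer position, compass bearing,
    and laser-rangefinder range, via the spherical destination-point formula."""
    phi1 = math.radians(lat)
    lam1 = math.radians(lon)
    theta = math.radians(bearing_deg)
    delta = range_m / earth_radius  # angular distance subtended by the range
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

# Observer at 51.0 N, 0.0 E; target sighted 1 km due north.
lat, lon = geolocate(51.0, 0.0, 0.0, 1000.0)
print(round(lat, 4), round(lon, 4))
```

    A production system would use an ellipsoidal Earth model, but over rangefinder distances the spherical approximation errs by far less than the compass and GPS uncertainties.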

  15. Automated generation of image products for Mars Exploration Rover Mission tactical operations

    NASA Technical Reports Server (NTRS)

    Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen

    2005-01-01

    This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product generating systems will also be provided with emphasis on interdependencies, including those for a) geometric rectification of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.
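    Step (c), deriving XYZ coordinates from stereo disparity, follows the standard pinhole-stereo relation Z = f·B/d. The sketch below uses hypothetical rig parameters, not MER camera values:

```python
def disparity_to_xyz(u, v, d, f, baseline, cx, cy):
    """Pinhole stereo back-projection: depth Z = f*B/d, then X and Y from
    the pixel offsets. Units: pixels for f, u, v, d; metres for baseline."""
    Z = f * baseline / d
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z

# Hypothetical rig: f = 800 px, 30 cm baseline, 640x480 image centre.
X, Y, Z = disparity_to_xyz(u=420, v=240, d=12.0, f=800.0,
                           baseline=0.30, cx=320.0, cy=240.0)
print(X, Y, Z)  # → 2.5 0.0 20.0
```

    Camera-to-target ranging (step e) falls out of the same relation: the Euclidean norm of (X, Y, Z) is the distance to the target point.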

  16. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  17. System Architecture of the Dark Energy Survey Camera Readout Electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Theresa; /FERMILAB; Ballester, Otger

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4-m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K×4K fully depleted Charge-Coupled Devices (CCDs) and 12 2K×2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  18. A randomized comparison of laparoscopic, magnetically anchored, and flexible endoscopic cameras in performance and workload between laparoscopic and single-incision surgery.

    PubMed

    Arain, Nabeel A; Cadeddu, Jeffrey A; Best, Sara L; Roshek, Thomas; Chang, Victoria; Hogg, Deborah C; Bergs, Richard; Fernandez, Raul; Webb, Erin M; Scott, Daniel J

    2012-04-01

    This study aimed to evaluate the surgeon performance and workload of a next-generation magnetically anchored camera compared with laparoscopic and flexible endoscopic imaging systems for laparoscopic and single-site laparoscopy (SSL) settings. The cameras included a 5-mm 30° laparoscope (LAP), a magnetically anchored (MAGS) camera, and a flexible endoscope (ENDO). The three camera systems were evaluated using standardized optical characteristic tests. Each system was used in random order for visualization during performance of a standardized suturing task by four surgeons. Each participant performed three to five consecutive repetitions as a surgeon and also served as a camera driver for other surgeons. Ex vivo testing was conducted in a laparoscopic multiport and SSL layout using a box trainer. In vivo testing was performed only in the multiport configuration and used a previously validated live porcine Nissen model. Optical testing showed superior resolution for MAGS at 5 and 10 cm compared with LAP or ENDO. The field of view ranged from 39 to 99°. The depth of focus was almost three times greater for MAGS (6-270 mm) than for LAP (2-88 mm) or ENDO (1-93 mm). Both ex vivo and in vivo multiport combined surgeon performance was significantly better for LAP than for ENDO, but no significant differences were detected for MAGS. For multiport testing, workload ratings were significantly less ex vivo for LAP and MAGS than for ENDO and less in vivo for LAP than for MAGS or ENDO. For ex vivo SSL, no significant performance differences were detected, but camera drivers rated the workload significantly less for MAGS than for LAP or ENDO. The data suggest that the improved imaging element of the next-generation MAGS camera has optical and performance characteristics that meet or exceed those of the LAP or ENDO systems and that the MAGS camera may be especially useful for SSL. Further refinements of the MAGS camera are encouraged.

  19. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries, including complex spine, trauma, and intensive care unit procedures, are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded over 50 cranial and spinal surgeries in stereoscopic 3D and created a library for educational purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  20. A traffic situation analysis system

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin

    2011-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor capable housing. Two cameras run vehicle detection software including license plate detection and recognition, one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 different object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.

  1. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We present an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields estimates of the tilt angle, focal length and camera height, important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras, important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
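    One simple way the inter-camera distance estimate could work is sketched below, under the assumption that the median exit-to-entry transit time of tracked pedestrians, multiplied by a nominal walking speed, approximates the camera spacing. The paper's actual inference method may differ, and the transit times here are made up.

```python
import statistics

# Hypothetical exit->entry transit times (seconds) of the same persons
# re-identified between two non-overlapping cameras; one outlier loitered.
transit_times = [14.2, 15.1, 13.8, 14.9, 15.4, 14.6, 30.2]
WALK_SPEED = 1.4  # m/s, a typical pedestrian walking speed

# The median is robust to loiterers and detours, unlike the mean.
distance = statistics.median(transit_times) * WALK_SPEED
print(round(distance, 2))
```

    The same transit-time statistics also give the temporal window used to associate a person leaving one camera with a person entering the next.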

  2. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting under 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide a corresponding test to MTF (resolution), SNR and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as in comparing various systems by presenting the exact scenes to the cameras in a repeatable way.
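    A sketch of a Maximum-Velocity-Resolved-style calculation, under the assumed definition that a target remains resolved while its motion blur during one exposure stays below one pixel (the record does not give a formula, so both the definition and the numbers are illustrative):

```python
def max_velocity_resolved(ifov_rad, exposure_s, range_m):
    """Fastest transverse target speed (m/s) that keeps motion blur within
    one pixel: blur_px = angular_velocity * exposure / ifov <= 1."""
    max_angular = ifov_rad / exposure_s  # rad/s sweeping one pixel per exposure
    return max_angular * range_m         # small-angle: transverse speed

# Hypothetical camera: 100 urad per pixel, 10 ms exposure, target at 100 m.
v = max_velocity_resolved(ifov_rad=1e-4, exposure_s=0.01, range_m=100.0)
print(round(v, 6))
```

    A moving-target projector makes such a threshold directly measurable: the projected target's angular velocity is increased until tracking accuracy degrades, and the crossing point is the metric.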

  3. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
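    The NVC focus score is not specified in the record; the variance-of-Laplacian measure below is a common auto-focus figure of merit used here as a stand-in to illustrate how such a score discriminates sharp from defocused eye images:

```python
import numpy as np

def focus_score(img):
    """Variance of a 3x3 Laplacian response over the image interior;
    higher means sharper. A common auto-focus figure of merit."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A sharp checkerboard scores higher than the same pattern blurred 2x2.
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 1.0
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)
           + np.roll(np.roll(sharp, 1, axis=0), 1, axis=1)) / 4.0
print(focus_score(sharp) > focus_score(blurred))  # → True
```

    In an auto-focus loop like the one described above, the lens position is stepped until this score peaks, with the facial width from the WVC giving the initial coarse distance estimate.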

  4. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  5. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems that use enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data; digitizing that data, channeling it to a central computer, and processing it in real time is difficult with low-cost, commercially available components. The newly developed system places its cameras on a combined power and data wire, forming a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols, and the cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto each camera's backplane, which helps keep costs low, and the low power requirements of each camera allow a single imaging system to comprise over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlated across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly generated by the cameras. Because it uses many small, low-cost cameras with overlapping fields of view, this approach offers greater flexibility than conventional systems without compromising performance: viewing coverage increases significantly without the ignored surveillance areas that can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications; security systems and environmental/vehicular monitoring systems are also potential applications.
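    The cooperative event localization described in the abstract can be illustrated with a small sketch (hypothetical 2D geometry, not the system's actual algorithm): two cameras at known positions each report a bearing to a detected event, and intersecting the two rays yields the Cartesian location reported to the host.

```python
import numpy as np

def locate_event(p1, theta1, p2, theta2):
    """Intersect two bearing rays (camera position, azimuth in radians)
    to recover the 2D Cartesian event location."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + s*d1 = p2 + t*d2 for the ray parameters s, t.
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + s * d1

# Two cameras 10 m apart both see the same event.
event = locate_event((0.0, 0.0), np.pi / 4, (10.0, 0.0), 3 * np.pi / 4)
```

    With real cameras the bearings would come from pixel coordinates and each camera's orientation; the intersection step itself is unchanged.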

  6. End-to-End System Test of the Relative Precision and Stability of the Photometric Method for Detecting Earth-Size Extrasolar Planets

    NASA Technical Reports Server (NTRS)

    Dunham, Edward W.

    2000-01-01

    We developed the CCD camera system for the laboratory test demonstration and designed the optical system for this test. The camera system was delivered to Ames in April, 1999 with continuing support mostly in the software area as the test progressed. The camera system has been operating successfully since delivery. The optical system performed well during the test. The laboratory demonstration activity is now nearly complete and is considered to be successful by the Technical Advisory Group, which met on 8 February, 2000 at the SETI Institute. A final report for the Technical Advisory Group and NASA Headquarters will be produced in the next few months. This report will be a comprehensive report on all facets of the test including those covered under this grant. A copy will be forwarded, if desired, when it is complete.

  7. Harbour surveillance with cameras calibrated with AIS data

    NASA Astrophysics Data System (ADS)

    Palmieri, F. A. N.; Castaldo, F.; Marino, G.

    The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests deploying this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing fusion centers with information that can be added to data coming from radar, lidar, AIS, etc. Camera systems used as localizers, however, must be properly calibrated in changing scenarios where there is often limited choice of the positions at which they are deployed. Automatic Identification System (AIS) data, which include position, course, and vessel identity and are freely available through inexpensive receivers for some of the vessels appearing within the field of view, provide the opportunity to achieve proper camera calibration for localizing vessels not equipped with AIS transponders. In this paper we assume a pinhole camera model and propose computing perspective matrices from AIS positional data. Images obtained from the calibrated cameras are then matched, and pixel associations are used to localize other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information from easily identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is evaluated experimentally as a function of target distance from the sensors.
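    Computing a perspective (projection) matrix from known 3D positions, as AIS data provides, is typically done with the Direct Linear Transform. A minimal sketch (generic DLT, with synthetic intrinsics standing in for a real camera, not the paper's exact pipeline):

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 pinhole projection matrix P from >= 6
    3D-2D correspondences via the Direct Linear Transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The flattened P is the null-space vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with homogeneous division."""
    x = P @ np.append(np.asarray(X, float), 1.0)
    return x[:2] / x[2]

# Demo with a synthetic camera (hypothetical intrinsics).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P_true = K @ np.hstack([np.eye(3), [[0.], [0.], [5.]]])
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (8, 3))
P_est = dlt_projection_matrix(pts3d, [project(P_true, X) for X in pts3d])
```

    In the harbour setting the "world points" would be AIS ship positions and the "image points" the pixels where those ships appear.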

  8. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
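    A wavelength-dependent radial distortion model of the kind mentioned for the WAC can be sketched as a standard one-coefficient radial model whose coefficient varies with the band. The functional form and coefficients below are purely illustrative, not LROC's actual calibration:

```python
import numpy as np

def distort(xy, k1):
    """One-coefficient radial distortion in normalized image coordinates:
    x_d = x * (1 + k1 * r^2)."""
    xy = np.asarray(xy, float)
    r2 = np.sum(xy**2)
    return xy * (1.0 + k1 * r2)

def k1_for_band(wavelength_nm, a=-1.2e-2, b=1.0e-5):
    """Hypothetical wavelength dependence of the radial coefficient:
    a simple linear model k1(lambda) = a + b * lambda."""
    return a + b * wavelength_nm
```

    Fitting `a` and `b` (or a per-band table of coefficients) against star-field or terrain measurements is what turns this form into a usable camera model.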

  9. Protective laser beam viewing device

    DOEpatents

    Neil, George R.; Jordan, Kevin Carl

    2012-12-18

    A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.

  10. Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken

    2005-01-01

    This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, descent camera, engineering cameras, science cameras) and produces 1024 x 1024 (1 megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using the Video Image Communication and Retrieval (VICAR) system, while stereo processing of left/right pairs provides raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.
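    Two of the onboard processing steps mentioned, flat-field correction and thumbnail generation, can be sketched in a few lines (a generic textbook sketch, not the MER flight software):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Classic flat-field correction: divide out the fixed-pattern
    pixel response measured from a flat (uniform) exposure."""
    gain = flat - dark
    return (raw - dark) * gain.mean() / gain

def thumbnail(img, factor):
    """Downsample by block averaging (dimensions divisible by factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

    For a uniform scene, correction should flatten away any pixel-to-pixel gain pattern, which gives a quick sanity check of the routine.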

  11. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras is placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the minimization of the objective function, we use a greedy iterative algorithm with three steps that, after initialization of the segmentation boundaries and depth, are repeated until convergence.

  12. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with a motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map; if the motorcyclist is detected through 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
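    The bright-blob cue from hot exhaust pipes can be illustrated with a minimal threshold-and-label pass over an IR frame (a generic connected-components sketch, not the paper's detector):

```python
import numpy as np
from collections import deque

def bright_blobs(ir, thresh):
    """Return bounding boxes (rmin, cmin, rmax, cmax) of connected
    regions hotter than `thresh` (4-connectivity, BFS labeling)."""
    mask = ir > thresh
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        q = deque([(r, c)])
        seen[r, c] = True
        rs, cs = [r], [c]
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
                    rs.append(ny)
                    cs.append(nx)
        boxes.append((min(rs), min(cs), max(rs), max(cs)))
    return boxes

# Toy 6x6 IR frame with an elongated hot region and a hot point.
ir = np.zeros((6, 6))
ir[1:3, 1] = 10.0
ir[4, 4] = 10.0
blobs = bright_blobs(ir, thresh=5.0)
```

    A real detector would follow this with shape filtering (elongation, area) to keep only exhaust-pipe-like blobs.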

  13. Low-cost uncooled VOx infrared camera development

    NASA Astrophysics Data System (ADS)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher resolution 17 µm pixel pitch 640×480 Tamarisk®640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low SWaP camera (<3.5 cm3 in volume and <500 mW in power consumption) that costs less than US $500 based on a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC and low power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs including wafer scale optic and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants of the DARPA LCTI-M program.

  14. A computational approach to real-time image processing for serial time-encoded amplified microscopy

    NASA Astrophysics Data System (ADS)

    Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi

    2016-03-01

    High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system that includes a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that encodes grayscale images from the STEAM camera; the direct data output from the STEAM camera therefore generates 7.0 Gbyte/s continuously. We employed a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
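    Because STEAM encodes each image line serially in time, the first reconstruction step is essentially slicing the 1D ADC sample stream into 2D frames. A sketch with toy line and frame lengths (the real system does this on the FPGA at Gbyte/s rates):

```python
import numpy as np

def reconstruct_frames(stream, samples_per_line, lines_per_frame):
    """Slice a serial ADC sample stream into 2D grayscale frames,
    discarding any trailing partial frame."""
    spf = samples_per_line * lines_per_frame
    n_frames = stream.size // spf
    return stream[:n_frames * spf].reshape(n_frames, lines_per_frame,
                                           samples_per_line)

# 25 samples, 3 samples/line, 2 lines/frame -> 4 full frames, 1 sample dropped.
stream = np.arange(25)
frames = reconstruct_frames(stream, samples_per_line=3, lines_per_frame=2)
```

    In practice the line length is set by the optical time-stretch parameters and the ADC rate, and a trigger marks frame boundaries; the reshape itself is the same.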

  15. The mosaics of Mars: As seen by the Viking Lander cameras

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Jones, K. L.

    1980-01-01

    The mosaics and derivative products produced from many individual high-resolution images acquired by the Viking Lander camera systems are described: a morning and afternoon mosaic for both cameras at the Lander 1 Chryse Planitia site, and a morning, noon, and afternoon camera pair at Utopia Planitia, the Lander 2 site. The derived products include special geometric projections of the mosaic data sets: polar stereographic (donut), stereoscopic, and orthographic. Contour maps and vertical profiles of the topography were overlaid on the mosaics from which they were derived, and sets of stereo pairs were extracted and enlarged from stereoscopic projections of the mosaics.

  16. Note: Optics design of a periscope for the KSTAR visible inspection system with mitigated neutron damages on the camera

    NASA Astrophysics Data System (ADS)

    Lee, Kyuhang; Ko, Jinseok; Wi, Hanmin; Chung, Jinil; Seo, Hyeonjin; Jo, Jae Heung

    2018-06-01

    The visible TV system used in the Korea Superconducting Tokamak Advanced Research device has been equipped with a periscope to minimize the damage to its CCD pixels from neutron radiation. The periscope, more than 2.3 m in overall length, has been designed for the visible camera system with a semi-diagonal field of view as wide as 30° and an effective focal length as short as 5.57 mm. The design performance of the periscope includes a modulation transfer function greater than 0.25 at 68 cycles/mm with low distortion. The installed periscope system has delivered image quality as designed, comparable to that of its predecessor but with a far lower probability of neutron damage on the camera.

  17. Plenoptic particle image velocimetry with multiple plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2018-07-01

    Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single-camera configuration, which allows the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution with the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, including the volumetric calibration and four reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the ‘cigar’-like elongation associated with the single-camera system, while adding a third camera provides minimal improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single- and two-camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally on a ring vortex, and comparisons are drawn among the four presented reconstruction algorithms: MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability; filtered refocusing was able to produce the desired structure, albeit with more noise and variability, while integral refocusing struggled to produce a coherent vortex ring.

  18. A Distributed Wireless Camera System for the Management of Parking Spaces

    PubMed Central

    Melničuk, Petr

    2017-01-01

    The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of a parking space based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient occupancy detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces. PMID:29283371
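    The HOG feature behind the classifier can be sketched via its core ingredient: a magnitude-weighted histogram of gradient orientations. This is a simplified single-cell version for illustration, not the full HOG/SVM pipeline the paper uses:

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Histogram of gradient orientations weighted by gradient
    magnitude -- the building block of the HOG descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist               # L1 normalization

# A vertical edge: all gradient energy lies in the horizontal direction.
patch = np.tile([0.0, 0.0, 10.0, 10.0], (4, 1))
h = orientation_histogram(patch)
```

    The full descriptor concatenates such histograms over a grid of cells with block normalization before feeding them to a linear SVM.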

  19. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  20. BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications

    NASA Astrophysics Data System (ADS)

    Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard

    2013-06-01

    Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full-rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-MTM imaging module, in a VGA format that reads out in 30 and 60 Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through the video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers and optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.

  1. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method could be achieved by taking several photos on a specially designed calibration template that has diverse encoded points in different orientations. In this paper, the method utilized the existing algorithm used for monocular camera calibration to obtain the initialization, which involves a camera model, including radial lens distortion and tangential distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of left camera through alternative bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinate. We also used all intrinsic parameters that were acquired to optimize extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with the relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
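    The reprojection error quoted above is the standard quality metric for such calibrations: project the 3D points through the calibrated model and measure the pixel distance to the observed detections. A generic pinhole sketch without the paper's distortion terms:

```python
import numpy as np

def project_pinhole(K, R, t, X):
    """Project 3D points X (N,3) with intrinsics K, rotation R, translation t."""
    Xc = X @ R.T + t                       # world -> camera frame
    uv = Xc[:, :2] / Xc[:, 2:3]            # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]     # apply focal lengths and principal point

def reprojection_rmse(K, R, t, X, observed):
    """Root-mean-square pixel distance between projected and observed points."""
    d = project_pinhole(K, R, t, X) - observed
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

# Toy camera (hypothetical intrinsics) and two points, one observation perturbed.
K = np.array([[100., 0., 50.], [0., 100., 50.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 5.])
X = np.array([[0., 0., 5.], [1., 1., 5.]])
observed = project_pinhole(K, R, t, X) + [[0.0, 0.0], [0.3, 0.4]]
```

    Bundle adjustment, as used in the paper, minimizes exactly this quantity over the intrinsic, extrinsic, and distortion parameters.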

  2. Rover imaging system for the Mars rover/sample return mission

    NASA Technical Reports Server (NTRS)

    1993-01-01

    In the past year, the conceptual design of a panoramic imager for the Mars Environmental Survey (MESUR) Pathfinder was finished. A prototype camera was built and its performace in the laboratory was tested. The performance of this camera was excellent. Based on this work, we have recently proposed a small, lightweight, rugged, and highly capable Mars Surface Imager (MSI) instrument for the MESUR Pathfinder mission. A key aspect of our approach to optimization of the MSI design is that we treat image gathering, coding, and restoration as a whole, rather than as separate and independent tasks. Our approach leads to higher image quality, especially in the representation of fine detail with good contrast and clarity, without increasing either the complexity of the camera or the amount of data transmission. We have made significant progress over the past year in both the overall MSI system design and in the detailed design of the MSI optics. We have taken a simple panoramic camera and have upgraded it substantially to become a prototype of the MSI flight instrument. The most recent version of the camera utilizes miniature wide-angle optics that image directly onto a 3-color, 2096-element CCD line array. There are several data-taking modes, providing resolution as high as 0.3 mrad/pixel. Analysis tasks that were performed or that are underway with the test data from the prototype camera include the following: construction of 3-D models of imaged scenes from stereo data, first for controlled scenes and later for field scenes; and checks on geometric fidelity, including alignment errors, mast vibration, and oscillation in the drive system. We have outlined a number of tasks planned for Fiscal Year '93 in order to prepare us for submission of a flight instrument proposal for MESUR Pathfinder.

  3. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system, and it is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than that of the manual method, and that automated calibration can replace manual calibration.
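    The geometric relation underlying this calibration can be sketched directly: for a calibrated camera pair with relative pose (R, t), the essential matrix is E = [t]× R, and every correspondence in normalized coordinates satisfies the epipolar constraint x₂ᵀ E x₁ = 0. This shows the constraint itself, not the 5-point solver:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x such that skew(t) @ v == cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential_matrix(R, t):
    """E = [t]x R relates normalized image coordinates of a calibrated pair."""
    return skew(t) @ R

def epipolar_residual(E, x1, x2):
    """x2^T E x1 -- zero for a perfect correspondence (homogeneous coords)."""
    return float(x2 @ E @ x1)

# Camera 2 rotated 0.1 rad about y and translated 1 unit along x.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.0, 0.0])
E = essential_matrix(R, t)

X = np.array([0.2, -0.1, 4.0])   # a world point in camera-1 coordinates
x1 = X / X[2]                    # normalized homogeneous coordinates
Xc2 = R @ X + t
x2 = Xc2 / Xc2[2]
```

    The 5-point method runs this logic in reverse: from five such correspondences it recovers E, which is then decomposed into the relative pose (R, t) used for extrinsic calibration.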

  4. The PAUCam readout electronics system

    NASA Astrophysics Data System (ADS)

    Jiménez, Jorge; Illa, José M.; Cardiel-Sas, Laia; de Vicente, Juan; Castilla, Javier; Casas, Ricard

    2016-08-01

    PAUCam is an optical camera with a wide field of view of 1 deg x 1 deg and up to 46 narrow- and broad-band filters. The camera is installed on the William Herschel Telescope (WHT) in the Canary Islands, Spain, and was successfully commissioned during the first period of 2015. The paper presents the main results from the readout electronics commissioning tests and includes an overview of the whole readout electronics system, its configuration and its current performance.

  5. Ground-based search for the brightest transiting planets with the Multi-site All-Sky CAmeRA: MASCARA

    NASA Astrophysics Data System (ADS)

    Snellen, Ignas A. G.; Stuik, Remko; Navarro, Ramon; Bettonvil, Felix; Kenworthy, Matthew; de Mooij, Ernst; Otten, Gilles; ter Horst, Rik; le Poole, Rudolf

    2012-09-01

    The Multi-site All-sky CAmeRA (MASCARA) is an instrument concept consisting of several stations across the globe, with each station containing a battery of low-cost cameras to monitor the near-entire sky at each location. Once all stations have been installed, MASCARA will be able to provide nearly 24-hour coverage of the complete dark sky, down to magnitude 8, at sub-minute cadence. Its purpose is to find the brightest transiting exoplanet systems, expected in the V=4-8 magnitude range, which is currently not probed by space- or ground-based surveys. The bright, nearby transiting planet systems that MASCARA will discover will be the key targets for detailed planet atmosphere observations. We present studies on the initial design of a MASCARA station, including the camera housing, domes, and computer equipment, and on the photometric stability of low-cost cameras, showing that a precision of 0.3-1% per hour can readily be achieved. We plan to roll out the first MASCARA station before the end of 2013. A 5-station MASCARA can, within two years, discover up to a dozen of the brightest transiting planet systems in the sky.

  6. Upgraded cameras for the HESS imaging atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gérard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-François; Gräber, Tobias; Hinton, James; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, François

    2016-08-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes, sensitive to cosmic gamma rays of energies between 30 GeV and several tens of TeV. Four of them started operations in 2003 and their photomultiplier tube (PMT) cameras are currently undergoing a major upgrade, with the goals of improving the overall performance of the array and reducing the failure rate of the ageing systems. With the exception of the 960 PMTs, all components inside the camera have been replaced: these include the readout and trigger electronics, the power, ventilation and pneumatic systems and the control and data acquisition software. New designs and technical solutions have been introduced: the readout makes use of the NECTAr analog memory chip, which samples and stores the PMT signals and was developed for the Cherenkov Telescope Array (CTA). The control of all hardware subsystems is carried out by an FPGA coupled to an embedded ARM computer, a modular design which has proven to be very fast and reliable. The new camera software is based on modern C++ libraries such as Apache Thrift, ØMQ and Protocol buffers, offering very good performance, robustness, flexibility and ease of development. The first camera was upgraded in 2015, the other three cameras are foreseen to follow in fall 2016. We describe the design, the performance, the results of the tests and the lessons learned from the first upgraded H.E.S.S. camera.

  7. 78 FR 17939 - Announcement of Funding Awards; Capital Fund Safety and Security Grants; Fiscal Year 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... publishing the names, addresses, and amounts of the 18 awards made under the set aside in Appendix A to this... Security Camera Harrison Street, Oakland, CA Surveillance System 94612. including digital video recorders... Cameras, 50 Lincoln Plaza, Wilkes-Barre, Network Video PA 18702. Recorders, and Lighting. Ft. Worth...

  8. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

Embedded systems have been applied to many fields, including households and industrial sites. User-interface technology with a simple on-screen display has become increasingly common, user demands are growing, and the high penetration rate of the Internet opens up ever more fields of application, so demand for embedded systems continues to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on an embedded Linux system, real-time broadcasting of video images over the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and consecutive frames from the web camera are compared to measure a displacement vector, using a block-matching algorithm and an edge-detection algorithm for fast processing. The displacement vector then drives the pan/tilt motors through an RS-232 serial cable. The embedded board uses the S3C2410 MPU, based on Samsung's ARM920T core. An embedded Linux kernel was ported to the board and the root file system mounted; the stored images are sent to client PCs through a web browser, using the TCP/IP networking capabilities of Linux.
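A minimal sketch of the block-matching step described in this abstract, assuming grayscale frames stored as nested lists; the function names and the sum-of-absolute-differences cost are illustrative choices, not the paper's code:

```python
# Minimal block-matching sketch: find the displacement of a template
# block between two grayscale frames by minimizing the sum of absolute
# differences (SAD) over a +/- search window.

def sad(frame, top, left, block):
    """SAD between `block` and the same-sized region of `frame`
    whose top-left corner is (top, left)."""
    h, w = len(block), len(block[0])
    return sum(abs(frame[top + i][left + j] - block[i][j])
               for i in range(h) for j in range(w))

def match_block(prev_frame, next_frame, top, left, size, search):
    """Return (dy, dx) that best matches the size x size block at
    (top, left) of prev_frame inside next_frame."""
    block = [row[left:left + size] for row in prev_frame[top:top + size]]
    best, best_dydx = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + size > len(next_frame) or \
               l + size > len(next_frame[0]):
                continue  # candidate block would fall outside the frame
            cost = sad(next_frame, t, l, block)
            if best is None or cost < best:
                best, best_dydx = cost, (dy, dx)
    return best_dydx
```

The resulting (dy, dx) vector is what such a system would convert into pan/tilt motor commands over the serial link.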

  9. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

  10. Flow visualization by mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

Mobile smart phones have completely changed people's communication within the last ten years. However, these devices offer not only communication through different channels but also hardware and applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sports events or other fast processes. The article therefore explores the possibility of exploiting this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment with a free water jet was used to prove the concept, shed light on the achievable quality and identify bottlenecks by comparing the results obtained with a mobile phone camera against data taken by a high-speed camera suited for scientific experiments.
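The core arithmetic of such a simplistic PIV system is the conversion from an inter-frame pixel displacement to a physical velocity. A sketch, assuming the 240 Hz frame rate mentioned above and an illustrative image scale obtained from a calibration target (both numbers are examples, not from the article):

```python
# Convert a PIV particle displacement in pixels to a velocity in m/s:
# v = displacement * (metres per pixel) * frame rate.

def pixel_displacement_to_velocity(dx_px, dy_px, m_per_px, fps):
    """dx_px, dy_px: inter-frame displacement in pixels;
    m_per_px: image scale from calibration; fps: camera frame rate."""
    dt = 1.0 / fps                      # time between frames
    return (dx_px * m_per_px / dt, dy_px * m_per_px / dt)

# Example: a 4.8 px shift between frames at 240 Hz with 50 um per pixel.
vx, vy = pixel_displacement_to_velocity(4.8, 0.0, 50e-6, 240.0)
```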

  11. QWIP technology for both military and civilian applications

    NASA Astrophysics Data System (ADS)

    Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.

    2001-10-01

Advanced thermal imaging infrared cameras have been a cost-effective and reliable method of obtaining the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with the California Institute of Technology (Caltech), is currently manufacturing the QWIP-ChipTM, a 320 x 256 element, bound-to-quasibound QWIP FPA. The camera operates in the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multitasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 Watts. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent applications in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring and road-hazard detection systems. This presentation covers the current performance of the commercial QWIP cameras, conceptual platform systems, and the advanced image processing being developed for military remote sensing and civilian road-hazard monitoring applications.

  12. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera, which is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
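The abstract's point that sensor noise ruins lossless compression ratios is easy to demonstrate with a generic lossless coder; this sketch uses zlib on synthetic data as a stand-in (it is not the camera's algorithm):

```python
import random
import zlib

# Sensor noise destroys the redundancy lossless coders rely on,
# so the achievable compression ratio drops sharply.

def compression_ratio(data: bytes) -> float:
    """Original size divided by zlib-compressed size (level 9)."""
    return len(data) / len(zlib.compress(data, 9))

random.seed(0)
flat = bytes([128]) * 65536                              # noiseless flat field
noisy = bytes(128 + random.randint(-8, 8)                # same field plus
              for _ in range(65536))                     # per-pixel noise
```

On data like this, the noiseless field compresses by orders of magnitude while the noisy version barely halves in size.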

  13. Characterization and optimization for detector systems of IGRINS

    NASA Astrophysics Data System (ADS)

    Jeong, Ueejeong; Chun, Moo-Young; Oh, Jae Sok; Park, Chan; Yuk, In-Soo; Oh, Heeyoung; Kim, Kang-Min; Ko, Kyeong Yeon; Pavel, Michael D.; Yu, Young Sam; Jaffe, Daniel T.

    2014-07-01

IGRINS (Immersion GRating INfrared Spectrometer) is a high resolution wide-band infrared spectrograph developed by the Korea Astronomy and Space Science Institute (KASI) and the University of Texas at Austin (UT). This spectrograph has H-band and K-band science cameras and a slit viewing camera, all three of which use Teledyne's λc~2.5μm 2k×2k HgCdTe HAWAII-2RG CMOS detectors. The two spectrograph cameras employ science grade detectors, while the slit viewing camera includes an engineering grade detector. Teledyne's cryogenic SIDECAR ASIC boards and JADE2 USB interface cards were installed to control those detectors. We performed experiments to characterize and optimize the detector systems in the IGRINS cryostat. We present measurements and optimization of noise, dark current, and reference-level stability obtained under dark conditions. We also discuss well depth, linearity and conversion gain measurements obtained using an external light source.

  14. MS Lucid places samples in the TEHOF aboard the Spektr module

    NASA Image and Video Library

    1997-03-26

    STS079-S-082 (16-26 Sept. 1996) --- Cosmonaut guest researcher Shannon W. Lucid and Valeri G. Korzun, her Mir-22 commander, are pictured on the Spektr Module aboard Russia's Earth-orbiting Mir Space Station. Korzun was the third of four commanders that Lucid served with during her record-setting 188 consecutive days in space. Later, Lucid returned to Earth with her fourth commander-astronaut William F. Readdy-and five other NASA astronauts to complete the STS-79 mission. During the STS-79 mission, the crew used an IMAX camera to document activities aboard the space shuttle Atlantis and the various Mir modules. A hand-held version of the 65mm camera system accompanied the STS-79 crew into space in Atlantis' crew cabin. NASA has flown IMAX camera systems on many Shuttle missions, including a special cargo bay camera's coverage of other recent Shuttle-Mir rendezvous and/or docking missions.

  15. Evolution of the SOFIA tracking control system

    NASA Astrophysics Data System (ADS)

    Fiebig, Norbert; Jakob, Holger; Pfüller, Enrico; Röser, Hans-Peter; Wiedemann, Manuel; Wolf, Jürgen

    2014-07-01

The airborne observatory SOFIA (Stratospheric Observatory for Infrared Astronomy) is undergoing a modernization of its tracking system. This includes new, highly sensitive tracking cameras, control computers, filter wheels and other equipment, as well as a major redesign of the control software. The experiences along the migration path from an aged 19-inch VMEbus-based control system to modern industrial PCs, from the VxWorks real-time operating system to embedded Linux, and to a state-of-the-art software architecture are presented. Further, the concept of operating the new camera as a scientific instrument, in parallel to tracking, is presented.

  16. MonoSLAM: real-time single camera SLAM.

    PubMed

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
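The "general motion model for smooth camera movement" mentioned above is, in MonoSLAM-style filters, a constant-velocity prior inside an extended Kalman filter. A toy one-dimensional version of just the prediction step, for illustration only (the real system uses a full 6-DOF state):

```python
# Constant-velocity EKF prediction step in 1-D. State x = [position,
# velocity]; process noise Q models unknown accelerations, which is
# what makes the motion prior "smooth" rather than rigid.

def predict(x, P, dt, accel_var):
    """Propagate state and covariance: x' = F x,  P' = F P F^T + Q."""
    pos, vel = x
    x_new = [pos + vel * dt, vel]
    F = [[1.0, dt], [0.0, 1.0]]
    # Q for a piecewise-constant acceleration model
    Q = [[accel_var * dt**4 / 4, accel_var * dt**3 / 2],
         [accel_var * dt**3 / 2, accel_var * dt**2]]
    FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    P_new = [[sum(FP[i][k] * F[j][k] for k in range(2)) + Q[i][j]
              for j in range(2)] for i in range(2)]
    return x_new, P_new
```

Feature measurements would then shrink this predicted covariance in the EKF update step, which is omitted here.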

  17. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in all other systems. The electronics is truly portable, fitting in a standard boardcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP, an ideal use for this camera system, which exploits both of its advantages: portability and digital imaging.

  18. Automated Meteor Detection by All-Sky Digital Camera Systems

    NASA Astrophysics Data System (ADS)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
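A highly simplified sketch of the detection idea in this record: subtracting consecutive all-sky frames cancels the static star field, so a frame with many newly brightened pixels is a candidate for a meteor trail. Thresholds and names here are illustrative, not the authors' method:

```python
# Frame-differencing candidate detector for meteor light traces.
# Static stars appear in both frames and cancel; a moving trail
# shows up as pixels that brightened since the previous frame.

def moving_bright_pixels(prev, curr, thresh):
    """Count pixels that brightened by more than `thresh`."""
    return sum(1 for row_p, row_c in zip(prev, curr)
                 for p, c in zip(row_p, row_c) if c - p > thresh)

def is_candidate(prev, curr, thresh=20, min_pixels=5):
    """Flag the frame pair if enough pixels brightened at once."""
    return moving_bright_pixels(prev, curr, thresh) >= min_pixels
```

A real pipeline would follow this with shape analysis (a trail is a line segment, not a blob) before declaring a meteor.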

  19. NASA Johnson Space Center: Mini AERCam Testing with GSS6560

    NASA Technical Reports Server (NTRS)

    Cryant, Scott P.

    2004-01-01

This slide presentation reviews testing of the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) with the GSS6560 GPS/SBAS simulation system. It lists several GPS-based programs at NASA Johnson, including Shuttle testing of the GPS system and Space Integrated GPS/INS (SIGI) testing, as well as the standalone ISS SIGI test and testing of the SIGI for the Crew Return Vehicle. The Mini AERCam is a small, free-flying camera for remote inspections of the ISS; it uses precise relative navigation with differential carrier-phase GPS to provide situational awareness to operators. Closed-loop orbital testing of the Mini AERCam system, with and without the GSS6560 system, is reviewed.

  20. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
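Of the in-camera processing steps listed above, flat-field correction is the most standard; a sketch of the usual formulation follows (in Python for readability, though the camera implements it on the FPGA; names are illustrative):

```python
# Flat-field correction: remove per-pixel offset (dark frame) and
# gain (flat frame) non-uniformity from a raw sensor line.

def flat_field_correct(raw, dark, flat):
    """corrected[i] = (raw[i] - dark[i]) / (flat[i] - dark[i]) * gain,
    where gain normalizes to the mean flat-field response so the
    corrected image keeps its original brightness scale."""
    resp = [f - d for f, d in zip(flat, dark)]
    gain = sum(resp) / len(resp)
    return [(r - d) / (f - d) * gain
            for r, d, f in zip(raw, dark, flat)]
```

On the FPGA the division is typically replaced by a precomputed per-pixel multiplier so the correction runs at line rate.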

  1. Optical design of space cameras for automated rendezvous and docking systems

    NASA Astrophysics Data System (ADS)

    Zhu, X.

    2018-05-01

Visible cameras are essential components of a space automated rendezvous and docking (AR and D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. By comparison, space AR and D cameras, while not required to have extremely high resolution or color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for narrow and wide field-of-view (FOV), are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided because of the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed for exceptional stray-light performance and minimum lens flare, given the intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow-FOV (NFOV) lens and a wide-FOV (WFOV) lens for an AR and D visible camera system. The lenses were designed using the ZEMAX program; the stray-light performance and the lens baffles were simulated using the TracePro program. This paper discusses general requirements for space AR and D camera lenses and the specific measures taken for the lenses to meet the space environmental requirements.

  2. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 

  3. e2v CMOS and CCD sensors and systems for astronomy

    NASA Astrophysics Data System (ADS)

    Jorden, P. R.; Jerram, P. A.; Fryer, M.; Stefanov, K. D.

    2017-07-01

    e2v designs and manufactures a wide range of sensors for space and astronomy applications. This includes high performance CCDs for X-ray, visible and near-IR wavelengths. In this paper we illustrate the maturity of CMOS capability for these applications; examples are presented together with performance data. The majority of e2v sensors for these applications are back-thinned for highest spectral response and designed for very low read-out noise; the combination delivers high signal to noise ratio in association with a variety of formats and package designs. The growing e2v capability in delivery of sub-systems and cryogenic cameras is illustrated—including the 1.2 Giga-pixel J-PAS camera system.

  4. NASA's Optical Program on Ascension Island: Bringing MCAT to Life as the Eugene Stansbery-Meter Class Autonomous Telescope (ES-MCAT)

    NASA Astrophysics Data System (ADS)

    Lederer, S. M.; Hickson, P.; Cowardin, H. M.; Buckalew, B.; Frith, J.; Alliss, R.

In June 2015, the construction of the Meter Class Autonomous Telescope was completed and MCAT saw the light of the stars for the first time. In 2017, MCAT was newly dedicated as the Eugene Stansbery-MCAT telescope by NASA’s Orbital Debris Program Office (ODPO), in honour of his inspiration and dedication to this newest optical member of the NASA ODPO. Since that time, MCAT has viewed the skies with one engineering camera and two scientific cameras, and the ODPO optical team has begun the process of vetting the entire system. The full system vetting includes verification and validation of: (1) the hardware comprising the system (e.g. the telescopes and its instruments, the dome, weather systems, all-sky camera, FLIR cloud infrared camera, etc.), (2) the custom-written Observatory Control System (OCS) master software designed to autonomously control this complex system of instruments, each with its own control software, and (3) the custom written Orbital Debris Processing software for post-processing the data. ES-MCAT is now capable of autonomous observing to include Geosynchronous survey, TLE (Two-line element) tracking of individual catalogued debris at all orbital regimes (Low-Earth Orbit all the way to Geosynchronous (GEO) orbit), tracking at specified non-sidereal rates, as well as sidereal rates for proper calibration with standard stars. Ultimately, the data will be used for validation of NASA’s Orbital Debris Engineering Model, ORDEM, which aids in engineering designs of spacecraft that require knowledge of the orbital debris environment and long-term risks for collisions with Resident Space Objects (RSOs).

  5. NASA's Optical Program on Ascension Island: Bringing MCAT to Life as the Eugene Stansbery-Meter Class Autonomous Telescope (ES-MCAT)

    NASA Technical Reports Server (NTRS)

    Lederer, S. M.; Hickson, P.; Cowardin, H. M.; Buckalew, B.; Frith, J.; Alliss, R.

    2017-01-01

    In June 2015, the construction of the Meter Class Autonomous Telescope was completed and MCAT saw the light of the stars for the first time. In 2017, MCAT was newly dedicated as the Eugene Stansbery-MCAT telescope by NASA's Orbital Debris Program Office (ODPO), in honor of his inspiration and dedication to this newest optical member of the NASA ODPO. Since that time, MCAT has viewed the skies with one engineering camera and two scientific cameras, and the ODPO optical team has begun the process of vetting the entire system. The full system vetting includes verification and validation of: (1) the hardware comprising the system (e.g. the telescopes and its instruments, the dome, weather systems, all-sky camera, FLIR cloud infrared camera, etc.), (2) the custom-written Observatory Control System (OCS) master software designed to autonomously control this complex system of instruments, each with its own control software, and (3) the custom written Orbital Debris Processing software for post-processing the data. ES-MCAT is now capable of autonomous observing to include Geosynchronous survey, TLE (Two-line element) tracking of individual catalogued debris at all orbital regimes (Low-Earth Orbit all the way to Geosynchronous (GEO) orbit), tracking at specified non-sidereal rates, as well as sidereal rates for proper calibration with standard stars. Ultimately, the data will be used for validation of NASA's Orbital Debris Engineering Model, ORDEM, which aids in engineering designs of spacecraft that require knowledge of the orbital debris environment and long-term risks for collisions with Resident Space Objects (RSOs).

  6. The system analysis of light field information collection based on the light field imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-10-01

Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. Micro-array structures have been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the structures of the LFIAS commonly used in light field cameras in recent years and analyzes them based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system we call the "micro aperture array (MAA)", and analyzes it using the methods of information optics. The paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record both the amplitude and the phase information of the light field.

  7. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost uses.

  8. High-speed and ultrahigh-speed cinematographic recording techniques

    NASA Astrophysics Data System (ADS)

    Miquel, J. C.

    1980-12-01

A survey is presented of various high-speed and ultrahigh-speed cinematographic recording systems, covering frame rates from 100 to 14 million pictures per second (pps). Attention is given to the functional and operational characteristics of the cameras and to details of high-speed cinematography techniques, including image processing and illumination. A list of cameras (many of them French) available in 1980 is presented.

  9. PBF Reactor Building (PER620). Cubicle 10. Camera facing southeast. Loop ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

PBF Reactor Building (PER-620). Cubicle 10. Camera facing southeast. Loop pressurizer on right. Other equipment includes loop strainer, control valves, loop piping, pressurizer interchanger, and cleanup system cooler. High-density shielding brick walls. Photographer: Kirsh. Date: November 2, 1970. INEEL negative no. 70-4908 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  10. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
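The master-slave mapping described in this abstract can be sketched as one virtual device fanning commands out to many per-camera proxies and reporting a single aggregate state; this is an illustrative sketch of the pattern only, not RTS2 code (class and method names are invented):

```python
# Master-slave virtual camera: the control framework talks to one
# device; the virtual camera forwards each command to all real-camera
# proxies and aggregates their states.

class CameraProxy:
    """Stand-in for the driver of one real CCD camera."""
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.state = "idle"
    def expose(self, seconds):
        self.state = "exposing"   # a real proxy would command hardware
        return True

class VirtualCamera:
    """Single device seen by the observatory control framework."""
    def __init__(self, n_cameras):
        self.slaves = [CameraProxy(i) for i in range(n_cameras)]
    def expose(self, seconds):
        # Succeed only if every slave accepted the command.
        return all(s.expose(seconds) for s in self.slaves)
    def state(self):
        states = {s.state for s in self.slaves}
        return states.pop() if len(states) == 1 else "mixed"

vcam = VirtualCamera(32)   # one virtual device for 32 CCDs, as in LAMOST
```

The aggregate "mixed" state is the kind of signal that lets the master detect a misbehaving slave and improve robustness.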

  11. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. 
The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.
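The photon-counting processing that such an intensified detector performs on each frame reduces, at its core, to thresholding the frame and localizing each scintillation flash to sub-pixel precision. A minimal sketch of one intensity-weighted centroid, with illustrative names (the actual iQID algorithms are more elaborate):

```python
# Photon-counting centroid: locate a single above-threshold flash in a
# frame by its intensity-weighted centre of mass, giving sub-pixel
# resolution from coarse pixels.

def centroid(frame, thresh):
    """Return the (row, col) centroid of all pixels above `thresh`,
    weighted by intensity, or None if no pixel passes."""
    total = rsum = csum = 0.0
    for r, row in enumerate(frame):
        for c, v in enumerate(row):
            if v > thresh:
                total += v
                rsum += r * v
                csum += c * v
    if total == 0.0:
        return None
    return rsum / total, csum / total
```

Listing centroids per frame, instead of storing whole frames, is also what makes the data rate of a photon-counting camera manageable.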

  12. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
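A stripped-down illustration of the final step in this record: given 3-D positions of the same features at two times (from camera images plus range data), the machine's change in position and a VO-based velocity follow from the feature displacements. This sketch assumes pure translation with no rotation between frames, a simplification the patent does not make:

```python
# Estimate sensor translation and velocity from matched 3-D feature
# positions at two times. The scene appears to move opposite to the
# sensor, hence the negation of the mean feature displacement.

def estimate_motion(first_pts, second_pts, dt):
    """first_pts, second_pts: lists of matched (x, y, z) features;
    dt: time between the two camera images."""
    n = len(first_pts)
    disp = [sum(b[i] - a[i] for a, b in zip(first_pts, second_pts)) / n
            for i in range(3)]
    translation = tuple(-d for d in disp)
    velocity = tuple(t / dt for t in translation)
    return translation, velocity
```

Handling rotation as well would require a full rigid-body fit (e.g. a least-squares alignment) over the matched points.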

  13. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    A digital elevation map of the Earth, which affects space camera imaging, was prepared, and its influence on imaging was analyzed. Starting from the image-motion velocity matching error required by the TDI CCD integration stages, a Monte Carlo statistical experiment was used to compute the distribution histogram of the Earth's elevation within an image-motion compensation model that accounts for changes in satellite attitude, orbital angular rate, latitude, longitude, and orbital inclination. Surface elevation data were then read from SRTM, and an elevation map produced for aerospace electronic cameras was compressed and spliced. Elevation values are retrieved from flash memory according to the latitude and longitude of the imaging point; when a query falls between two stored samples, linear interpolation is used, which adequately follows the relief of rugged mountains and hills. Finally, a drift-angle test rig and the camera controller were used to characterize drift-angle errors, and a TDI CCD camera simulation system based on an object-point-to-image-point model was used to analyze the imaging MTF and a cross-correlation similarity measure; the simulation adds the accumulated charge whenever the TDI CCD image motion exceeds the corresponding pixel offset horizontally or vertically, emulating imaging under varying satellite attitude stability. The approach is practical: it keeps the camera's memory footprint under control while meeting the image-motion velocity matching precision required by the TDI CCD camera.
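
    The elevation lookup with linear interpolation between stored samples can be sketched as follows. This is a 1-D simplification (latitude only) of the flight table, which is indexed by both latitude and longitude; the grid values are illustrative, not SRTM data.

```python
import numpy as np

def lookup_elevation(lat_grid, elev_grid, lat):
    """Return elevation at `lat`, linearly interpolating between stored samples.

    `lat_grid` holds the latitudes of stored elevation samples (ascending) and
    `elev_grid` the matching elevations; a query falling between two samples
    is blended linearly, as the abstract describes for rugged terrain.
    """
    i = np.searchsorted(lat_grid, lat)
    i = np.clip(i, 1, len(lat_grid) - 1)
    t = (lat - lat_grid[i - 1]) / (lat_grid[i] - lat_grid[i - 1])
    return (1 - t) * elev_grid[i - 1] + t * elev_grid[i]

lats = np.array([30.0, 30.1, 30.2])      # stored sample latitudes (deg)
elev = np.array([500.0, 700.0, 650.0])   # stored elevations (m)
print(lookup_elevation(lats, elev, 30.05))  # midway between samples -> 600.0
```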

  14. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently at the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object.
These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors that match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator's internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator, since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator performing each of many types of tasks, in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces, and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  16. Texture-based measurement of spatial frequency response using the dead leaves target: extensions, and application to real camera systems

    NASA Astrophysics Data System (ADS)

    McElvain, Jon; Campbell, Scott P.; Miller, Jonathan; Jin, Elaine W.

    2010-01-01

    The dead leaves model was recently introduced as a method for measuring the spatial frequency response (SFR) of camera systems. The target consists of a series of overlapping opaque circles with a uniform gray level distribution and radii distributed as r^-3. Unlike the traditional knife-edge target, the SFR derived from the dead leaves target will be penalized for systems that employ aggressive noise reduction. Initial studies have shown that the dead leaves SFR correlates well with sharpness/texture blur preference, and thus the target can potentially be used as a surrogate for more expensive subjective image quality evaluations. In this paper, the dead leaves target is analyzed for measurement of camera system spatial frequency response. It was determined that the power spectral density (PSD) of the ideal dead leaves target does not exhibit simple power law dependence, and scale invariance is only loosely obeyed. An extension to the ideal dead leaves PSD model is proposed, including a correction term to account for system noise. With this extended model, the SFR of several camera systems with a variety of formats, ranging from 3 to 10 megapixels, was measured; the effects of handshake motion blur are also analyzed via the dead leaves target.
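
    A toy rendering of such a target can clarify its construction: disk radii are drawn from an r^-3 power law by inverse-CDF sampling, gray levels are uniform random, and later disks occlude earlier ones. Sizes and counts below are illustrative, not the standardized chart parameters.

```python
import numpy as np

def dead_leaves(size=128, n=1500, r_min=2.0, r_max=30.0, seed=0):
    """Render a toy dead-leaves chart: overlapping opaque disks whose radii
    follow an r^-3 power law and whose gray levels are uniform random."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    # Inverse-CDF sampling of p(r) ~ r^-3 truncated to [r_min, r_max].
    radii = (r_min**-2 - u * (r_min**-2 - r_max**-2)) ** -0.5
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    for r, cx, cy, g in zip(radii, rng.random(n) * size,
                            rng.random(n) * size, rng.random(n)):
        # Each new opaque disk paints over (occludes) everything beneath it.
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = g
    return img, radii

img, radii = dead_leaves()
print(img.shape)
```

Because the texture is random rather than an edge, denoising that smears texture lowers the measured SFR, which is exactly the property the abstract highlights.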

  17. Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization

    NASA Astrophysics Data System (ADS)

    Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.

    2007-03-01

    We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
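
    The paper's evaluation of how an inaccurate pose displaces superimposed navigation targets can be illustrated with a minimal pinhole-projection sketch. Focal length, principal point, target position, and the magnitude of the pose error below are all illustrative assumptions, not values from the study.

```python
import numpy as np

def project(point_cam, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D point in camera coordinates to pixels."""
    x, y, z = point_cam
    return np.array([f * x / z + cx, f * y / z + cy])

def rot_y(deg):
    """Rotation about the camera y-axis by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

# A navigation target 80 mm in front of the endoscope.
target = np.array([10.0, -5.0, 80.0])

true_px = project(target)
# Pose estimate off by a 1-degree rotation and 1 mm of translation:
bad_px = project(rot_y(1.0) @ target + np.array([1.0, 0.0, 0.0]))

displacement = np.linalg.norm(bad_px - true_px)
print(f"overlay displacement: {displacement:.1f} px")
```

Even sub-degree pose errors translate into clearly visible overlay shifts at endoscopic working distances, which is why the virtual evaluation environment models navigation-aid measurement errors explicitly.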

  18. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be realized to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on Visual Studio 2010. Experimental results show that the system realizes simultaneous acquisition and display from both cameras.
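
    The coordinate transformation with sub-pixel interpolation can be sketched by resampling a ring/sector (log-polar-like) image onto a square raster with bilinear weights. The real sensor's pixel distribution is not specified in the abstract, so the rings-by-sectors layout here is an assumed simplification; the angular seam is clamped rather than wrapped, for brevity.

```python
import numpy as np

def bilinear(img, y, x):
    """Sample `img` at fractional (y, x) with sub-pixel bilinear weights."""
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x1] +
            dy * (1 - dx) * img[y1, x0] + dy * dx * img[y1, x1])

def polar_to_cartesian(polar, out_size=64):
    """Resample a (rings x sectors) retina-like image onto a square raster."""
    rings, sectors = polar.shape
    out = np.zeros((out_size, out_size))
    c = (out_size - 1) / 2.0
    for yy in range(out_size):
        for xx in range(out_size):
            r = np.hypot(yy - c, xx - c) / c * (rings - 1)
            if r <= rings - 1:
                th = (np.arctan2(yy - c, xx - c) % (2 * np.pi)) / (2 * np.pi)
                out[yy, xx] = bilinear(polar, r, th * (sectors - 1))
    return out

# Ring index encoded as brightness: resampling should yield a radial ramp.
polar = np.tile(np.linspace(0.0, 1.0, 32)[:, None], (1, 64))
cart = polar_to_cartesian(polar)
print(cart.shape)
```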

  19. Concept of electro-optical sensor module for sniper detection system

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz

    2010-10-01

    The paper presents an initial concept of the electro-optical sensor unit for sniper detection purposes. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is a multi-sensor sniper and shot detection system. As part of a larger system, it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capabilities. Generally, acoustic (or radar) systems used for shot detection offer only "after-the-shot" information and cannot prevent an enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspicious objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters such as focal plane array size and type, focal length, and aperture were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image that is easier for a human operator to comprehend. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot-recording sequences are presented.

  20. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  1. REVIEW OF DEVELOPMENTS IN SPACE REMOTE SENSING FOR MONITORING RESOURCES.

    USGS Publications Warehouse

    Watkins, Allen H.; Lauer, D.T.; Bailey, G.B.; Moore, D.G.; Rohde, W.G.

    1984-01-01

    Space remote sensing systems are compared for suitability in assessing and monitoring the Earth's renewable resources. Systems reviewed include the Landsat Thematic Mapper (TM), the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR), the French Systeme Probatoire d'Observation de la Terre (SPOT), the German Shuttle Pallet Satellite (SPAS) Modular Optoelectronic Multispectral Scanner (MOMS), the European Space Agency (ESA) Spacelab Metric Camera, the National Aeronautics and Space Administration (NASA) Large Format Camera (LFC) and Shuttle Imaging Radar (SIR-A and -B), the Russian Meteor satellite BIK-E and fragment experiments and MKF-6M and KATE-140 camera systems, the ESA Earth Resources Satellite (ERS-1), the Japanese Marine Observation Satellite (MOS-1) and Earth Resources Satellite (JERS-1), the Canadian Radarsat, the Indian Resources Satellite (IRS), and systems proposed or planned by China, Brazil, Indonesia, and others. Also reviewed are the concepts for a 6-channel Shuttle Imaging Spectroradiometer, a 128-channel Shuttle Imaging Spectrometer Experiment (SISEX), and the U.S. Mapsat.

  2. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information-interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by the video processing unit through the Camera Link interface. The processed video signals are input to the system output board and displayed on the monitor. Current experiments show that the system achieves high-quality video conversion with a minimal board size.

  3. A surgical navigation system for non-contact diffuse optical tomography and intraoperative cone-beam CT

    NASA Astrophysics Data System (ADS)

    Daly, Michael J.; Muhanna, Nidal; Chan, Harley; Wilson, Brian C.; Irish, Jonathan C.; Jaffray, David A.

    2014-02-01

    A freehand, non-contact diffuse optical tomography (DOT) system has been developed for multimodal imaging with intraoperative cone-beam CT (CBCT) during minimally-invasive cancer surgery. The DOT system is configured for near-infrared fluorescence imaging with indocyanine green (ICG) using a collimated 780 nm laser diode and a near-infrared CCD camera (PCO Pixelfly USB). Depending on the intended surgical application, the camera is coupled to either a rigid 10 mm diameter endoscope (Karl Storz) or a 25 mm focal length lens (Edmund Optics). A prototype flat-panel CBCT C-arm (Siemens Healthcare) acquires low-dose 3D images with sub-mm spatial resolution. A 3D mesh is extracted from CBCT for finite-element DOT implementation in NIRFAST (Dartmouth College), with the capability for soft/hard imaging priors (e.g., segmented lymph nodes). A stereoscopic optical camera (NDI Polaris) provides real-time 6D localization of reflective spheres mounted to the laser and camera. Camera calibration combined with tracking data is used to estimate intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) lens parameters. Source/detector boundary data is computed from the tracked laser/camera positions using radiometry models. Target registration errors (TRE) between real and projected boundary points are ~1-2 mm for typical acquisition geometries. Pre-clinical studies using tissue phantoms are presented to characterize 3D imaging performance. This translational research system is under investigation for clinical applications in head-and-neck surgery including oral cavity tumour resection, lymph node mapping, and free-flap perforator assessment.

  4. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
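
    The downstream processing of a frame-straddled pair is classically a cross-correlation of interrogation windows: the peak of the correlation gives the particle displacement between the two laser pulses. The FFT-based sketch below is a generic illustration of that step, not the NASA PIVPROC code.

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Estimate the integer-pixel particle displacement between two frames
    of an interrogation window via FFT-based circular cross-correlation."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed displacements.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

# Synthetic particle image and a copy shifted 3 px down and 2 px right.
rng = np.random.default_rng(1)
frame_a = (rng.random((64, 64)) > 0.97).astype(float)
frame_b = np.roll(frame_a, (3, 2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))  # -> (3, 2)
```

Dividing the recovered displacement by the microsecond-scale inter-pulse delay yields the local flow velocity.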

  5. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

    The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods are presented for estimating system error prior to construction, followed by calibration and testing of the resulting camera system. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental set-up, then explore how the system error changes with different cameras, environmental set-ups, and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated.
The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full-scale implementation.
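
    The empirical line method (ELM) at the heart of that simulation is a per-band linear map from at-sensor digital numbers to surface reflectance, fitted on in-scene calibration targets of known reflectance. A minimal single-band sketch, with illustrative target values:

```python
import numpy as np

def elm_fit(target_dn, target_reflectance):
    """Fit the ELM gain/offset for one band: reflectance ~= g * DN + b."""
    g, b = np.polyfit(target_dn, target_reflectance, 1)
    return g, b

def elm_apply(dn, g, b):
    """Convert at-sensor digital numbers to surface reflectance."""
    return g * dn + b

# Two calibration targets of known reflectance seen by the camera (one band).
dn = np.array([20.0, 180.0])    # measured digital numbers on the targets
refl = np.array([0.05, 0.60])   # known target reflectances
g, b = elm_fit(dn, refl)

scene_dn = np.array([20.0, 100.0, 180.0])
print(elm_apply(scene_dn, g, b))  # -> [0.05, 0.325, 0.6]
```

With more than two targets, the same least-squares fit averages down target measurement noise, which is why the number of calibration targets matters in the simulation.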

  6. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision"), camera specifications differed from those of present-day system cameras, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. Recently, however, the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated into camera fabrication, making it possible to build HDTV cameras by methods similar to those for the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanical parts, and software for both HDTV cameras and cameras that operate in the present system.

  7. Condenser for photolithography system

    DOEpatents

    Sweatt, William C.

    2004-03-02

    A condenser for a photolithography system, in which a mask image from a mask is projected onto a wafer through a camera having an entrance pupil, includes a source of propagating radiation, a first mirror illuminated by the radiation, a mirror array illuminated by the radiation reflected from said first mirror, and a second mirror illuminated by the radiation reflected from the array. The mirror array includes a plurality of micromirrors. Each of the micromirrors is selectively actuatable independently of each other. The first mirror and the second mirror are disposed such that the source is imaged onto a plane of the mask and the mirror array is imaged into the entrance pupil of the camera.

  8. Evaluation of S190A radiometric exposure test data

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.; Goodding, R. A.

    1974-01-01

    The S190A preflight radiometric exposure test data, generated as part of preflight and system testing of KM-002 Sequence 29 on flight camera S/N 002, were analyzed. The purpose of the analysis was to determine camera system transmission using available data, which included: (1) films exposed to a calibrated light source subject; (2) filter transmission data; (3) calibrated light source data; (4) density vs. log10 exposure curves for the films; and (5) spectral sensitometric data for the films. The procedure used is outlined, and includes the data and a transmission matrix as a function of field position for nine measured points on each station-film-filter-aperture-shutter speed combination.

  9. Concept design of an 80-dual polarization element cryogenic phased array camera for the Arecibo Radio Telescope

    NASA Astrophysics Data System (ADS)

    Cortes-Medellin, German; Parshley, Stephen; Campbell, Donald B.; Warnick, Karl F.; Jeffs, Brian D.; Ganesh, Rajagopalan

    2016-08-01

    This paper presents the current concept design for ALPACA (Advanced L-Band Phased Array Camera for Arecibo), an L-band cryogenic phased-array instrument proposed for the 305 m radio telescope at Arecibo. It includes a cryogenically cooled front end with 160 low-noise amplifiers, RF-over-fiber signal transport, and a digital beamformer with an instantaneous bandwidth of 312.5 MHz per channel. The camera will digitally form 40 simultaneous beams within the available field of view of the Arecibo telescope optics, with an expected system temperature goal of 30 K.
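
    Digital beam forming of the kind described can be illustrated with a narrowband delay-and-sum beamformer on a uniform linear array. This is a generic sketch with an assumed array geometry, not ALPACA's channelized dual-polarization beamformer.

```python
import numpy as np

def steering_vector(n_elem, spacing_wl, theta_rad):
    """Phase weights for a uniform linear array; spacing in wavelengths."""
    k = 2 * np.pi * spacing_wl
    return np.exp(1j * k * np.arange(n_elem) * np.sin(theta_rad))

def beam_power(snapshots, theta_rad, spacing_wl=0.5):
    """Power of the delay-and-sum beam steered to angle `theta_rad`."""
    w = steering_vector(snapshots.shape[0], spacing_wl, theta_rad)
    y = w.conj() @ snapshots / snapshots.shape[0]   # coherent sum
    return float(np.mean(np.abs(y) ** 2))

# Simulate a point source at +10 degrees hitting a 16-element array.
rng = np.random.default_rng(0)
n_elem, n_snap = 16, 256
src = steering_vector(n_elem, 0.5, np.radians(10.0))[:, None] * (
    rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = 0.1 * (rng.standard_normal((n_elem, n_snap))
               + 1j * rng.standard_normal((n_elem, n_snap)))
snapshots = src + noise

angles = np.radians(np.arange(-30, 31))
powers = [beam_power(snapshots, a) for a in angles]
print(np.degrees(angles[int(np.argmax(powers))]))  # peak near +10 deg
```

Forming 40 simultaneous beams amounts to applying 40 such weight vectors to the same element data streams in parallel.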

  10. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    PubMed

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in rapid on-site detection; for instance, the CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From these experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
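
    Once a 3D spectral cube (height x width x bands) is assembled, analysis for plant reflectance typically reduces to band math. The NDVI sketch below is one standard example of such analysis; the band centers and reflectance values are illustrative, not measurements from the paper.

```python
import numpy as np

# Toy spectral cube: height x width x bands, with band centres in nm.
wavelengths = np.array([450.0, 550.0, 650.0, 800.0])
cube = np.zeros((2, 2, 4))
cube[..., 2] = 0.08   # red reflectance, vegetation-like
cube[..., 3] = 0.50   # near-infrared reflectance

def ndvi(cube, wavelengths, red_nm=650.0, nir_nm=800.0):
    """NDVI = (NIR - red) / (NIR + red), taken from the cube bands nearest
    the requested wavelengths."""
    r = cube[..., np.argmin(np.abs(wavelengths - red_nm))]
    n = cube[..., np.argmin(np.abs(wavelengths - nir_nm))]
    return (n - r) / (n + r)

print(ndvi(cube, wavelengths))  # ~0.724 everywhere in this toy cube
```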

  11. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person-tracking system must not only perform visual analysis robustly, but also be small, cheap, and relatively low-power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
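
    The core geometry that gives a stereo camera precise per-person 3-D locations is disparity-based triangulation on a rectified pair. The sketch below uses illustrative calibration values and is a generic textbook back-projection, not TYZX's implementation.

```python
import numpy as np

def stereo_xyz(u, v, disparity, f=500.0, baseline=0.12, cx=320.0, cy=240.0):
    """Back-project a pixel with disparity (rectified stereo pair) to camera
    XYZ in metres. f, baseline, cx, cy are illustrative calibration values."""
    z = f * baseline / disparity      # depth from disparity
    x = (u - cx) * z / f              # lateral offset
    y = (v - cy) * z / f              # vertical offset
    return np.array([x, y, z])

# A head 3 m in front of the camera: expected disparity f*B/Z = 20 px.
print(stereo_xyz(420.0, 240.0, 20.0))  # -> [0.6, 0.0, 3.0]
```

Because depth varies inversely with disparity, range precision degrades with distance, one reason multiple cameras are tiled to cover a large operational footprint.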

  12. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    PubMed Central

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes. PMID:22319349

  13. A refrigerated web camera for photogrammetric video measurement inside biomass boilers and combustion analysis.

    PubMed

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  14. Mini AERCam: A Free-Flying Robot for Space Inspection

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven

    2001-01-01

    The NASA Johnson Space Center Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a free-flying camera system for remote viewing and inspection of human spacecraft. The AERCam project team is currently developing a miniaturized version of AERCam known as Mini AERCam, a spherical nanosatellite 7.5 inches in diameter. Mini AERCam development builds on the success of AERCam Sprint, a 1997 Space Shuttle flight experiment, by integrating new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving these productivity-enhancing capabilities in a smaller package depends on aggressive component miniaturization. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for laboratory demonstration on an air-bearing table. A pilot-in-the-loop, hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides on-orbit views of the Space Shuttle and International Space Station unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by space-walking crewmembers.

  15. The exploration of outer space with cameras: A history of the NASA unmanned spacecraft missions

    NASA Astrophysics Data System (ADS)

    Mirabito, M. M.

    The use of television cameras and other video imaging devices aboard unmanned spacecraft to explore the solar system's planetary bodies is chronicled. Attention is given to the missions and the imaging devices, beginning with the Ranger 7 moon mission, which featured the first successfully operated electro-optical subsystem: six television cameras with vidicon image sensors. NASA established the Deep Space Network, a system of parabolic ground-based antennas, to receive signals from spacecraft travelling farther than 16,000 km into space. The image processing and enhancement techniques used to convert spacecraft data transmissions into black-and-white and color photographs are described, together with the technological requirements that drove the development of the various systems. Terrestrial applications of the planetary imaging systems are explored, including medical and educational uses. Finally, the implementation and functional characteristics of CCDs are detailed, noting their installation on the Space Telescope.

  16. A digital underwater video camera system for aquatic research in regulated rivers

    USGS Publications Warehouse

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m³/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers who work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  17. Fast noninvasive eye-tracking and eye-gaze determination for biomedical and remote monitoring applications

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.

    2004-04-01

    Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture are used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field-programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low-contrast conditions.

  18. Hyperspectral imaging for food processing automation

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.

    2002-11-01

    This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, and its potential application to real-time, on-line processing of poultry for automated safety inspection. The hyperspectral imaging system included a line-scan camera with a prism-grating-prism spectrograph, fiber-optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically a band ratio of dual-wavelength (565/517) images followed by thresholding, were effective in identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system including a common-aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated by the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time required for the multispectral images captured by the common-aperture camera was approximately 251 ms, or 3.99 frames/s. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false-positive spots that cause system errors were also detected.
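The dual-wavelength band-ratio test described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the threshold value and the toy reflectance images are assumptions, and only the 565/517 ratio-then-threshold structure comes from the abstract.

```python
import numpy as np

def band_ratio_mask(band_565, band_517, threshold=1.05, eps=1e-6):
    """Flag pixels whose 565 nm / 517 nm band ratio exceeds a threshold.

    threshold is an assumed illustrative value; eps avoids division by zero.
    """
    ratio = band_565 / (band_517 + eps)
    return ratio > threshold

# Toy 3x3 "band images": the center pixel has elevated 565 nm reflectance,
# mimicking a contaminated spot on an otherwise uniform carcass surface.
b565 = np.array([[0.40, 0.41, 0.40],
                 [0.40, 0.55, 0.41],
                 [0.39, 0.40, 0.40]])
b517 = np.full((3, 3), 0.40)

mask = band_ratio_mask(b565, b517)
print(int(mask.sum()))  # number of flagged pixels
```

In a real pipeline the same ratio image would be computed per scan line from the two trim-filtered camera channels, with the threshold tuned to trade detection rate against the false positives the abstract mentions.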

  19. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera to propose a low-cost method of stereo imaging. First, from the principles of geometrical optics, we derive the relationship between the prism single-camera system and a dual-camera system; from the principles of binocular vision, we derive the relationship between binocular viewing and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision, and obtain the positional arrangement of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active shutter stereo glasses, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various ways the eyes observe a scene. A stereo imaging system designed by the proposed method can faithfully recover the 3-D shape of the photographed object.

  20. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

    2008-11-01

    Visualization of explosion phenomena is very important and essential for evaluating the performance of explosive effects. The phenomena, however, generate blast waves and fragments from their casings, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used for tests in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. It was confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualization at angles that would be unachievable under normal circumstances.

  1. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and equipped with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  2. MS Lucid and Blaha with MGBX aboard the Mir space station Priroda module

    NASA Image and Video Library

    1997-03-26

    STS079-S-092 (16-26 Sept. 1996) --- Astronauts Shannon W. Lucid and John E. Blaha work at a microgravity glove box on the Priroda Module aboard Russia's Mir Space Station complex. Blaha, who flew into Earth-orbit with the STS-79 crew, and Lucid are the first participants in a series of ongoing exchanges of NASA astronauts serving time as cosmonaut guest researchers onboard Mir. Lucid went on to spend a total of 188 days in space before returning to Earth with the STS-79 crew. During the STS-79 mission, the crew used an IMAX camera to document activities aboard the Space Shuttle Atlantis and the various Mir modules, with the cooperation of the Russian Space Agency (RSA). A hand-held version of the 65mm camera system accompanied the STS-79 crew into space in Atlantis' crew cabin. NASA has flown IMAX camera systems on many Shuttle missions, including a special cargo bay camera's coverage of other recent Shuttle-Mir rendezvous and/or docking missions.

  3. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbiter Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera optical axis, such pointing information is made available.

  4. Dynamic edge warping - An experimental system for recovering disparity maps in weakly constrained systems

    NASA Technical Reports Server (NTRS)

    Boyer, K. L.; Wuescher, D. M.; Sarkar, S.

    1991-01-01

    Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system that includes structural stereopsis on the front end and robust photogrammetric estimation on the back end for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and in online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.

  5. A portable high-speed camera system for vocal fold examinations.

    PubMed

    Hertegård, Stellan; Larsson, Hans

    2014-11-01

    In this article, we present a new portable low-cost system for high-speed examinations of the vocal folds. Analysis of glottal vibratory parameters from the high-speed recordings is compared with videostroboscopic recordings. The high-speed system is built around a Fastec 1 monochrome camera, which is used with newly developed software, High-Speed Studio (HSS). The HSS has options for video/image recording, contains a database, and has a set of analysis options. The Fastec/HSS system has been used clinically since 2011 in more than 2000 patient examinations and recordings. The Fastec 1 camera has sufficient time resolution (≥4000 frames/s) and light sensitivity (ISO 3200) to produce images for detailed analyses of parameters pertinent to vocal fold function. The camera can be used with both rigid and flexible endoscopes. The HSS software includes options for analyses of glottal vibrations, such as kymogram, phase asymmetry, glottal area variation, open and closed phase, and angle of vocal fold abduction. It can also be used for separate analysis of the left and right vocal fold movements, including maximum speed during opening and closing, a parameter possibly related to vocal fold elasticity. A blinded analysis of 32 patients with various voice disorders examined with both the Fastec/HSS system and videostroboscopy showed that the high-speed recordings were significantly better for the analysis of glottal parameters (eg, mucosal wave and vibration asymmetry). The monochrome high-speed system can be used in daily clinical work within normal clinical time limits for patient examinations. A detailed analysis can be made of voice disorders and laryngeal pathology at a relatively low cost. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  6. Multispectral imaging system for contaminant detection

    NASA Technical Reports Server (NTRS)

    Poole, Gavin H. (Inventor)

    2003-01-01

    An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.

  7. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, the traffic sign recognition is developed in an originally proposed dual-focal active camera system, in which a telephoto camera serves as an assistant to a wide-angle camera. The telephoto camera can capture a high-resolution image of an object of interest within the field of view of the wide-angle camera, providing enough information for recognition when the traffic sign appears at too low a resolution in the wide-angle image. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide-angle camera and the telephoto camera. In addition, to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with a cascade structure is trained and used to locate traffic signs at low resolution in the image from the wide-angle camera. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide-angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
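The abstract does not specify the authors' lighting-invariant color transformation. As a hedged illustration of the general idea, normalized rg chromaticity is one classic transform that is invariant to uniform changes in illumination intensity, because scaling (R, G, B) by a common factor cancels in the ratios; it is not necessarily the transform proposed in this paper.

```python
def rg_chromaticity(r, g, b):
    """Map an RGB triple to (r, g) chromaticity, which is unchanged when
    the whole triple is scaled by a constant illumination factor."""
    s = r + g + b
    if s == 0:
        return (0.0, 0.0)  # black pixel: chromaticity undefined, return origin
    return (r / s, g / s)

# The same red sign patch seen in bright light and at half the intensity:
bright = rg_chromaticity(200, 40, 40)
dark = rg_chromaticity(100, 20, 20)
print(bright == dark)  # the chromaticity coordinates coincide
```

A detector trained on such chromaticity values sees the same feature for a sign at noon and at dusk, which is the property the abstract's transform is designed to provide.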

  8. Road Weather Systems [SD .WMV (720x480/29fps/25.2 MB)]

    DOT National Transportation Integrated Search

    2009-01-01

    Iowa's road weather information system at work. The Iowa DOT will install new sensors and upgrades at most road weather information system (RWIS) sites. These include: color cameras, new precipitation sensors, new speed sensors, revised weathervi...

  9. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface for serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
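The core operation the abstract describes, finding the negative peak in a one-line intensity profile, reduces to locating the darkest pixel in the linear image. The sketch below is my illustrative reconstruction, not NASA's circuit logic; the toy profile values are assumptions.

```python
def shock_location(profile):
    """Return the pixel index of the negative peak (minimum intensity)
    in a linear image, i.e. where the shock shadowgraph falls."""
    return min(range(len(profile)), key=lambda i: profile[i])

# Toy profile: a bright laser sheet with a dark dip where the shock
# refracts the light away from the sensor.
line = [200, 201, 199, 150, 90, 60, 95, 160, 198, 200]
print(shock_location(line))
```

The actual smart camera performs this search in hardware at over 1,000 frames per second and emits the resulting pixel index over a serial, parallel, or analog interface.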

  10. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 µs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
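Two of the ideas in this abstract are easy to make concrete: precision defined as the standard deviation of repeated single-shot velocity measurements, and row-wise digital binning of groups of 8 pixel rows in post-processing. The sketch below uses synthetic data, not FLEET measurements; the array sizes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 400 synthetic single-shot velocity measurements (m/s) around a 10 m/s
# mean with 0.5 m/s shot-to-shot scatter; precision = their std deviation.
velocities = 10.0 + 0.5 * rng.standard_normal(400)
precision = velocities.std()

# Row-wise digital binning: sum each group of 8 adjacent rows of a
# synthetic (64, 128) signal image, collapsing 64 rows to 8 super-rows.
image = rng.random((64, 128))
binned = image.reshape(64 // 8, 8, 128).sum(axis=1)

print(round(float(precision), 2), binned.shape)
```

Summing 8 rows grows the signal 8-fold while uncorrelated noise grows only by a factor of √8, which is why binning helped most for the low-signal-to-noise un-intensified cameras.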

  11. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field of view between environment and interior cameras, and marked reference points are often unavailable in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other using images from both image sets that show the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.

  12. Passive detection of vehicle loading

    NASA Astrophysics Data System (ADS)

    McKay, Troy R.; Salvaggio, Carl; Faulring, Jason W.; Salvaggio, Philip S.; McKeown, Donald M.; Garrett, Alfred J.; Coleman, David H.; Koffman, Larry D.

    2012-01-01

    The Digital Imaging and Remote Sensing Laboratory (DIRS) at the Rochester Institute of Technology, along with the Savannah River National Laboratory, is investigating passive methods to quantify vehicle loading. The research described in this paper investigates multiple vehicle indicators, including brake temperature, tire temperature, engine temperature, acceleration and deceleration rates, engine acoustics, suspension response, tire deformation, and vibrational response. Our investigation into these variables includes building and implementing a sensing system for data collection as well as multiple full-scale vehicle tests. The sensing system includes infrared video cameras, triaxial accelerometers, microphones, video cameras, and thermocouples. The full-scale testing includes both a medium-size dump truck and a tractor-trailer truck on closed courses with loads spanning the full range of each vehicle's capacity. Statistical analysis of the collected data is used to determine the effectiveness of each of the indicators for characterizing the weight of a vehicle. The final sensing system will monitor multiple load indicators and combine the results to achieve a more accurate measurement than any of the indicators could provide alone.

  13. Development, characterization, and modeling of a tunable filter camera

    NASA Astrophysics Data System (ADS)

    Sartor, Mark Alan

    1999-10-01

    This paper describes the development, characterization, and modeling of a Tunable Filter Camera (TFC). The TFC is a new multispectral instrument with electronically tuned spectral filtering and low-light-level sensitivity. It represents a hybrid between hyperspectral and multispectral imaging spectrometers that incorporates advantages from each, addressing issues such as complexity, cost, lack of sensitivity, and adaptability. These capabilities allow the TFC to be applied to low-altitude video surveillance for real-time spectral and spatial target detection and image exploitation. Described herein are the theory and principles of operation for the TFC, which includes a liquid crystal tunable filter, an intensified CCD, and a custom apochromatic lens. The results of proof-of-concept testing and characterization of two prototype cameras are included, along with a summary of the design analyses for the development of a multiple-channel system. A significant result of this effort was the creation of a system-level model, which was used to facilitate development and predict performance. It includes models for the liquid crystal tunable filter and intensified CCD. Such modeling was necessary in the design of the system and is useful for evaluation of the system in remote-sensing applications. Also presented are characterization data from component testing, which included quantitative results for linearity, signal-to-noise ratio (SNR), and radiometric response. These data were used to help refine and validate the model. For a pre-defined source, the spatial and spectral response and the noise of the camera system can now be predicted. The innovation that sets this development apart is the fact that this instrument has been designed for integrated, multi-channel operation for the express purpose of real-time detection and identification in low-light-level conditions. Many of the requirements for the TFC were derived from this mission. In order to provide background for the design requirements of the TFC development, the mission and principles of operation behind the multi-channel system are reviewed. Given its combination of flexibility, simplicity, and sensitivity, the TFC and its multiple-channel extension can play a significant role in the next generation of remote-sensing instruments.

  14. Strategic options towards an affordable high-performance infrared camera

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining low cost, akin to the success of CMOS sensors, has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel uncooled InGaAs system with high sensitivity, low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact package. This camera paves the way toward mass-market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path toward the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably toward automation, readout electronics compatible with multiple focal plane arrays, and dense or ultra-small pixel pitch devices.

  15. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
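The link between disparity error and down-range error in this kind of analysis follows from the standard pinhole stereo relations: down-range distance is Z = f·B/d, so a disparity standard deviation σ_d propagates to σ_Z ≈ Z²·σ_d/(f·B). The sketch below uses the abstract's 0.32-pixel disparity standard deviation; the focal length, baseline, and disparity values are illustrative assumptions, not parameters of the JPL system.

```python
def down_range(f_px, baseline_m, disparity_px):
    """Down-range distance Z = f*B/d for a rectified stereo pair
    (focal length in pixels, baseline in meters, disparity in pixels)."""
    return f_px * baseline_m / disparity_px

def down_range_error(f_px, baseline_m, disparity_px, sigma_d_px):
    """First-order propagation of disparity error: sigma_Z = Z^2 * sigma_d / (f*B)."""
    z = down_range(f_px, baseline_m, disparity_px)
    return z * z * sigma_d_px / (f_px * baseline_m)

f_px, b_m = 1000.0, 0.30              # assumed focal length and 30 cm baseline
z = down_range(f_px, b_m, 60.0)       # a 60-pixel disparity -> 5 m range
err = down_range_error(f_px, b_m, 60.0, 0.32)  # 0.32-pixel disparity std dev
print(z, round(err, 4))
```

The quadratic growth of σ_Z with Z is why disparity error dominates the down-range (not cross-range) uncertainty, matching the finding quoted in the abstract.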

  16. Web Camera Use of Mothers and Fathers When Viewing Their Hospitalized Neonate.

    PubMed

    Rhoads, Sarah J; Green, Angela; Gauss, C Heath; Mitchell, Anita; Pate, Barbara

    2015-12-01

    Mothers and fathers of neonates hospitalized in a neonatal intensive care unit (NICU) differ in their experiences related to NICU visitation. To describe the frequency and length of maternal and paternal viewing of their hospitalized neonates via a Web camera. A total of 219 mothers and 101 fathers, including 40 mother-father dyads, used the Web camera that allows 24/7 NICU viewing from September 1, 2010, to December 31, 2012. We conducted a review of the Web camera Web site's log-on records in this nonexperimental, descriptive study. Mothers and fathers had a significant difference in the mean number of log-ons to the Web camera system (P = .0293). Fathers virtually visited the NICU less often than mothers, but there was not a statistical difference between mothers and fathers in the mean total number of minutes viewing the neonate (P = .0834) or in the maximum number of minutes of viewing in one session (P = .6924). Patterns of visitation over time were not measured. Web camera technology could be a potential intervention to aid fathers in visiting their neonates. Both parents should be offered virtual visits using the Web camera and oriented regarding how to use it. These findings are important to consider when installing Web cameras in a NICU. Future research should continue to explore Web camera use in NICUs.

  17. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a UAS

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir looking cameras is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from test flights.

  18. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
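The azimuth coverage figures above imply a fixed overlap budget between neighboring cameras. A quick check of the arithmetic (assuming the six azimuth cameras are evenly spaced, which the abstract does not state explicitly):

```python
n, fov = 6, 92.0                      # six azimuth cameras, each ~92 degrees FOV
total = n * fov                       # 552 degrees of combined coverage
overlap_total = total - 360.0         # excess coverage available as overlap
overlap_per_pair = overlap_total / n  # shared between each of the 6 adjacent pairs
print(overlap_per_pair)               # 32.0 degrees of overlap per neighbor pair
```

Those ~32° overlap zones are what let the carousel tolerate mounting misalignment without leaving azimuth gaps.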

  19. Cameras on the NEPTUNE Canada seafloor observatory: Towards monitoring hydrothermal vent ecosystem dynamics

    NASA Astrophysics Data System (ADS)

    Robert, K.; Matabos, M.; Sarrazin, J.; Sarradin, P.; Lee, R. W.; Juniper, K.

    2010-12-01

    Hydrothermal vent environments are among the most dynamic benthic habitats in the ocean. The relative roles of physical and biological factors in shaping vent community structure remain unclear. Undersea cabled observatories offer the power and bandwidth required for high-resolution, time-series study of the dynamics of vent communities and the physico-chemical forces that influence them. The NEPTUNE Canada cabled instrument array at the Endeavour hydrothermal vents provides a unique laboratory for researchers to conduct long-term, integrated studies of hydrothermal vent ecosystem dynamics in relation to environmental variability. Beginning in September-October 2010, NEPTUNE Canada (NC) will be deploying a multi-disciplinary suite of instruments on the Endeavour Segment of the Juan de Fuca Ridge. Two camera and sensor systems will be used to study ecosystem dynamics in relation to hydrothermal discharge. These studies will make use of new experimental protocols for time-series observations that we have been developing since 2008 at other observatory sites connected to the VENUS and NC networks. These protocols include sampling design, camera calibration (i.e. structure, position, light, settings) and image analysis methodologies (see communication by Aron et al.). The camera systems to be deployed in the Main Endeavour vent field include a Sidus high definition video camera (2010) and the TEMPO-mini system (2011), designed by IFREMER (France). Real-time data from three sensors (O2, dissolved Fe, temperature) integrated with the TEMPO-mini system will enhance interpretation of imagery. For the first year of observations, a suite of internally recording temperature probes will be strategically placed in the field of view of the Sidus camera. 
These installations aim at monitoring variations in vent community structure and dynamics (species composition and abundances, interactions within and among species) in response to changes in environmental conditions at different temporal scales. High-resolution time-series studies also provide a means of studying population dynamics, biological rhythms, organism growth and faunal succession. In addition to programmed time-series monitoring, the NC infrastructure will also permit manual and automated modification of observational protocols in response to natural events. This will enhance our ability to document potentially critical but short-lived environmental forces affecting vent communities.

  20. Optimum color filters for CCD digital cameras

    NASA Astrophysics Data System (ADS)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems feasible, implying that such an optimized color camera can achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
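The matrixing step described above can be illustrated with a simplified linear least-squares fit of a 3×3 color transformation on synthetic data. The MASCOT work used non-linear optimization of perceptual CIELUV error over about 200 test colors; this sketch shows only the basic linear principle, with made-up matrix values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: a 3x3 transform from camera RGB to target
# tristimulus values, plus a little measurement noise.
true_M = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.85, 0.10],
                   [0.00, 0.10, 0.90]])
rgb = rng.uniform(0.0, 1.0, (200, 3))              # 200 synthetic test colors
xyz = rgb @ true_M.T + rng.normal(0, 1e-3, (200, 3))

# Solve min || rgb @ M.T - xyz ||^2 for the 3x3 matrix M.
X, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = X.T
print(np.round(M, 2))
```

A perceptual objective (CIELUV distance) instead of raw RGB-space residuals requires iterative non-linear optimization, which is why the original work used computer simulation rather than this closed form.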

  1. The research of adaptive-exposure on spot-detecting camera in ATP system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu

    2013-08-01

    A high precision acquisition, tracking, and pointing (ATP) system is one of the key techniques of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that it can obtain the position information of the communication terminal for the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in satellite-to-earth laser communication ATP systems needs high precision in target detection: the positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise. But the beacon intensity changes greatly during communication owing to distance, atmospheric scintillation, weather, etc. The output signal of the detector is insufficient when the camera underexposes the beacon because of low light intensity; conversely, the output signal saturates when the camera overexposes the beacon because of high light intensity. The accuracy of the centroid algorithm degrades if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced markedly. To improve accuracy, space-based cameras should regulate exposure time in real time according to light intensity. The adaptive-exposure algorithm for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera for a space-based laser communication system is described, which uses the adaptive-exposure algorithm to adjust exposure time. Test results from the imaging experiment system verify the design. 
Experimental results prove that this design restrains the loss of positioning accuracy under changing light intensity, so the camera maintains stable, high positioning accuracy during communication.
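The centroid algorithm referred to above is a standard sub-pixel technique. A minimal sketch (synthetic Gaussian spot; the threshold parameter is a common noise-suppression choice, not specified in the abstract):

```python
import numpy as np

def spot_centroid(img, threshold=0.0):
    """Intensity-weighted centroid of a spot image, with sub-pixel precision.
    Subtracting and clipping at a threshold suppresses background noise."""
    w = np.clip(img.astype(float) - threshold, 0.0, None)
    total = w.sum()
    if total == 0:
        raise ValueError("no signal above threshold")
    ys, xs = np.indices(img.shape)
    return (w * xs).sum() / total, (w * ys).sum() / total  # (x, y)

# Synthetic Gaussian spot centered at (12.3, 7.8) on a 32x24 frame.
ys, xs = np.indices((24, 32))
img = np.exp(-((xs - 12.3)**2 + (ys - 7.8)**2) / (2 * 2.0**2))
x, y = spot_centroid(img, threshold=0.01)
print(round(x, 2), round(y, 2))
```

Under- or over-exposure breaks this estimator exactly as the abstract describes: clipped (saturated) or noise-dominated pixels bias the weighted sum, which is what motivates the adaptive exposure control.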

  2. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  3. Proceedings of the 1984 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-01-01

    This conference contains papers on artificial intelligence, pattern recognition, and man-machine systems. Topics considered include concurrent minimization, a robot programming system, system modeling and simulation, camera calibration, thermal power plants, image processing, fault diagnosis, knowledge-based systems, power systems, hydroelectric power plants, expert systems, and electrical transients.

  4. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as the Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
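The reprojection error used as the quality metric above is computed by projecting known 3-D points through the calibrated camera model and comparing against the detected pattern corners. A hedged sketch under a plain pinhole model (made-up intrinsics and points, no lens distortion):

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3-D points into the image with a pinhole model (no distortion)."""
    cam = points_3d @ R.T + t            # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]        # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]   # apply focal lengths + principal point

def rms_reprojection_error(observed_px, points_3d, K, R, t):
    residual = observed_px - project(points_3d, K, R, t)
    return np.sqrt((residual**2).sum(axis=1).mean())

# Illustrative calibration check (values are invented, not from the paper).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.1, 0.0, 2.0], [-0.2, 0.1, 3.0], [0.0, -0.1, 4.0]])
obs = project(pts, K, R, t) + 0.3        # pretend detections are 0.3 px off
print(round(rms_reprojection_error(obs, pts, K, R, t), 3))
```

For IR boards the "observed" corners are harder to localize, which inflates this metric even when the underlying geometry is fine — hence the paper's focus on board material and pattern detection.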

  5. Uncooled microbolometer sensors for unattended applications

    NASA Astrophysics Data System (ADS)

    Kohin, Margaret; Miller, James E.; Leary, Arthur R.; Backer, Brian S.; Swift, William; Aston, Peter

    2003-09-01

    BAE SYSTEMS has been developing and producing uncooled microbolometer sensors since 1995. Recently, uncooled sensors have been used on Pointer Unattended Aerial Vehicles and considered for several unattended sensor applications including DARPA Micro-Internetted Unattended Ground Sensors (MIUGS), Army Modular Acoustic Imaging Sensors (MAIS), and Redeployable Unattended Ground Sensors (R-UGS). This paper describes recent breakthrough uncooled sensor performance at BAE SYSTEMS and how this improved performance has been applied to a new Standard Camera Core (SCC) that is ideal for these unattended applications. Video imagery from a BAE SYSTEMS 640x480 imaging camera flown in a Pointer UAV is provided. Recent performance results are also provided.

  6. Airport Remote Tower Sensor Systems

    NASA Technical Reports Server (NTRS)

    Papasin, Richard; Gawdiak, Yuri; Maluf, David A.; Leidich, Christopher; Tran, Peter B.

    2001-01-01

    Remote Tower Sensor Systems (RTSS) are proof-of-concept prototypes being developed by NASA/Ames Research Center (NASA/ARC) in collaboration with the FAA (Federal Aviation Administration) and NOAA (National Oceanic and Atmospheric Administration). RTSS began with the deployment of an Airport Approach Zone Camera System that includes real-time weather observations at San Francisco International Airport. The goal of this research is to develop, deploy, and demonstrate remotely operated cameras and sensors at several major airport hubs and un-towered airports. RTSS can provide real-time weather observations of the airport approach zone. RTSS will integrate and test airport sensor packages that will allow remote access to real-time airport conditions and aircraft status.

  7. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halama, J.

    2016-06-15

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. 
Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.

  8. The SALSA Project - High-End Aerial 3d Camera

    NASA Astrophysics Data System (ADS)

    Rüther-Kindel, W.; Brauchle, J.

    2013-08-01

    The ATISS measurement drone, developed at the University of Applied Sciences Wildau, is an electrically powered motor glider with a maximum take-off weight of 25 kg, including a payload capacity of 10 kg. Two 2.5 kW engines enable ultra-short take-off procedures, and the motor glider design yields a 1 h endurance. The concept of ATISS is based on the idea of strictly separating aircraft and payload functions, which makes ATISS a very flexible research platform for miscellaneous payloads. ATISS is equipped with an autopilot for autonomous flight patterns but remains under permanent pilot control from the ground. On the basis of ATISS, the SALSA project was undertaken. The aim was to integrate a system for digital terrain modelling. Instead of a laser scanner, a new design concept was chosen, based on two synchronized high-resolution digital cameras, one in a fixed nadir orientation and the other in an oblique orientation. Thus, images of every object on the ground are taken from different view angles. This new measurement camera system, MACS-TumbleCam, was developed at the German Aerospace Center DLR Berlin-Adlershof especially for the ATISS payload concept. A special advantage over laser scanning is that a textured surface is generated instead of a cloud of points, and a high-end inertial orientation system can be omitted. The first test flights show a ground resolution of 2 cm and a height resolution of 3 cm, which underline the extraordinary capabilities of ATISS and the MACS measurement camera system.

  9. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
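One common closed-form least-squares solution for estimating the camera pair's rigid motion from matched 3-D points is the Kabsch/Procrustes method. The abstract does not specify that this exact formulation was used, so the following is a generic sketch on synthetic data:

```python
import numpy as np

def rigid_motion_lsq(P, Q):
    """Least-squares rigid motion (R, t) such that Q ~ P @ R.T + t
    (Kabsch/Procrustes closed-form solution via SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Recover a known small camera motion from noisy stereo-derived 3-D points.
rng = np.random.default_rng(1)
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.0, 0.05])
P = rng.uniform(-1, 1, (100, 3)) + [0.0, 0.0, 3.0]   # points in front of camera
Q = P @ R_true.T + t_true + rng.normal(0, 1e-3, P.shape)
R_est, t_est = rigid_motion_lsq(P, Q)
print(np.round(R_est, 3))
print(np.round(t_est, 3))
```

In the full system the stationary-background assumption behind this fit is what makes outlier handling matter: pixels belonging to independently moving objects must be excluded before the least-squares step, which is feasible in real time when they are the minority.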

  10. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV, and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC), and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout, and an internally mounted cold block. At the flight CCD temperature of -20°C, the CLASP cameras exceeded the low-noise performance requirements (<= 25 e- read noise and <= 10 e-/sec/pixel dark current), in addition to maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured for several tests intended to verify the QE, gain, read noise, and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV, and X-ray science cameras at MSFC.
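A QE measurement against a NIST-calibrated photodiode reduces to comparing detected photoelectrons with incident photons. A schematic calculation (all numbers and parameter names are illustrative, not the MSFC procedure):

```python
# Photon energy at Lyman-alpha (121.6 nm): E = h*c / lambda ~ 1.633e-18 J.
E_PHOTON = 1.633e-18

def photon_flux_from_diode(current_a, responsivity_a_per_w):
    """Photons/s on the reference photodiode, from its measured current
    and its NIST-calibrated responsivity (amps per watt)."""
    power_w = current_a / responsivity_a_per_w
    return power_w / E_PHOTON

def ccd_qe(signal_dn_per_s, gain_e_per_dn, photon_flux):
    """QE = photoelectrons generated per incident photon."""
    electrons_per_s = signal_dn_per_s * gain_e_per_dn
    return electrons_per_s / photon_flux

# Hypothetical readings: 1 pA diode current at 0.1 A/W responsivity,
# and a CCD signal of 1.5e6 DN/s at the ~2.0 e-/DN gain quoted above.
flux = photon_flux_from_diode(current_a=1.0e-12, responsivity_a_per_w=0.1)
qe = ccd_qe(signal_dn_per_s=1.5e6, gain_e_per_dn=2.0, photon_flux=flux)
print(f"QE ~ {qe:.2f}")
```

In practice the diode and camera must see the same beam (or a calibrated fraction of it), and dark current must be subtracted from the CCD signal first.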

  11. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the conventional binocular system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion, with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.

  12. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  13. Low-altitude photographic transects of the Arctic Network of National Park Units and Selawik National Wildlife Refuge, Alaska, July 2013

    USGS Publications Warehouse

    Marcot, Bruce G.; Jorgenson, M. Torre; DeGange, Anthony R.

    2014-01-01

    5. A Canon® Rebel 3Ti with a Sigma zoom lens (18–200 mm focal length). The Drift® HD-170 and GoPro® Hero3 cameras were secured to the struts and underwing for nadir (direct downward) imaging. The Panasonic® and Canon® cameras were each hand-held for oblique-angle landscape images, shooting through the airplanes’ windows, targeting both general landscape conditions as well as landscape features of special interest, such as tundra fire scars and landslips. The Drift® and GoPro® cameras each were set for time-lapse photography at 5-second intervals for overlapping coverage. Photographs from all cameras (100 percent .jpg format) were date- and time-synchronized to Global Positioning System waypoints taken during the flights, also at 5-second intervals, providing precise geotagging (latitude-longitude) of all files. All photographs were adjusted for color saturation and gamma, and nadir photographs were corrected for lens distortion for the Drift® and GoPro® cameras’ 170° wide-angle distortion. EXIF (exchangeable image file format) data on camera settings and geotagging were extracted into spreadsheet databases. An additional 1 hour, 20 minutes, and 43 seconds of high-resolution videos were recorded at 60 frames per second with the GoPro® camera along selected transect segments, and also were image-adjusted and corrected for lens distortion. Geotagged locations of 12,395 nadir photographs from the Drift® and GoPro® cameras were overlaid in a geographic information system (ArcMap 10.0) onto a map of 44 ecotypes (land- and water-cover types) of the Arctic Network study area. Presence and area of each ecotype occurring within a geographic information system window centered on the location of each photograph were recorded and included in the spreadsheet databases. All original and adjusted photographs, videos, Global Positioning System flight tracks, and photograph databases are available by contacting ascweb@usgs.gov.
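The 5-second time synchronization described above amounts to matching each photograph's timestamp to the nearest GPS waypoint. A minimal sketch (hypothetical track data, not from the survey):

```python
from datetime import datetime, timedelta

def nearest_waypoint(photo_time, waypoints):
    """Match a photo timestamp to the closest GPS waypoint.
    waypoints: list of (datetime, lat, lon) tuples."""
    return min(waypoints, key=lambda w: abs((w[0] - photo_time).total_seconds()))

# Hypothetical track: waypoints every 5 s, as in the survey flights.
t0 = datetime(2013, 7, 10, 14, 0, 0)
track = [(t0 + timedelta(seconds=5 * i), 66.0 + 0.001 * i, -160.0)
         for i in range(4)]

when, lat, lon = nearest_waypoint(t0 + timedelta(seconds=7), track)
print(lat, lon)   # the waypoint at t0+5s is closest (2 s away vs 3 s)
```

At 5-second logging, the worst-case mismatch is 2.5 seconds of flight, so positional accuracy depends on ground speed; camera clocks must also be offset-corrected against the GPS clock before matching.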

  14. Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System

    NASA Astrophysics Data System (ADS)

    Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.

    2017-12-01

    The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6 - 2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design, featuring a Wolter-1 grazing incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating, and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the sun onto a plate covered with a phosphor coating that absorbs EUV photons and fluoresces in visible light. This 10-week REU project was aimed at optimizing an off-axis mounted camera with 600-line resolution NTSC video for extremely low light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which set the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high magnification, low distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extreme low-light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps towards implementation of the imaging system will include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.

  15. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  16. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  17. Orbiter Camera Payload System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high-resolution large-format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free-flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent, and exposure times from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full-format low-contrast resolution of 88 lines per millimeter (AWAR), full-format distortion of less than 14 microns, and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded, is 944 pounds; power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.

  18. Opto-mechanical system design of test system for near-infrared and visible target

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhu, Guodong; Wang, Yuchao

    2014-12-01

    Guidance precision is a key index of guided-weapon shooting. Factors affecting guidance precision include information-processing precision, control-system accuracy, laser-irradiation accuracy and so on, of which laser-irradiation precision is particularly important. This paper addresses the demand for precision testing of laser irradiators and develops a laser precision test system. The system consists of a modified Cassegrain system, a wide-range CCD camera, a tracking turntable and an industrial PC, and images the visible-light and near-infrared targets simultaneously using a near-IR camera. Analysis of the design results shows that, for a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.

  19. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
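
    The fusion stage described above reduces, per pixel, to a weighted sum of the registered sensor images. A minimal sketch (the weights and pixel values below are invented for illustration, not taken from the flight system):

```python
def fuse(img_a, img_b, w_a=0.6, w_b=0.4):
    """Pixel-wise weighted-sum fusion of two registered images.

    The weights are illustrative; the EVS abstract does not publish values.
    """
    return [[w_a * a + w_b * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

# Two tiny registered "images" (e.g., two IR bands), values invented
lwir = [[100, 150], [200, 250]]
swir = [[50, 50], [50, 50]]
fused = fuse(lwir, swir)
print(fused)  # each output pixel is 0.6*lwir + 0.4*swir
```

    In the real system this sum runs per frame on the DSP, after enhancement and affine registration have aligned the two streams.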

  20. Non Contacting Evaluation of Strains and Cracking Using Optical and Infrared Imaging Techniques

    DTIC Science & Technology

    1988-08-22

    Compatible Zenith Z-386 microcomputer with plotter II. 3-D Motion Measuring System 1. Complete OPTOTRAK three-dimensional digitizing system. System includes...acquisition unit - 16 single-ended analog input channels 3. Data Analysis Package software (KINEPLOT) 4. Extra OPTOTRAK Camera (max 224 per system

  1. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  2. Digital readout for image converter cameras

    NASA Astrophysics Data System (ADS)

    Honour, Joseph

    1991-04-01

    There is an increasing need for fast and reliable analysis of recorded sequences from image converter cameras so that experimental information can be readily evaluated without recourse to more time-consuming photographic procedures. A digital readout system has been developed using a randomly triggerable high-resolution CCD camera, the output of which is suitable for use with an IBM AT-compatible PC. Within half a second of receipt of a trigger pulse, the frame reformatter displays the image, and transfer to storage media can be readily achieved via the PC and dedicated software. Two software programs offer different levels of image manipulation, including enhancement routines and parameter calculations with accuracy down to pixel level. Hard-copy prints can be acquired using a specially adapted Polaroid printer; outputs for laser and video printers extend the overall versatility of the system.

  3. Prototype color field sequential television lens assembly

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The design, development, and evaluation of a prototype modular lens assembly with a self-contained field sequential color wheel is presented. The design of a color wheel of maximum efficiency, the selection of spectral filters, and the design of a quiet, efficient wheel drive system are included. Design tradeoffs considered for each aspect of the modular assembly are discussed. Emphasis is placed on achieving a design which can be attached directly to an unmodified camera, thus permitting use of the assembly in evaluating various candidate camera and sensor designs. A technique is described which permits maintaining high optical efficiency with an unmodified camera. A motor synchronization system is developed which requires only the vertical synchronization signal as a reference frequency input. Equations and tradeoff curves are developed to permit optimizing the filter wheel aperture shapes for a variety of different design conditions.
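
    The motor synchronization described above locks the wheel to the vertical synchronization signal; the basic timing relation is simple arithmetic. A sketch under the assumption of one filter segment per field (the segment count and field rate below are illustrative, not from the report):

```python
def wheel_speed_rps(field_rate_hz, segments_per_rev):
    """Revolutions per second so that each video field sees exactly
    one filter segment (assumed arrangement, for illustration)."""
    return field_rate_hz / segments_per_rev

# 60 fields/s with a 3-segment (R, G, B) wheel
print(wheel_speed_rps(60.0, 3))  # -> 20.0 rev/s
```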

  4. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
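
    The paper's final step recovers 3D points by linear triangulation from the projection matrices. As a simplified stand-in for that step, the midpoint method below intersects two back-projected viewing rays; the camera centres, ray directions, and target point are invented for illustration:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the 3D point halfway between the closest points of two
    viewing rays c1 + s*d1 and c2 + t*d2 (midpoint triangulation)."""
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [x2 - x1 for x1, x2 in zip(c1, c2)]
    p, q = dot(d1, w), dot(d2, w)
    den = a * c - b * b              # zero only for parallel rays
    s = (c * p - b * q) / den
    t = (b * p - a * q) / den
    p1 = [x + s * dx for x, dx in zip(c1, d1)]
    p2 = [x + t * dx for x, dx in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# Two camera centres whose rays both point at the vessel point (1, 2, 3)
point = triangulate_midpoint([0, 0, 0], [1, 2, 3], [10, 0, 0], [-9, 2, 3])
print(point)  # -> [1.0, 2.0, 3.0]
```

    With noisy correspondences the two rays no longer intersect exactly, which is why least-squares formulations such as linear (DLT) triangulation are used in practice.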

  5. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system are presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulate above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  6. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  7. Camera system resolution and its influence on digital image correlation

    DOE PAGES

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; ...

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.
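
    The central idea, that the system response is the cascade of the camera and lens responses, means the component MTFs multiply at each spatial frequency. A sketch with illustrative Gaussian MTFs (the functional form and cutoff values are assumptions, not from the paper):

```python
import math

def gaussian_mtf(f, f_c):
    """Illustrative Gaussian MTF that falls to 1/e at cutoff f_c."""
    return math.exp(-(f / f_c) ** 2)

def system_mtf(f, lens_fc, sensor_fc):
    # Cascaded optical components multiply in the frequency domain
    return gaussian_mtf(f, lens_fc) * gaussian_mtf(f, sensor_fc)

# A lens cutting off well below the sensor limits the whole system:
f = 30.0  # cycles/mm
print(system_mtf(f, lens_fc=40.0, sensor_fc=120.0))  # lens term dominates
```

    This is why the paper argues that speckle size should be chosen against the resolution-limiting component (often the lens), not simply against the pixel pitch.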

  8. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact (i.e., piezoelectric sensor or accelerometer) or non-contact sensors (i.e., laser vibrometer) which can be costly and time consuming to inspect an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable capability of measurement including higher spatial resolution, remote sensing and low-cost. In the study, a damage detection method based on the high-speed camera was proposed. The system setup comprises a high-speed camera and a line-laser which can capture the out-of-plane displacement of a cantilever beam. The cantilever beam with an artificial crack was excited and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.
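
    Motion magnification amplifies subtle temporal variations in a video. As a heavily simplified one-dimensional sketch (not the phase-based pyramid processing used in practice), one can amplify a pixel trace's deviation from its temporal mean; the values are invented:

```python
def magnify(signal, alpha=10.0):
    """Toy Eulerian-style magnification of a 1-D intensity trace:
    amplify deviations from the temporal mean by a factor alpha."""
    mean = sum(signal) / len(signal)
    return [mean + alpha * (x - mean) for x in signal]

trace = [100.0, 100.2, 99.8, 100.0]  # subtle vibration of one pixel
print(magnify(trace))                # deviations grown 10x, mean preserved
```

    Applied per pixel over a temporal band around a mode frequency, this kind of amplification makes a beam's small vibrations visible enough for modal identification.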

  9. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  10. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
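
    OpenCV-style calibration routines work by minimizing reprojection error over the intrinsic (and distortion) parameters. A minimal sketch of the quantity being minimized, assuming an ideal pinhole model, with invented intrinsics and observations:

```python
def project(point3d, fx, fy, cx, cy):
    """Ideal pinhole projection (no distortion) of a camera-frame 3D point."""
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_rmse(points3d, pixels, fx, fy, cx, cy):
    """RMS distance between observed pixels and reprojected points --
    the quantity a calibrateCamera-style routine minimizes."""
    se = 0.0
    for pt, (u, v) in zip(points3d, pixels):
        pu, pv = project(pt, fx, fy, cx, cy)
        se += (pu - u) ** 2 + (pv - v) ** 2
    return (se / len(points3d)) ** 0.5

obs = [(420.0, 240.0), (820.0, 240.0)]     # invented corner detections
pts = [(-0.1, 0.0, 1.0), (0.3, 0.0, 1.0)]  # invented board points
print(reprojection_rmse(pts, obs, fx=1000.0, fy=1000.0, cx=520.0, cy=240.0))
# -> 0.0 for a perfect fit
```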

  11. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values were improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.
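
    A corrective factor of the kind applied above can be as simple as a least-squares linear mapping from Kinect angles to motion-capture angles. A sketch with invented paired measurements (the real study's correction values are not published in the abstract):

```python
def fit_linear(xs, ys):
    """Least-squares slope and intercept mapping one system's angles
    onto the other's (ordinary linear regression)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return b, my - b * mx  # slope, intercept

# Invented paired peak-flexion angles: Kinect reads low by ~10% plus an offset
kinect = [40.0, 60.0, 80.0]
mocap = [46.0, 68.0, 90.0]
slope, icept = fit_linear(kinect, mocap)
print(slope, icept)  # corrected angle = slope * kinect_angle + icept
```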

  12. Experiences in teleoperation of land vehicles

    NASA Technical Reports Server (NTRS)

    Mcgovern, Douglas E.

    1989-01-01

    Teleoperation of land vehicles allows the removal of the operator from the vehicle to a remote location. This can greatly increase operator safety and comfort in applications such as security patrol or military combat. The cost includes system complexity and reduced system performance. All feedback on vehicle performance and on environmental conditions must pass through sensors, a communications channel, and displays. In particular, this requires vision to be transmitted by closed-circuit television with a consequent degradation of information content. Vehicular teleoperation, as a result, places severe demands on the operator. Teleoperated land vehicles have been built and tested by many organizations, including Sandia National Laboratories (SNL). The SNL fleet presently includes eight vehicles of varying capability. These vehicles have been operated using different types of controls, displays, and visual systems. Experimentation studying the effects of vision system characteristics on off-road, remote driving was performed for conditions of fixed camera versus steering-coupled camera and of color versus black and white video display. Additionally, much experience was gained through system demonstrations and hardware development trials. The preliminary experimental findings and the results of the accumulated operational experience are discussed.

  13. Evaluation of camera-based systems to reduce transit bus side collisions : phase II.

    DOT National Transportation Integrated Search

    2012-12-01

    The sideview camera system has been shown to eliminate blind zones by providing a view to the driver in real time. In : order to provide the best integration of these systems, an integrated camera-mirror system (hybrid system) was : developed and tes...

  14. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  15. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
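
    The refractive effect that underwater calibration must model follows Snell's law at the housing port. A sketch for a flat port, assuming refractive indices of 1.0 for air and 1.33 for water; the well-known small-angle consequence is that objects appear closer by a factor of 1/n:

```python
import math

N_AIR, N_WATER = 1.0, 1.33  # assumed nominal indices

def refracted_angle(theta_water_deg):
    """Angle in air (deg) of a ray leaving water through a flat port,
    from Snell's law: n_w * sin(theta_w) = n_a * sin(theta_a)."""
    s = N_WATER / N_AIR * math.sin(math.radians(theta_water_deg))
    return math.degrees(math.asin(s))

def apparent_distance(true_distance_m):
    """Small-angle approximation: objects appear closer by 1/n_water."""
    return true_distance_m / N_WATER

print(refracted_angle(20.0))    # ray bends away from the normal in air
print(apparent_distance(1.33))  # a target 1.33 m away appears ~1.0 m away
```

    Explicit refractive models trace this bending through each interface; the "implicit" approaches reviewed in the paper absorb the effect into adjusted focal length and distortion terms.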

  16. Stereo optical guidance system for control of industrial robots

    NASA Technical Reports Server (NTRS)

    Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)

    1992-01-01

    A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.

  17. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
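
    The registration stage warps one sensor's image into the other's frame with an affine transformation. A sketch of applying a 2x3 affine to a single pixel coordinate (the transform parameters are invented, not the flight values):

```python
def affine(pt, a11, a12, tx, a21, a22, ty):
    """Map pixel (x, y) through a 2x3 affine transform:
    [x'] = [a11 a12 tx] [x y 1]^T, [y'] = [a21 a22 ty] [x y 1]^T."""
    x, y = pt
    return (a11 * x + a12 * y + tx, a21 * x + a22 * y + ty)

# Invented alignment: a slight scale difference plus a translation
# between the two IR sensors
print(affine((100.0, 50.0), 1.02, 0.0, -3.0, 0.0, 1.02, 4.0))
```

    In practice the six parameters are estimated offline (or from matched features) and the same transform is applied to every pixel of each frame before fusion.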

  18. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. Experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with those of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  19. Videogrammetric Model Deformation Measurement Technique

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Liu, Tian-Shu

    2001-01-01

    The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.
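
    The target centroid calculation mentioned in the custom software is typically an intensity-weighted mean over a small window around each retroreflective target, giving sub-pixel position. A sketch with an invented 3x3 gray-level window:

```python
def centroid(window):
    """Intensity-weighted centroid (x, y) of a 2-D gray-level window,
    giving sub-pixel target position."""
    total = sx = sy = 0.0
    for y, row in enumerate(window):
        for x, g in enumerate(row):
            total += g
            sx += g * x
            sy += g * y
    return sx / total, sy / total

# A bright target centred in a 3x3 window (values invented)
win = [[0, 10, 0],
       [10, 40, 10],
       [0, 10, 0]]
print(centroid(win))  # -> (1.0, 1.0)
```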

  20. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  1. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  2. In vivo verification of particle therapy: how Compton camera configurations affect 3D image quality

    NASA Astrophysics Data System (ADS)

    Mackin, D.; Draeger, E.; Peterson, S.; Polf, J.; Beddar, S.

    2017-05-01

    The steep dose gradients enabled by the Bragg peaks of particle therapy beams are a double-edged sword. They enable highly conformal dose distributions, but even small deviations from the planned beam range can cause overdosing of healthy tissue or under-dosing of the tumour. To reduce this risk, particle therapy treatment plans include margins large enough to account for all the sources of range uncertainty, which include patient setup errors, patient anatomy changes, and CT number to stopping power ratios. Any system that could verify the beam range in vivo would allow reduced margins and more conformal dose distributions. Toward our goal of developing such a system based on Compton camera (CC) imaging, we studied how three configurations (single camera, parallel opposed, and orthogonal) affect the quality of the 3D images. We found that single CC and parallel opposed configurations produced superior images in 2D. The increase in parallax produced by an orthogonal CC configuration was shown to be beneficial in producing artefact-free 3D images.

  3. Advanced imaging system

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document describes the Advanced Imaging System CCD based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as a part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed and complete schematics and source code listings are provided.

  4. A versatile photogrammetric camera automatic calibration suite for multispectral fusion and optical helmet tracking

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason; Jermy, Robert; Nicolls, Fred

    2014-06-01

    This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb line method, allows many radial and tangential distortion coefficients and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low order models despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted to undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
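
    Brown's model, which the distortion characterization above extends, maps undistorted normalized coordinates to distorted ones through radial and tangential polynomial terms. A sketch with two radial and two tangential coefficients (all coefficient values invented):

```python
def brown_distort(x, y, k=(0.1, 0.01), p=(0.001, 0.001)):
    """Apply Brown's radial (k1, k2) and tangential (p1, p2) distortion
    to normalized image coordinates (x, y). Coefficients are invented."""
    r2 = x * x + y * y
    radial = 1.0 + k[0] * r2 + k[1] * r2 * r2
    xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    yd = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    return xd, yd

print(brown_distort(0.0, 0.0))  # the principal point is undistorted
print(brown_distort(0.5, 0.0))  # off-axis points shift radially and tangentially
```

    The inverse mapping (distorted to undistorted) has no closed form for high-order models and is typically fit as a second coefficient set, as the paper does.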

  5. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
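
    The QE measurement principle, comparing the camera's electron count rate with the photon rate inferred from the NIST-calibrated photodiode, can be sketched as follows (all variable names and numbers are illustrative assumptions, not CLASP values):

```python
E_CHARGE = 1.602176634e-19  # electron charge, C

def photon_rate(diode_current_a, diode_qe):
    """Photons/s at the focal plane, inferred from the calibrated
    photodiode: its current divided by e gives electrons/s, and its
    known QE converts that to incident photons/s."""
    return (diode_current_a / E_CHARGE) / diode_qe

def camera_qe(signal_dn_per_s, gain_e_per_dn, photons_per_s):
    """Camera QE = detected electrons per incident photon; the gain
    (e-/DN, about 2.0 for CLASP) converts digital counts to electrons."""
    return signal_dn_per_s * gain_e_per_dn / photons_per_s
```

    The same ratio-against-reference approach applies at each monochromator wavelength to build the QE curve.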

  6. Localization and Mapping Using a Non-Central Catadioptric Camera System

    NASA Astrophysics Data System (ADS)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The mathematical model for localizing the system was derived from conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
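
    The mapping step, intersecting viewing rays from two platform positions, can be sketched with the standard midpoint triangulation method (a generic illustration under a simple ray model, not the authors' mirror-reflection equations):

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two viewing rays
    p_i + t_i * d_i (p_i: platform positions, d_i: ray directions)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    b = d1 @ d2
    w = p1 - p2
    denom = 1.0 - b * b            # rays must not be parallel
    t1 = (b * (d2 @ w) - d1 @ w) / denom
    t2 = (d2 @ w - b * (d1 @ w)) / denom
    return 0.5 * (p1 + t1 * d1 + p2 + t2 * d2)
```

    Iterating between pose estimation and this kind of ray intersection is what drives the positioning-and-mapping loop described above.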

  7. Reticle stage based linear dosimeter

    DOEpatents

    Berger, Kurt W [Livermore, CA

    2007-03-27

    A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.

  8. Reticle stage based linear dosimeter

    DOEpatents

    Berger, Kurt W.

    2005-06-14

    A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.

  9. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for an aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. Automatic gamma correction is applied before the image is output, so that the image is easier for human viewing and analysis. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
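
    A minimal sketch of one such control step, assuming a shutter-first policy in which gain rises only once the shutter hits its blur-limited cap (all names and limits are hypothetical, not taken from the paper):

```python
def aec_agc_step(brightness, target, shutter_us, gain,
                 shutter_min=20.0, shutter_max=2000.0,
                 gain_min=1.0, gain_max=8.0):
    """One AEC/AGC iteration: scale the electronic shutter toward the
    target mean brightness; whatever correction the clamped shutter
    cannot supply is handed to the analog gain. Keeping the shutter
    short limits motion blur on a fast-moving platform."""
    ratio = target / max(brightness, 1e-6)
    desired = shutter_us * ratio
    new_shutter = min(max(desired, shutter_min), shutter_max)
    residual = desired / new_shutter   # leftover correction factor
    new_gain = min(max(gain * residual, gain_min), gain_max)
    return new_shutter, new_gain
```

    Run once per frame against the measured mean brightness, the loop converges to the target exposure while preferring short integration times.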

  10. Overview of a Hybrid Underwater Camera System

    DTIC Science & Technology

    2014-07-01

    …meters), in increments of 200 ps. The camera is also equipped with a 6:1 motorized zoom lens and a precision miniature attitude and heading reference system (AHRS). [The remainder is figure residue listing the LUCIE sub-systems: control and power distribution, AHRS, pulsed laser, gated camera, and sonar transducer. Proc. of SPIE Vol. 9111]

  11. An efficient approach for site-specific scenery prediction in surveillance imaging near Earth's surface

    NASA Astrophysics Data System (ADS)

    Jylhä, Juha; Marjanen, Kalle; Rantala, Mikko; Metsäpuro, Petri; Visa, Ari

    2006-09-01

    Surveillance camera automation and camera network development are growing areas of interest. This paper proposes an efficient approach to enhancing camera surveillance with Geographic Information Systems (GIS) when the camera is located at heights of 10-1000 m. A digital elevation model (DEM), a terrain class model, and a flight obstacle register serve as the auxiliary information. The approach takes into account the spherical shape of the Earth and realistic terrain slopes. Taking forests into account as well, it determines visible and shadowed regions. The efficiency arises from reduced dimensionality in the visibility computation. Image processing is aided by predicting certain features of the visible terrain in advance. These features include distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background fits well, demonstrating the approach's potential as a knowledge aid for various purposes.
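
    The core visibility test along one DEM profile, including the Earth-curvature correction the paper mentions, reduces to a running-maximum slope scan. This is a generic viewshed sketch, not the authors' reduced-dimensionality implementation:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, m

def profile_visibility(heights, spacing, cam_height):
    """Walk outward from the camera (at index 0, cam_height above the
    terrain) along a DEM profile; a sample is visible iff its
    curvature-corrected elevation slope exceeds every nearer slope."""
    cam_z = heights[0] + cam_height
    best = -math.inf
    visible = [True]  # the camera's own cell
    for i in range(1, len(heights)):
        d = i * spacing
        z = heights[i] - d * d / (2.0 * EARTH_R)  # drop below tangent plane
        slope = (z - cam_z) / d
        visible.append(slope > best)
        best = max(best, slope)
    return visible
```

    Repeating this scan over radial profiles from the camera yields the visible/shadowed partition of the terrain.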

  12. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

    This volume presents papers grouped under the topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  13. STS-79 crew watches from aft flight deck during undocking from Mir

    NASA Image and Video Library

    1997-03-26

    STS079-S-097 (16-26 Sept. 1996) --- Left to right, Terrence W. (Terry) Wilcutt, pilot; Shannon W. Lucid, mission specialist; and William F. Readdy, mission commander, are pictured on the space shuttle Atlantis' aft flight deck during undocking operations with Russia's Mir Space Station. Mir had served as both work and home for Lucid for over six months before greeting her American colleagues upon docking of Mir and Atlantis last week. Following her lengthy stay aboard Mir and several days on Atlantis, Lucid went on to spend 188 consecutive days in space before returning to Earth with the STS-79 crew. During the STS-79 mission, the crew used an IMAX camera to document activities aboard the Space Shuttle Atlantis and the various Mir modules. A hand-held version of the 65mm camera system accompanied the STS-79 crew into space in Atlantis' crew cabin. NASA has flown IMAX camera systems on many Shuttle missions, including a special cargo bay camera's coverage of other recent Shuttle-Mir rendezvous and/or docking missions.

  14. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many kinds of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design reduces the shift of the collimator's focal position as the environmental temperature changes, improving the image quality over the collimator's large field of view as well as the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, giving it good market prospects.
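
    The multiple-frame averaging step relies on zero-mean random noise shrinking as 1/√N while the signal is preserved; a small sketch on synthetic data (all values illustrative):

```python
import random
import statistics

def frame_average(frames):
    """Pixelwise mean of N captures of a static target."""
    n = len(frames)
    return [sum(pix) / n for pix in zip(*frames)]

# Synthetic static scene at level 100 with Gaussian read noise, sd = 8.
rng = random.Random(0)
frames = [[100.0 + rng.gauss(0.0, 8.0) for _ in range(1000)]
          for _ in range(64)]
single_noise = statistics.pstdev(frames[0])
avg_noise = statistics.pstdev(frame_average(frames))
```

    With 64 frames the residual noise should land near 8/√64 = 1 grey level, an eight-fold improvement over a single capture.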

  15. Assessment of the DoD Embedded Media Program

    DTIC Science & Technology

    2004-09-01

    [Table-of-contents and text fragments.] Classified and Sensitive Information; Weapons Systems Video, Gun Camera Video, and Lipstick Cameras. A SECDEF and CJCS message to commanders stated, “Put in place mechanisms and processes… of public communication activities.” The 10 February 2003 PAG stated, “Use of lipstick and helmet-mounted cameras on combat sorties is approved

  16. The development of automated behavior analysis software

    NASA Astrophysics Data System (ADS)

    Jaana, Yuki; Prima, Oky Dicky A.; Imabuchi, Takashi; Ito, Hisayoshi; Hosogoe, Kumiko

    2015-03-01

    The measurement of behavior of participants in a conversation scene involves verbal and nonverbal communication. Measurement validity may vary across observers owing to factors such as human error, poorly designed measurement systems, and inadequate observer training. Although some systems have been introduced in previous studies to measure these behaviors automatically, they prevent participants from talking in a natural way. In this study, we propose a software application that automatically analyzes the behaviors of participants, including utterances, facial expressions (happy or neutral), head nods, and poses, using only a single omnidirectional camera. The camera is small enough to be embedded into a table, allowing participants to have spontaneous conversation. The proposed software uses facial feature tracking based on a constrained local model to observe the changes of the facial features captured by the camera, and the Japanese female facial expression database to recognize expressions. Our experimental results show that there are significant correlations between measurements observed by the human observers and by the software.
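
    The reported agreement between human coders and the software is a correlation over paired measurements; a minimal Pearson-r sketch with made-up nod counts (the data here are hypothetical, not the study's):

```python
def pearson_r(xs, ys):
    """Pearson correlation between paired measurements (e.g. head-nod
    counts scored by an observer vs. by the software)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant nod counts: observer vs. software.
observer = [12, 7, 15, 9, 11]
software = [11, 8, 14, 9, 12]
```

    An r close to 1 indicates the automated counts track the human-coded counts.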

  17. Electronic method for autofluorography of macromolecules on two-D matrices

    DOEpatents

    Davidson, Jackson B.; Case, Arthur L.

    1983-01-01

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.

  18. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating it as a black box, and relies on our interpretation of the image data saved by the camera. We present our findings based on our interpretation of the Lytro camera's file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
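
    The shift-and-add interpretation of plenoptic refocusing — each sub-aperture view translated in proportion to its aperture offset, then all views averaged — can be sketched as follows (integer shifts only, and the names and toy data are ours, not Lytro's):

```python
import numpy as np

def refocus(views, offsets, slope):
    """Shift-and-add refocus over sub-aperture views: the view with
    aperture offset (u, v) is shifted by slope*(u, v), then all views
    are averaged. `slope` selects the synthetic focal plane."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (u, v) in zip(views, offsets):
        dx, dy = int(round(slope * u)), int(round(slope * v))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(views)

# Toy light field: a point whose depth gives 1 px disparity per unit u.
offsets = [(-1, 0), (0, 0), (1, 0)]
views = []
for u, _ in offsets:
    img = np.zeros((16, 16))
    img[8, 8 + u] = 1.0
    views.append(img)
```

    Refocusing at slope = -1 realigns the point to full intensity, while slope = 0 leaves it spread across three pixels, which is exactly the depth-dependent blur a plenoptic renderer trades on.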

  19. Data rate enhancement of optical camera communications by compensating inter-frame gaps

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy Thong; Park, Youngil

    2017-07-01

    Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and the image sensors included in most smart devices. Although many schemes have been suggested to increase the data rate of OCC systems, it is still much lower than that of photodiode-based LiFi systems. One major reason for this low data rate is the inter-frame gap (IFG) of the image sensor, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for this IFG efficiently with an interleaved Hamming coding scheme. The proposed scheme is implemented and its performance measured.
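
    The idea behind interleaved forward error correction is that a burst erasure (bits lost during the IFG) gets spread across many codewords, so each codeword sees at most one error. A hedged Hamming(7,4) sketch of the general technique (our construction, not necessarily the paper's exact code):

```python
def ham74_encode(d):
    """Hamming(7,4): data bits d1..d4 -> codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def ham74_decode(c):
    """Correct up to one bit error; the syndrome is the error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def interleave(bits, depth):
    """Write `depth` codewords row-wise, read column-wise: a burst of
    up to `depth` consecutive channel errors hits distinct codewords."""
    width = len(bits) // depth
    return [bits[r * width + c] for c in range(width) for r in range(depth)]

def deinterleave(bits, depth):
    width = len(bits) // depth
    out = [0] * len(bits)
    for i, bit in enumerate(bits):
        c, r = divmod(i, depth)
        out[r * width + c] = bit
    return out
```

    With depth 3, a burst of three consecutive flipped bits lands in three different codewords, each of which Hamming(7,4) then corrects on its own.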

  20. Affordable CZT SPECT with dose-time minimization (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hugg, James W.; Harris, Brian W.; Radley, Ian

    2017-03-01

    PURPOSE Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because historically CZT was expensive, the first clinical applications were limited to small FOV. Radiation doses were initially high and exam times long. Advances have significantly improved efficiency of CZT-based molecular imaging systems and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS Compared to NaI scintillator gamma cameras: intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to <4 % at 140 keV; maximum count rate is <1.5 times higher; non-detection camera edges are reduced 3-fold. Scattered photons are greatly reduced in the photopeak energy window; image contrast is improved; and the optimal FOV is increased to the entire camera area. CONCLUSION Continual improvements in CZT detector arrays for molecular imaging, coupled with optimal collimator and image reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.

  1. Visualizing the history of living spaces.

    PubMed

    Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder

    2007-01-01

    The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

  2. Energy-efficient lighting system for television

    DOEpatents

    Cawthorne, Duane C.

    1987-07-21

    A light control system for a television camera comprises an artificial-light control system that cooperates with an iris control system. The artificial-light control system adjusts the power to the lamps illuminating the camera's viewing area so as to provide only the artificial illumination necessary to produce an adequate video signal when the camera iris is substantially open.

  3. A Camera and Multi-Sensor Automated Station Design for Polar Physical and Biological Systems Monitoring: AMIGOS

    NASA Astrophysics Data System (ADS)

    Bohlander, J. A.; Ross, R.; Scambos, T.; Haran, T. M.; Bauer, R. J.

    2012-12-01

    The Automated Meteorology - Ice/Indigenous species - Geophysics Observation System (AMIGOS) consists of a set of measurement instruments and camera(s) controlled by a single-board computer with a simplified Linux operating system and an Iridium satellite modem supporting two-way communication. Primary features of the system relevant to polar operations are low power requirements, daily data uploading, reprogramming, tolerance for low temperatures, and various approaches for automatic resets and recovery from low power or cold shut-down. Instruments include a compact weather station, C/A or dual-frequency GPS, solar flux and reflectivity sensors, sonic snow gages, a simplified radio-echo-sounder, and a resistance thermometer string in the firn column. In the current state of development, there are two basic designs. One is intended for in situ observations of glacier conditions. The other supports a high-resolution camera for monitoring biological or geophysical systems from short distances (100 m to 20 km). The stations have been successfully used in several locations for operational support, monitoring rapid ice changes in response to climate change or iceberg drift, and monitoring penguin colony activity. As of June 2012, there are 9 AMIGOS systems installed, all on the Antarctic continent. The stations are a working prototype for a planned series of upgraded stations, currently termed 'Sentinels'. These stations would carry further instrumentation, communications, and processing capability to investigate ice-ocean interaction from ice tongue, ice shelf, or fjord coastline areas.

  4. Improved Airborne System for Sensing Wildfires

    NASA Technical Reports Server (NTRS)

    McKeown, Donald; Richardson, Michael

    2008-01-01

    The Wildfire Airborne Sensing Program (WASP) is engaged in a continuing effort to develop an improved airborne instrumentation system for sensing wildfires. The system could also be used for other aerial-imaging applications, including mapping and military surveillance. Unlike prior airborne fire-detection instrumentation systems, the WASP system would not be based on custom-made multispectral line scanners and associated custom- made complex optomechanical servomechanisms, sensors, readout circuitry, and packaging. Instead, the WASP system would be based on commercial off-the-shelf (COTS) equipment that would include (1) three or four electronic cameras (one for each of three or four wavelength bands) instead of a multispectral line scanner; (2) all associated drive and readout electronics; (3) a camera-pointing gimbal; (4) an inertial measurement unit (IMU) and a Global Positioning System (GPS) receiver for measuring the position, velocity, and orientation of the aircraft; and (5) a data-acquisition subsystem. It would be necessary to custom-develop an integrated sensor optical-bench assembly, a sensor-management subsystem, and software. The use of mostly COTS equipment is intended to reduce development time and cost, relative to those of prior systems.

  5. DETECTION AND IDENTIFICATION OF TOXIC AIR POLLUTANTS USING FIELD PORTABLE AND AIRBORNE REMOTE IMAGING SYSTEMS

    EPA Science Inventory

    Remote sensing technologies are a class of instrument and sensor systems that include laser imageries, imaging spectrometers, and visible to thermal infrared cameras. These systems have been successfully used for gas phase chemical compound identification in a variety of field e...

  6. Robotic Welding and Inspection System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    H. B. Smartt; D. P. Pace; E. D. Larsen

    2008-06-01

    This paper presents a robotic system for GTA welding of lids on cylindrical vessels. The system consists of an articulated robot arm, a rotating positioner, end effectors for welding, grinding, ultrasonic and eddy current inspection. Features include weld viewing cameras, modular software, and text-based procedural files for process and motion trajectories.

  7. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  8. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961
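
    One common way foot-to-head directions bootstrap automatic calibration is through the vertical vanishing point: the least-squares intersection of all foot→head image lines. A hedged sketch in homogeneous coordinates (our illustration of the general technique, not the authors' exact algorithm, with synthetic data):

```python
import numpy as np

def vertical_vanishing_point(feet, heads):
    """Each foot-head pair defines an image line (cross product of the
    homogeneous points); for upright people these lines meet at the
    vertical vanishing point. Solve min |A v| via SVD."""
    lines = [np.cross([fx, fy, 1.0], [hx, hy, 1.0])
             for (fx, fy), (hx, hy) in zip(feet, heads)]
    _, _, vt = np.linalg.svd(np.array(lines))
    v = vt[-1]
    return v[:2] / v[2]

# Synthetic check: place a true vanishing point, then generate
# foot/head pairs whose connecting lines pass through it.
vp = np.array([100.0, -500.0])
feet = [(0.0, 0.0), (50.0, 20.0), (-30.0, 40.0), (80.0, -10.0)]
heads = [tuple(np.array(f) + 0.3 * (vp - np.array(f))) for f in feet]
```

    Given the vanishing point (and, e.g., a known average person height), per-camera orientation and scale can be estimated, which is what makes cross-camera metadata normalization possible.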

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turkington, T.

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.

  10. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine- vision systems to enable robotic vehicles ( rovers ) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. 
The calibration-target image data are stored in the computer memory for use as a template in processing terrain images. During operation on terrain, the images acquired by the left and right cameras are analyzed. The analysis includes (1) computation of the horizontal and vertical dimensions and the aspect ratios of rectangles that bound the circle images and (2) comparison of these aspect ratios with those of the template. Coordinates of distortions of the circles are used to identify and locate objects. If the analysis leads to identification of an object of significant size, then stereoscopic-vision algorithms are used to estimate the distance to the object. The time taken in performing this analysis on a single pair of images acquired by the left and right cameras in this system is a fraction of the time taken in processing the many pairs of images acquired in a sweep of the laser stripe across the field of view in the prior system. The results of the analysis include data on sizes and shapes of, and distances and directions to, objects. Coordinates of objects are updated as the vehicle moves so that intelligent decisions regarding speed and direction can be made. The results of the analysis are utilized in a computational decision-making process that generates obstacle-avoidance data and feeds those data to the control system of the robotic vehicle.
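The bounding-rectangle aspect-ratio test described above can be sketched in a few lines. This is a minimal illustration with synthetic ellipse points, not the flight code; the template ratio and tolerance values are invented for the example:

```python
import numpy as np

def bounding_aspect_ratio(points):
    """Width/height aspect ratio of the axis-aligned rectangle bounding 2D points."""
    xs, ys = points[:, 0], points[:, 1]
    return (xs.max() - xs.min()) / (ys.max() - ys.min())

def flags_obstacle(circle_points, template_ratio, tol=0.05):
    """Flag a circle image whose aspect ratio deviates from the flat-ground template."""
    return abs(bounding_aspect_ratio(circle_points) - template_ratio) > tol

# A projected circle on flat ground images as an ellipse; sample two views.
t = np.linspace(0.0, 2.0 * np.pi, 200)
flat = np.stack([2.0 * np.cos(t), np.sin(t)], axis=1)          # template view
bumped = np.stack([2.0 * np.cos(t), 1.4 * np.sin(t)], axis=1)  # distorted by a bump

template = bounding_aspect_ratio(flat)
print(flags_obstacle(flat, template), flags_obstacle(bumped, template))
```

A real pipeline would first extract the circle contours from the camera images; the comparison step itself reduces to this kind of template check.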

  11. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  12. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  13. Optical image acquisition system for colony analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Jin, Wenbiao

    2006-02-01

For counting of both colonies and plaques, there is a large number of applications including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, Ames testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on systems of this kind. Investigation shows that some existing systems, as products of a new technology, still have problems. One of the main problems is image acquisition. In order to acquire colony images of good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which makes image processing easier. A digital camera at the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.

  14. Infrared Camera Characterization of Bi-Propellant Reaction Control Engines during Auxiliary Propulsion Systems Tests at NASA's White Sands Test Facility in Las Cruces, New Mexico

    NASA Technical Reports Server (NTRS)

    Holleman, Elizabeth; Sharp, David; Sheller, Richard; Styron, Jason

    2007-01-01

This paper describes the application of a FLIR Systems A40M infrared (IR) digital camera for thermal monitoring of a Liquid Oxygen (LOX) and Ethanol bi-propellant Reaction Control Engine (RCE) during Auxiliary Propulsion System (APS) testing at the National Aeronautics & Space Administration's (NASA) White Sands Test Facility (WSTF) near Las Cruces, New Mexico. Typically, NASA has relied mostly on the use of ThermoCouples (TC) for this type of thermal monitoring due to the variability of constraints required to accurately map rapidly changing temperatures from ambient to glowing hot chamber material. Obtaining accurate real-time temperatures in the IR spectrum is made even more elusive by the changing emissivity of the chamber material as it begins to glow. The parameters evaluated prior to APS testing included: (1) remote operation of the A40M camera using fiber optic Firewire signal sender and receiver units; (2) operation of the camera inside a Pelco explosion proof enclosure with a germanium window; (3) remote analog signal display for real-time monitoring; (4) remote digital data acquisition of the A40M's sensor information using FLIR's ThermaCAM Researcher Pro 2.8 software; and (5) overall reliability of the system. An initial characterization report was prepared after the A40M characterization tests at Marshall Space Flight Center (MSFC) to document controlled heat source comparisons to calibrated TCs. Summary IR digital data recorded from WSTF's APS testing is included within this document along with findings, lessons learned, and recommendations for further usage as a monitoring tool for the development of rocket engines.

  15. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors make it possible to perform photometry in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics, namely read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU), for the commercial digital camera Canon 5D MarkIII are presented. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D MarkIII and the CCD ALTA E47 shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.

  16. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

Based on the beacon photogrammetry location technique, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established respectively. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target coordinate systems and the camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
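As an illustration of the ideal pin-hole model underlying this kind of beacon photogrammetry, the sketch below projects four hypothetical beacon coordinates into a camera image; recovering the rotation matrix and translation vector is the inverse of this mapping, typically solved as a linear (PnP-style) system. All numbers here are invented for the example:

```python
import numpy as np

def project(points_world, R, t, f):
    """Ideal pin-hole projection: rotate/translate world points into the
    camera frame, then scale by focal length over depth."""
    pts_cam = points_world @ R.T + t          # world -> camera coordinates
    u = f * pts_cam[:, 0] / pts_cam[:, 2]
    v = f * pts_cam[:, 1] / pts_cam[:, 2]
    return np.stack([u, v], axis=1)

# Four hypothetical beacons (metres) in the helicopter's target coordinate system.
beacons = np.array([
    [-1.0, -1.0, 0.0],
    [ 1.0, -1.0, 0.0],
    [ 1.0,  1.0, 0.0],
    [-1.0,  1.0, 0.0],
])
R = np.eye(3)                    # camera aligned with the target axes
t = np.array([0.0, 0.0, 10.0])   # beacons 10 m in front of the camera
f = 0.05                         # assumed 50 mm focal length

print(project(beacons, R, t, f))  # image coordinates of the four beacons
```

Given measured image coordinates for known beacon geometry, the pose (R, t) that best reproduces them is what the DCP algorithm estimates.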

  17. Studies on dynamic behavior of rotating mirrors

    NASA Astrophysics Data System (ADS)

    Li, Jingzhen; Sun, Fengshan; Gong, Xiangdong; Huang, Hongbin; Tian, Jie

    2005-02-01

A rotating mirror is the kernel unit in a Miller-type high-speed camera, serving both as an imaging element in the optical path and as the element that implements ultrahigh-speed photography. According to Schardin's Principle, the information capacity of an ultrahigh-speed camera with a rotating mirror depends on the primary wavelength of the lighting used by the camera and the limit linear velocity at the edge of the rotating mirror; the latter is related to the material (including technological specifications), cross-section shape and lateral structure of the rotating mirror. In this manuscript the dynamic behavior of high-strength aluminium alloy rotating mirrors is studied, from which it is preliminarily shown that an aluminium alloy rotating mirror can be used as a replacement for a steel or titanium alloy rotating mirror in framing photographic systems, and could also serve as a substitute for a beryllium rotating mirror in streak photographic systems.

  18. High speed movies of turbulence in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, J.L.; Zweben, S.J.; Bose, B.

    2004-10-01

A high speed (250 kHz), 300 frame charge coupled device camera has been used to image turbulence in the Alcator C-Mod Tokamak. The camera system is described and some of its important characteristics are measured, including time response and uniformity over the field-of-view. The diagnostic has been used in two applications. One uses gas-puff imaging to illuminate the turbulence in the edge/scrape-off-layer region, where D{sub 2} gas puffs localize the emission in a plane perpendicular to the magnetic field when viewed by the camera system. The dynamics of the underlying turbulence around and outside the separatrix are detected in this manner. In a second diagnostic application, the light from an injected, ablating, high speed Li pellet is observed radially from the outer midplane, and fast poloidal motion of toroidal striations is seen in the Li{sup +} light well inside the separatrix.

  19. Study of a quasi-microscope design for planetary landers

    NASA Technical Reports Server (NTRS)

    Giat, O.; Brown, E. B.

    1973-01-01

The Viking Lander facsimile camera, in its present form, provides for a minimum object distance of 1.9 meters, at which distance its resolution of 0.0007 radian provides an object resolution of 1.33 millimeters. It was deemed desirable, especially for follow-on Viking missions, to provide means for examining Martian terrain at resolutions considerably higher than that now provided. This led to the concept of a quasi-microscope, an attachment to be used in conjunction with the facsimile camera to convert it to a low-power microscope. The results are reported of an investigation to consider alternate optical configurations for the quasi-microscope and to develop optical designs for the selected system or systems. Initial requirements included consideration of object resolutions in the range of 2 to 50 micrometers, an available field of view of the order of 500 pixels, and no significant modifications to the facsimile camera.

  20. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Currently, digital cameras are replacing conventional cameras to a certain extent for static experiments. Recently, there has been a lot of interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in solid as well as fluid impact problems are presented.

  1. Multisensory System for Fruit Harvesting Robots. Experimental Testing in Natural Scenarios and with Different Kinds of Crops

    PubMed Central

    Fernández, Roemi; Salinas, Carlota; Montes, Héctor; Sarria, Javier

    2014-01-01

The motivation of this research was to explore the feasibility of detecting and locating fruits from different kinds of crops in natural scenarios. To this end, a unique, modular and easily adaptable multisensory system and a set of associated pre-processing algorithms are proposed. The proposed multisensory rig combines a high resolution colour camera and a multispectral system for the detection of fruits, as well as for the discrimination of the different elements of the plants, and a Time-Of-Flight (TOF) camera that provides fast acquisition of distances enabling the localisation of the targets in the coordinate space. A controlled lighting system completes the set-up, increasing its flexibility for use in different working conditions. The pre-processing algorithms designed for the proposed multisensory system include a pixel-based classification algorithm that labels areas of interest that belong to fruits and a registration algorithm that combines the results of the aforementioned classification algorithm with the data provided by the TOF camera for the 3D reconstruction of the desired regions. Several experimental tests have been carried out under outdoor conditions in order to validate the capabilities of the proposed system. PMID:25615730
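A pixel-based classification step of this kind can be illustrated with a toy colour rule. The red/green ratio threshold below is a hypothetical stand-in for the paper's actual classifier, which operates on colour and multispectral channels:

```python
import numpy as np

def label_fruit_pixels(rgb, r_over_g=1.3):
    """Toy pixel-based classifier: flag pixels whose red/green ratio exceeds
    a threshold as candidate fruit (hypothetical rule, not the paper's)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float) + 1e-6  # avoid division by zero
    return (r / g) > r_over_g

# 2x2 test image: one reddish "fruit" pixel among green foliage.
img = np.array([[[200, 60, 40], [40, 160, 40]],
                [[50, 140, 50], [60, 150, 60]]], dtype=np.uint8)
mask = label_fruit_pixels(img)
print(mask)
```

The resulting binary mask would then be registered against the TOF camera's range data to place each labelled region in 3D.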

  2. KSC-01pp1802

    NASA Image and Video Library

    2001-12-01

KENNEDY SPACE CENTER, Fla. - STS-109 Mission Specialist Richard Linnehan (left) and Payload Commander John Grunsfeld get a feel for tools and equipment that will be used on the mission. The crew is at KSC to take part in Crew Equipment Interface Test activities that include familiarization with the orbiter and equipment. The goal of the mission is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002.

  3. The role of an open-space CCTV system in limiting alcohol-related assault injuries in a late-night entertainment precinct in a tropical Queensland city, Australia.

    PubMed

    Pointing, Shane; Hayes-Jonkers, Charmaine; Bohanna, India; Clough, Alan

    2012-02-01

    Closed circuit television (CCTV) systems which incorporate real-time communication links between camera room operators and on-the-ground security may limit injuries resulting from alcohol-related assault. This pilot study examined CCTV footage and operator records of security responses for two periods totalling 22 days in 2010-2011 when 30 alcohol-related assaults were recorded. Semistructured discussions were conducted with camera room operators during 18 h of observation. Camera operators were proactive, efficiently directing street security to assault incidents. The system intervened in 40% (n=12) of alcohol-related assaults, limiting possible injury. This included three incidents judged as potentially preventable. A further five (17%) assault incidents were also judged as potentially preventable, while 43% (n=13) happened too quickly for intervention. Case studies describe security intervention in each category. Further research is recommended, particularly to evaluate the effects on preventing injuries through targeted awareness training to improve responsiveness and enhance the preventative capacity of similar CCTV systems.

  4. Medical imaging systems

    DOEpatents

    Frangioni, John V

    2013-06-25

A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparoscopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.

  5. The Visible Imaging System (VIS) for the Polar Spacecraft

    NASA Technical Reports Server (NTRS)

    Frank, L. A.; Sigwarth, J. B.; Craven, J. D.; Cravens, J. P.; Dolan, J. S.; Dvorsky, M. R.; Hardebeck, P. K.; Harvey, J. D.; Muller, D. W.

    1995-01-01

The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions. These resolutions are about 10 and 20 km from a spacecraft altitude of 8 R(sub e). The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.

  6. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  7. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards such as EMVA Standard 1288, which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of the temporal noise of photo and video cameras, based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). The experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
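The two-frame idea can be illustrated with synthetic data. This is a hedged sketch, not the ASNT implementation itself: subtracting two frames of the same scene cancels the fixed-pattern (spatial) component, and dividing by sqrt(2) undoes the doubling of variance caused by the subtraction, leaving the temporal noise. Shot noise is modelled as Poisson and dark temporal noise as Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)

def temporal_noise(frame_a, frame_b):
    """Temporal noise from two frames of a static scene: the difference
    cancels pattern noise; sqrt(2) removes the variance doubling."""
    return np.std(frame_a.astype(float) - frame_b.astype(float)) / np.sqrt(2.0)

# Synthetic sensor: Poisson shot noise plus Gaussian dark temporal noise.
dark_sigma = 3.0
for signal in (100.0, 400.0, 1600.0):
    a = rng.poisson(signal, (256, 256)) + rng.normal(0, dark_sigma, (256, 256))
    b = rng.poisson(signal, (256, 256)) + rng.normal(0, dark_sigma, (256, 256))
    est = temporal_noise(a, b)
    expected = np.sqrt(signal + dark_sigma ** 2)  # shot and dark add in quadrature
    print(signal, round(est, 1), round(expected, 1))
```

The estimates grow with the square root of the signal, which is the Poisson dependence the paper fits outside the saturation region.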

  8. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  9. Airport Remote Tower Sensor Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Gawdiak, Yuri; Leidichj, Christopher; Papasin, Richard; Tran, Peter B.; Bass, Kevin

    2006-01-01

Networks of video cameras, meteorological sensors, and ancillary electronic equipment are under development in collaboration among NASA Ames Research Center, the Federal Aviation Administration (FAA), and the National Oceanic and Atmospheric Administration (NOAA). These networks are to be established at and near airports to provide real-time information on local weather conditions that affect aircraft approaches and landings. The prototype network is an airport-approach-zone camera system (AAZCS), which has been deployed at San Francisco International Airport (SFO) and San Carlos Airport (SQL). The AAZCS includes remotely controlled color video cameras located on top of the SFO and SQL air-traffic control towers. The cameras are controlled by the NOAA Center Weather Service Unit located at the Oakland Air Route Traffic Control Center and are accessible via a secure Web site. The AAZCS cameras can be zoomed, panned, and tilted to cover a field of view 220° wide. The NOAA observer can see the sky condition as it is changing, thereby making possible a real-time evaluation of the conditions along the approach zones of SFO and SQL. The next-generation network, denoted a remote tower sensor system (RTSS), will soon be deployed at the Half Moon Bay Airport, and a version of it will eventually be deployed at Los Angeles International Airport. In addition to remote control of video cameras via secure Web links, the RTSS offers real-time weather observations, remote sensing, portability, and a capability for deployment at remote and uninhabited sites. The RTSS can be used at airports that lack control towers, as well as at major airport hubs, to provide synthetic augmentation of vision for both local and remote operations under what would otherwise be conditions of low or even zero visibility.

  10. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  11. Investigation into the use of smartphone as a machine vision device for engineering metrology and flaw detection, with focus on drilling

    NASA Astrophysics Data System (ADS)

    Razdan, Vikram; Bateman, Richard

    2015-05-01

This study investigates the use of a smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to machine vision systems that are out of range for small-scale manufacturers. A smartphone has to provide a similar level of accuracy to machine vision devices like smart cameras. The objective set out was to develop an app on an Android smartphone, incorporating advanced computer vision algorithms written in Java. The app could then be used for recording measurements of twist drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for an in-depth study of machine vision systems and their capabilities, including a comparison between the HTC One X Android smartphone and the Teledyne Dalsa BOA smart camera. A review of the existing metrology apps on the market was also undertaken. In addition, the drilling operation was evaluated to establish the key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers software development of the Android app, including the use of image processing algorithms such as Gaussian blur, Sobel and Canny available from the OpenCV software library, as well as designing and developing the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of twist drill bits and holes, including diametrical measurements and flaw detection. The results show that smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although the dimensional accuracy achievable with the smartphone app is below the level provided by machine vision devices like smart cameras. A smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost machine vision system for small-scale manufacturers, especially in field metrology and flaw detection.
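As a simplified illustration of the kind of measurement involved, the sketch below recovers a hole diameter from a single scan line using a central-difference (Sobel-style) derivative. The dimensions and pixel scale are invented, and a real implementation would apply the OpenCV operators named above to full 2D images:

```python
import numpy as np

def edge_positions(profile):
    """Find the strongest intensity transition in each half of a 1-D scan line
    using a central-difference (Sobel-style) derivative."""
    grad = np.abs(profile[2:] - profile[:-2])
    mid = len(grad) // 2
    left = int(np.argmax(grad[:mid])) + 1
    right = int(np.argmax(grad[mid:])) + mid + 1
    return left, right

# Synthetic scan line across a dark hole on a bright surface,
# imaged at an assumed scale of 0.05 mm/pixel.
mm_per_px = 0.05
profile = np.full(200, 200.0)
profile[40:160] = 30.0  # the hole spans pixels 40..159

left, right = edge_positions(profile)
diameter_mm = (right - left) * mm_per_px
print(diameter_mm)
```

The pixel scale itself would come from calibrating the smartphone camera against an object of known size, which is where most of the accuracy limitations noted above arise.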

  12. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs of the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets can also be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of the corresponding image-processing filters and targets, the vision-based position-measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.
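One way a single camera can recover all three coordinates is by exploiting the known physical size of the printed target: similar triangles give the depth, and the pin-hole model gives the lateral offsets. The sketch below illustrates this principle with invented numbers; it is not NASA's actual algorithm:

```python
import numpy as np

def target_position(u_px, v_px, side_px, side_m, f_px):
    """Camera-frame coordinates of a target of known physical size from a
    single image: depth z = f * L / l by similar triangles, then x and y
    follow from the ideal pin-hole model."""
    z = f_px * side_m / side_px
    x = u_px * z / f_px
    y = v_px * z / f_px
    return np.array([x, y, z])

# Hypothetical numbers: a 0.20 m square target that images 100 px wide,
# centred 250 px right of and 125 px above the principal point, f = 2500 px.
pos = target_position(250.0, -125.0, 100.0, 0.20, 2500.0)
print(pos)  # [x, y, z] in metres
```

Accuracy then hinges on how precisely the target's corners or centroid can be located in pixel coordinates, which is what the matched filter/target design described above improves.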

  13. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    The exclusive charge-coupled device (CCD) camera system for the endoscope and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and electronic fiberscopy system are at least US Dollars 10,000 and US Dollars 30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US Dollars 1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  14. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability, and resolution. Many researchers have examined these characteristics of motion capture systems, but they used only two cameras, positioned 90 degrees apart. Everaert used 4 cameras, but all were aligned along major axes (two in x, one each in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability, and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.
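
    The three characteristics named above can be estimated from repeated measurements of a marker held at a known static position. A minimal sketch of the idea (function name and all numbers are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical illustration: accuracy as bias from a known static position,
# repeatability as the scatter of repeated readings of that same position.
def characterize(measured_mm, true_mm):
    measured = np.asarray(measured_mm, dtype=float)
    accuracy = abs(measured.mean() - true_mm)   # systematic offset (bias)
    repeatability = measured.std(ddof=1)        # spread of repeated trials
    return accuracy, repeatability

# Simulated static trials: true marker position 100.0 mm.
readings = [100.21, 100.18, 100.25, 100.19, 100.22]
acc, rep = characterize(readings, 100.0)
```

    Resolution would additionally require the smallest detectable displacement, which a quasi-static protocol probes by moving the marker in known small increments.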

  15. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  16. Systematic Calibration for a Backpacked Spherical Photogrammetry Imaging System

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Su, B. W.; Hsiao, K. W.; Jhan, J. P.

    2016-06-01

    A spherical camera can observe the environment with almost a 720-degree field of view in one shot, which is useful for augmented reality, environment documentation, and mobile mapping applications. This paper develops a spherical photogrammetry imaging system for 3D measurement through a backpacked mobile mapping system (MMS). The equipment used includes a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, and an odometer. The aim is to apply the photogrammetric space intersection technique directly for 3D mapping from a spherical image stereo-pair. For this purpose, several systematic calibration procedures are required, including lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. Lens distortion is severe in the Ladybug-5 camera's six original images; when mosaicking these six images into a spherical image, we propose using their relative orientation and correcting their lens distortion at the same time. The constructed spherical image nevertheless still contains systematic error, which reduces the 3D measurement accuracy. For direct georeferencing, a ground control field is established for boresight/lever-arm calibration, and the calibrated parameters are then applied to obtain the exterior orientation parameters (EOPs) of all spherical images. Finally, the 3D positioning accuracy after space intersection is evaluated, including with EOPs obtained by the structure-from-motion method.
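
    The space intersection step can be illustrated as a least-squares two-ray triangulation: given each camera's center and a unit ray direction toward the target (which in the paper's pipeline would come from calibrated spherical image coordinates and the EOPs), solve for the 3D point closest to both rays. The geometry below is invented for illustration:

```python
import numpy as np

# Least-squares intersection of skew rays: minimize the sum of squared
# perpendicular distances from point X to each ray (C_i, d_i).
def intersect_rays(centers, dirs):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for C, d in zip(centers, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ C
    return np.linalg.solve(A, b)

# Two made-up camera centers observing a made-up 3D point.
C1, C2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
X_true = np.array([0.5, 2.0, 1.0])
X = intersect_rays([C1, C2], [X_true - C1, X_true - C2])
```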

  17. Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. Precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
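
    The two post-processing ideas above can be sketched briefly: row-wise digital binning over 8 adjacent rows, and precision taken as the standard deviation of many single-shot velocity estimates. All data here are synthetic stand-ins, not FLEET measurements:

```python
import numpy as np

# Row-wise digital binning: sum every n adjacent rows into one "super-row",
# trading spatial resolution for signal-to-noise ratio.
def bin_rows(image, n=8):
    h = (image.shape[0] // n) * n            # drop any incomplete group
    return image[:h].reshape(-1, n, image.shape[1]).sum(axis=1)

rng = np.random.default_rng(0)
frame = rng.poisson(5.0, size=(64, 32)).astype(float)
binned = bin_rows(frame, 8)                  # 64 rows -> 8 super-rows

# Precision: std of repeated single-shot velocity estimates (m/s).
shots = rng.normal(100.0, 0.5, size=300)     # simulated velocities
precision = shots.std(ddof=1)
```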

  18. KSC-01pp1760

    NASA Image and Video Library

    2001-11-29

    KENNEDY SPACE CENTER, Fla. -- Fully unwrapped, the Advanced Camera for Surveys, which is suspended by an overhead crane, is checked over by workers. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  19. Mach-Zehnder-based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate and establish a time reference, the markers or combs are indicia serving as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator that provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator, which modulates the reference signal to a higher-frequency optical signal that is output through a fiber-coupled link to the streak camera.

  20. Design, simulation and experimental analysis of an anti-stray-light illumination system of fundus camera

    NASA Astrophysics Data System (ADS)

    Ma, Chen; Cheng, Dewen; Xu, Chen; Wang, Yongtian

    2014-11-01

    The fundus camera is a complex optical system for retinal photography, involving illumination and imaging of the retina. Stray light is one of the most significant problems of the fundus camera, because the retina is so minimally reflective that back reflections from the cornea and any other optical surface are likely to be significantly greater than the light reflected from the retina. To provide maximum illumination to the retina while eliminating back reflections, a novel design of the illumination system used in a portable fundus camera is proposed. Internal illumination, in which the eyepiece is shared by both the illumination system and the imaging system but the condenser and the objective are separated by a beam splitter, is adopted for its high efficiency. To eliminate the strong stray light caused by the corneal center and make full use of light energy, the annular stop in conventional illumination systems is replaced by a fiber-coupled, ring-shaped light source that forms an annular beam. Parameters including the size and divergence angle of the light source are specially designed. To further weaken the stray light, a polarized light source is used, and an analyzer plate is placed after the beam splitter in the imaging system. Simulation results show that the illumination uniformity at the fundus exceeds 90%, and the stray light is within 1%. Finally, a proof-of-concept prototype is developed and retinal photos of an ophthalmophantom are captured. The experimental results show that ghost images and stray light have been greatly reduced, to a level that does not interfere with professional diagnosis.

  1. Dynamic photoelasticity by TDI imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    2001-06-01

    High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording, as well as system limitations, are explained in the paper.

  2. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple-camera system (up to 256 cameras) for NASA Kennedy installations in which camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that benefits such as improved video performance, immunity from electromagnetic and radio-frequency interference, elimination of repeater stations, and greater system configuration flexibility can be realized by applying the proven fiber-optic transmission concept. The control system will marry the lens, pan-and-tilt, and camera control functions into a modular Local Area Network (LAN) control network. Such a system does not exist commercially at present, since the television broadcast industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.

  3. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 h+) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications that require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts, including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  4. Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system

    NASA Astrophysics Data System (ADS)

    Hesterman, Jacob Y.; Kupinski, Matthew A.; Furenlid, Lars R.; Wilson, Donald W.

    2005-04-01

    We have previously utilized lumpy object models and simulated imaging systems in conjunction with the ideal observer to compute figures of merit for hardware optimization. In this paper, we describe the development of methods and phantoms necessary to validate or experimentally carry out these optimizations. Our study was conducted on a four-camera small-animal SPECT system that employs interchangeable pinhole plates to operate under a variety of pinhole configurations and magnifications (representing optimizable system parameters). We developed a small-animal phantom capable of producing random backgrounds for each image sequence. The task chosen for the study was the detection of a 2 mm diameter sphere within the phantom-generated random background. A total of 138 projection images were used, half of which included the signal. As our observer, we employed the channelized Hotelling observer (CHO) with Laguerre-Gauss channels. The signal-to-noise ratio (SNR) of this observer was used to compare different system configurations. Results indicate agreement between experimental and simulated data, with higher detectability found for multiple-camera, multiple-pinhole, and high-magnification systems, although mixtures of magnifications often outperform systems employing a single magnification. This work will serve as a basis for future studies pertaining to system hardware optimization.
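
    The CHO figure of merit can be sketched as follows: project each image onto a small set of channels, form the Hotelling template in channel space, and report its SNR. The channels below are random stand-ins (not the Laguerre-Gauss channels of the study) and the images are synthetic:

```python
import numpy as np

# Channelized Hotelling observer SNR on signal-present/absent image sets.
# imgs_*: (n_images, n_pixels) arrays; U: (n_pixels, n_channels) channel matrix.
def cho_snr(imgs_signal, imgs_noise, U):
    v1 = imgs_signal @ U                    # channel outputs, signal present
    v0 = imgs_noise @ U                     # channel outputs, signal absent
    dv = v1.mean(axis=0) - v0.mean(axis=0)  # mean channel-space difference
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T)) # average channel covariance
    w = np.linalg.solve(S, dv)              # Hotelling template
    return float(np.sqrt(dv @ w))           # SNR = sqrt(dv' S^-1 dv)

rng = np.random.default_rng(1)
n_pix, n_ch = 256, 5
U = rng.normal(size=(n_pix, n_ch))          # placeholder channels
signal = np.zeros(n_pix); signal[:16] = 0.8 # toy signal profile
g0 = rng.normal(size=(400, n_pix))          # signal-absent images
g1 = rng.normal(size=(400, n_pix)) + signal # signal-present images
snr = cho_snr(g1, g0, U)
```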

  5. Levels of Autonomy and Autonomous System Performance Assessment for Intelligent Unmanned Systems

    DTIC Science & Technology

    2014-04-01

    LIDAR and camera sensors that is driven entirely by teleoperation would be AL 0. If that same robot used its LIDAR and camera data to generate a...obstacle detection, mapping, path planning 3 CMMAD semi- autonomous counter- mine system (Few 2010) Talon UGV, camera, LIDAR , metal detector...NCAP framework are performed on individual UMS components and do not require mission level evaluations. For example, bench testing of camera, LIDAR

  6. Viking lander imaging investigation during extended and continuation automatic missions. Volume 2: Lander 2 picture catalog of experiment data record

    NASA Technical Reports Server (NTRS)

    Jones, K. L.; Henshaw, M.; Mcmenomy, C.; Robles, A.; Scribner, P. C.; Wall, S. D.; Wilson, J. W.

    1981-01-01

    Images returned by the two Viking landers during the extended and continuation automatic phases of the Viking Mission are presented. Information describing the conditions under which the images were acquired is included with skyline drawings showing the images positioned in the field of view of the cameras. Subsets of the images are listed in a variety of sequences to aid in locating images of interest. The format and organization of the digital magnetic tape storage of the images are described. A brief description of the mission and the camera system is also included.

  7. Techniques for optically compressing light intensity ranges

    DOEpatents

    Rushford, Michael C.

    1989-01-01

    A pin hole camera assembly for use in viewing an object having a relatively large light intensity range, for example a crucible containing molten uranium in an atomic vapor laser isotope separator (AVLIS) system is disclosed herein. The assembly includes means for optically compressing the light intensity range appearing at its input sufficient to make it receivable and decipherable by a standard video camera. A number of different means for compressing the intensity range are disclosed. These include the use of photogray glass, the use of a pair of interference filters, and the utilization of a new liquid crystal notch filter in combination with an interference filter.

  8. Techniques for optically compressing light intensity ranges

    DOEpatents

    Rushford, M.C.

    1989-03-28

    A pin hole camera assembly for use in viewing an object having a relatively large light intensity range, for example a crucible containing molten uranium in an atomic vapor laser isotope separator (AVLIS) system is disclosed herein. The assembly includes means for optically compressing the light intensity range appearing at its input sufficient to make it receivable and decipherable by a standard video camera. A number of different means for compressing the intensity range are disclosed. These include the use of photogray glass, the use of a pair of interference filters, and the utilization of a new liquid crystal notch filter in combination with an interference filter. 18 figs.

  9. High-speed optical 3D sensing and its applications

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro

    2016-12-01

    This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The sensing speeds in focus range from 100 to 1000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, and real-time systems are introduced. Three example applications of this type of sensing technology are also introduced: surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.

  10. Dual Beam System (SEM/FIB) Equipment for the Kleberg Advanced Microscopy Center

    DTIC Science & Technology

    2015-06-05

    Journal of Applied Physics (04 2015) John E. Sanchez, Ramón Díaz de León, Fernando Mendoza Santoyo, Gabriel González, Miguel José Yacaman, Arturo Ponce...includes PCs and cameras , will have available support for three years. What is Not Covered: This Warranty does not cover high-wear, consumable...110 x 110 mm eucentric stage - Multi-purpose holder - CCD IR camera - In-lens detectors: Lower (T1) and Upper (T2) - SE detector (ET-SED

  11. Video occupant detection and classification

    DOEpatents

    Krumm, John C.

    1999-01-01

    A system for determining when it is not safe to arm a vehicle airbag by storing representations of known situations as observed by a camera at a passenger seat; and comparing a representation of a camera output of the current situation to the stored representations to determine the known situation most closely represented by the current situation. In the preferred embodiment, the stored representations include the presence or absence of a person or infant seat in the front passenger seat of an automobile.
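
    The comparison step described in the patent amounts to a nearest-match classification against the stored situations. A minimal sketch (feature vectors and labels are invented placeholders, not the patent's actual representations):

```python
import numpy as np

# Pick the stored situation whose representation is closest (Euclidean
# distance) to the representation of the current camera output.
def classify(current, stored):
    return min(stored, key=lambda label: np.linalg.norm(current - stored[label]))

stored = {
    "empty_seat":  np.array([0.0, 0.0, 0.0]),
    "adult":       np.array([1.0, 0.8, 0.9]),
    "infant_seat": np.array([0.2, 1.0, 0.1]),
}
label = classify(np.array([0.9, 0.7, 1.0]), stored)
# The airbag would then be armed or disarmed based on the matched label.
```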

  12. SU-E-J-17: A Study of Accelerator-Induced Cerenkov Radiation as a Beam Diagnostic and Dosimetry Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bateman, F; Tosh, R

    2014-06-01

    Purpose: To investigate accelerator-induced Cerenkov radiation imaging as a possible beam diagnostic and medical dosimetry tool. Methods: Cerenkov emission produced by clinical accelerator beams in a water phantom was imaged using a camera system comprised of a high-sensitivity thermoelectrically-cooled CCD camera coupled to a large-aperture (f/0.75) objective lens with 16:1 magnification. This large-format lens allows a significant amount of the available Cerenkov light to be collected and focused onto the CCD camera to form the image. Preliminary images, obtained with 6 MV photon beams, used an unshielded camera mounted horizontally with the beam normal to the water surface, and confirmed the detection of Cerenkov radiation. Several improvements were subsequently made, including the addition of radiation shielding around the camera and altering of the beam and camera angles to give a more favorable geometry for Cerenkov light collection. A detailed study was then undertaken over a range of electron and photon beam energies and dose rates to investigate the possibility of using this technique for beam diagnostics and dosimetry. Results: A series of images were obtained at a fixed dose rate over a range of electron energies from 6 to 20 MeV. The location of maximum intensity was found to vary linearly with the energy of the beam. A linear relationship was also found between the light observed from a fixed point on the central axis and the dose rate for both photon and electron beams. Conclusion: We have found that the analysis of images of beam-induced Cerenkov light in a water phantom has potential for use as a beam diagnostic and medical dosimetry tool. Our future goals include the calibration of the light output in terms of radiation dose and development of a tomographic system for 3D Cerenkov imaging in water phantoms and other media.
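
    The linearity check reported in the Results can be sketched as a simple straight-line fit of light intensity at a fixed point against dose rate, with the coefficient of determination as the quality measure. The numbers below are invented for illustration, not measured data:

```python
import numpy as np

# Hypothetical intensity-vs-dose-rate data for a fixed point on the central axis.
dose_rate = np.array([100.0, 200.0, 300.0, 400.0, 500.0, 600.0])  # MU/min (assumed units)
intensity = np.array([10.1, 19.8, 30.3, 40.0, 49.9, 60.2])        # camera counts

# Least-squares line and R^2 to quantify linearity.
slope, intercept = np.polyfit(dose_rate, intensity, 1)
pred = slope * dose_rate + intercept
ss_res = ((intensity - pred) ** 2).sum()
ss_tot = ((intensity - intensity.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
```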

  13. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Next, several pivotal designs of the imaging system are described, such as the design of the focal plane module, the video signal processing, the controller design of the imaging system, and synchronous photography among the forward, nadir, and backward cameras and the line-matrix CCD of the nadir camera. Finally, test results for the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography among the forward, nadir, and backward cameras, and for the line-matrix CCD of the nadir camera, is better than 4 ns; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30 degrees, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is controlled below 30° over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane module temperature control, and SNR, guaranteeing the precision needed for satellite photogrammetry.

  14. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD that is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto-tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond that of typical security cameras by making it possible to monitor processes. Example applications of thermography[2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc.[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
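
    The core of such a calibration can be illustrated with a simple two-point radiometric model that maps raw sensor counts to scene temperature using two blackbody references. This linear model and all numbers are assumptions for illustration; a real calibration would also compensate for camera operating temperature, as the paper describes:

```python
# Two-point calibration: solve gain/offset from two blackbody measurements,
# then map any raw count to an estimated scene temperature.
def calibrate(counts_ref, temps_ref):
    (c1, c2), (t1, t2) = counts_ref, temps_ref
    gain = (t2 - t1) / (c2 - c1)     # degrees per count
    offset = t1 - gain * c1
    return gain, offset

# Hypothetical blackbody references at 20 C and 60 C.
gain, offset = calibrate((8000.0, 12000.0), (20.0, 60.0))
t_est = gain * 10000.0 + offset      # temperature estimate for a raw count of 10000
```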

  15. The first satellite laser echoes recorded on the streak camera

    NASA Technical Reports Server (NTRS)

    Hamal, Karel; Prochazka, Ivan; Kirchner, Georg; Koidl, F.

    1993-01-01

    The application of a streak camera with a circular sweep to satellite laser ranging is described. The Modular Streak Camera system employing the circular sweep option was integrated into a conventional satellite laser system, and experimental satellite tracking and ranging was performed. The first streak camera records of satellite laser echoes are presented.

  16. Multi-band infrared camera systems

    NASA Astrophysics Data System (ADS)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS-addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid-nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging of up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  17. IR-camera methods for automotive brake system studies

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Lee, Kwangjin

    1998-03-01

    Automotive brake systems are energy conversion devices that convert kinetic energy into heat energy. Several mechanisms, mostly related to noise and vibration problems, can occur during brake operation and are often associated with a non-uniform temperature distribution on the brake disk. These problems are of significant cost to the industry and are a quality concern to automotive companies and brake system vendors. One such problem is thermo-elastic instability in brake systems. During the occurrence of these instabilities, several localized hot spots form around the circumferential direction of the brake disk. The temperature distribution and the time dependence of these hot spots, a critical factor in analyzing this problem and in developing a fundamental understanding of this phenomenon, were recorded. Other modes of non-uniform temperature distribution, including hot banding and extreme localized heating, were also observed. All of these modes were observed on automotive brake systems using a high-speed IR camera operating in snap-shot mode. The camera was synchronized with the rotation of the brake disk so that the time evolution of hot regions could be studied. This paper discusses the experimental approach in detail.

  18. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge-coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements with our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera on various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
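
    In frequency-domain FLIM, the lifetime follows from the phase shift of the modulated emission relative to the excitation, tau_phi = tan(phi) / (2*pi*f). A quick numeric round-trip sketch; the 40 MHz modulation frequency is an assumed example value, not taken from the paper:

```python
import math

# Phase lifetime from the measured phase shift at modulation frequency f.
def lifetime_from_phase(phi_rad, f_hz):
    return math.tan(phi_rad) / (2 * math.pi * f_hz)

f = 40e6                                   # 40 MHz modulation (assumed)
tau = 4e-9                                 # 4 ns, typical of fluorescein
phi = math.atan(2 * math.pi * f * tau)     # phase such a lifetime would produce
tau_back = lifetime_from_phase(phi, f)     # recover the lifetime from the phase
```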

  19. Teleoperated control system for underground room and pillar mining

    DOEpatents

    Mayercheck, William D.; Kwitowski, August J.; Brautigam, Albert L.; Mueller, Brian K.

    1992-01-01

    A teleoperated mining system is provided for remotely controlling the various machines involved in thin-seam mining. A thin-seam continuous miner located at the mining face includes a camera mounted thereon and a slave computer for controlling the miner and the camera. A plurality of sensors relay information about the miner and the face to the slave computer. A ventilation sub-system, controlled by the slave computer, removes combustible material from the mining face. A haulage sub-system, also controlled by the slave computer, removes material mined by the continuous miner from the mining face to a collection site. A base station, which controls the supply of power and water to the continuous miner, haulage sub-system, and ventilation sub-system, includes a cable/hose handling module for winding or unwinding cables/hoses connected to the miner, an operator control module, and a hydraulic power and air compressor module for supplying air to the miner. An operator-controlled host computer housed in the operator control module is connected to the slave computer via a two-wire communications line.

  20. Comparison between magnetic anchoring and guidance system camera-assisted laparoendoscopic single-site surgery nephrectomy and conventional laparoendoscopic single-site surgery nephrectomy in a porcine model: focus on ergonomics and workload profiles.

    PubMed

    Han, Woong Kyu; Tan, Yung K; Olweny, Ephrem O; Yin, Gang; Liu, Zhuo-Wei; Faddegon, Stephen; Scott, Daniel J; Cadeddu, Jeffrey A

    2013-04-01

    To compare surgeon-assessed ergonomic and workload demands of magnetic anchoring and guidance system (MAGS) laparoendoscopic single-site surgery (LESS) nephrectomy with conventional LESS nephrectomy in a porcine model. Participants included two expert and five novice surgeons who each performed bilateral LESS nephrectomy in two nonsurvival animals using either the MAGS camera or a conventional laparoscope. Task difficulty and workload demands of the surgeon and camera driver were assessed using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Surgeons were also asked to score six parameters on a Likert scale (range 1=low/easy to 5=high/hard): procedure-associated workload, ergonomics, technical challenge, visualization, accidental events, and instrument handling. Each step of the nephrectomy was also timed, and instrument clashing was quantified. Scores for each parameter on the Likert scale were significantly lower for MAGS-LESS nephrectomy. The mean numbers of internal and external clashes were significantly lower for the MAGS camera (p<0.001). Mean task times for each procedure were shorter for experts than for novices, but the difference was not statistically significant. NASA-TLX workload ratings by the surgeon and camera driver showed that MAGS resulted in a significantly lower workload than the conventional laparoscope during LESS nephrectomy (p<0.05). The use of the MAGS camera during LESS nephrectomy lowers the task workload for both the surgeon and the camera driver when compared to conventional laparoscope use. Subjectively, it also appears to improve surgeons' impressions of ergonomics and technical challenge. Pending approval for clinical use, further evaluation in the clinical setting is warranted.
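    The NASA-TLX scoring used in comparisons like this one combines six subscale ratings (mental, physical, and temporal demand, performance, effort, frustration) into an overall workload score; the weighted variant derives weights from 15 pairwise comparisons between dimensions. A sketch of both standard scoring rules (illustrative only, not this study's analysis code):

    ```python
    def raw_tlx(ratings):
        """Raw TLX: unweighted mean of the six subscale ratings (each 0-100)."""
        assert len(ratings) == 6
        return sum(ratings) / 6.0

    def weighted_tlx(ratings, weights):
        """Weighted TLX: each weight counts how often that dimension was chosen
        in the 15 pairwise comparisons, so the weights sum to 15."""
        assert len(ratings) == len(weights) == 6 and sum(weights) == 15
        return sum(r * w for r, w in zip(ratings, weights)) / 15.0
    ```

    The weighted form emphasizes the workload dimensions a given operator judges most relevant to the task, which is why it is commonly preferred for comparing conditions such as the two camera systems here.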

  1. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to demonstrate the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  2. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to demonstrate the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  3. High-definition television evaluation for remote handling task performance

    NASA Astrophysics Data System (ADS)

    Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J. V.; Herndon, J. N.

    Described are experiments designed to evaluate the impact of HDTV (High-Definition Television) on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (1) high-definition color television, (2) high-definition monochromatic television, (3) standard-resolution monochromatic television, and (4) standard-resolution stereoscopic monochromatic television. The stereo system accomplished stereoscopy by displaying two cross-polarized images, one reflected by a half-silvered mirror and one seen through the mirror. Observers wore spectacles with cross-polarized lenses so that the left eye received only the view from the left camera and the right eye received only the view from the right camera.

  4. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which poses a major challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated at the same time and optimized by the Levenberg-Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (an RMS error of 0.04 mm) and low cost, and it is especially suitable for on-site calibration.
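    The core numerical step described above, minimizing reprojection error over an inter-camera transformation with Levenberg-Marquardt, can be sketched with SciPy on synthetic data. The intrinsics `K`, the point layout, and the six-parameter rotation-vector pose encoding below are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(params, pts, K):
        """Pinhole projection of 3D points after a 6-DOF pose.
        params: rotation vector (3 entries) followed by translation (3 entries)."""
        rvec, t = params[:3], params[3:]
        cam = Rotation.from_rotvec(rvec).apply(pts) + t
        uv = (K @ cam.T).T
        return uv[:, :2] / uv[:, 2:3]

    def residuals(params, pts, observed, K):
        """Reprojection error vector minimized by Levenberg-Marquardt."""
        return (project(params, pts, K) - observed).ravel()

    # Synthetic setup: target feature points expressed in camera A's frame,
    # observed by camera B with hypothetical intrinsics K.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    rng = np.random.default_rng(0)
    pts_a = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(20, 3))
    true_pose = np.array([0.10, -0.05, 0.02, 0.30, -0.20, 0.50])
    observed = project(true_pose, pts_a, K)

    # Recover the A-to-B transformation by LM minimization of reprojection error.
    fit = least_squares(residuals, np.zeros(6), method="lm",
                        args=(pts_a, observed, K))
    ```

    With noise-free synthetic observations the optimizer recovers the ground-truth pose; in practice the residuals would come from detected coded-target features in both cameras.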

  5. Uav Cameras: Overview and Geometric Calibration Benchmark

    NASA Astrophysics Data System (ADS)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark that checks selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not maintain constant geometry and thus cannot be regarded as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Close-to-metric UAV cameras may have particular advantages in such scenarios. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  6. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e. separation distance, camera angles, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.

  7. Depth Perception In Remote Stereoscopic Viewing Systems

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts dynamic stereoscopic depth distortion is reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.
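    The geometric trade-off in the report follows from the standard stereo relation Z = fB/d (depth Z from focal length f, intercamera baseline B, and disparity d): for a fixed disparity error, depth uncertainty grows as Z squared and shrinks with larger baseline or focal length. A small illustrative sketch (the parameter values are hypothetical, not from the report):

    ```python
    def depth_resolution(z_m, focal_px, baseline_m, disparity_err_px=1.0):
        """Approximate depth uncertainty dZ = Z^2 * dd / (f * B), from Z = f*B/d."""
        return z_m**2 * disparity_err_px / (focal_px * baseline_m)

    # At a 2 m working distance with a 1000 px focal length and 0.1 m baseline,
    # a one-pixel disparity error corresponds to roughly 4 cm of depth uncertainty;
    # doubling the baseline halves it.
    ```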

  8. A New Comprehensive Lightning Instrumentation System for Pad 39B at the Kennedy Space Center, Florida

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, Vladimir A.; Mata, Angel G.; Bonilla, Tatiana; Navedo, Emmanuel; Snyder, Gary P.

    2010-01-01

    A new comprehensive lightning instrumentation system has been designed for Launch Complex 39B at the Kennedy Space Center, Florida. This new instrumentation system includes the synchronized recording of six high-speed video cameras, currents through the nine downconductors of the new lightning protection system, four B-dot, 3-axis measurement stations, and five D-dot stations composed of two antennas each. The instrumentation system is composed of transient recorders and digitizers that are located close to the sensors in the field. The sensors and transient recorders communicate via optical fiber. The transient recorders are triggered by the B-dot sensors, the D-dot sensors, or the current through the downconductors. The high-speed cameras are triggered by the transient recorders when a qualified trigger is detected.

  9. Selecting among competing models of electro-optic, infrared camera system range performance

    USGS Publications Warehouse

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set for which experimental trials were conducted.
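    AIC-based selection of the kind applied in this record ranks candidate models by AIC = 2k - 2 ln L (k parameters, maximized likelihood L) and converts AIC differences into Akaike weights, interpretable as relative support for each model in the candidate set. A generic sketch of that computation (not the authors' code):

    ```python
    import numpy as np

    def aic(log_likelihood, n_params):
        """Akaike's Information Criterion: lower values indicate better models."""
        return 2.0 * n_params - 2.0 * log_likelihood

    def akaike_weights(aic_values):
        """Normalized evidence weights from a set of candidate-model AICs."""
        delta = np.asarray(aic_values, dtype=float)
        delta -= delta.min()          # differences from the best (lowest) AIC
        w = np.exp(-0.5 * delta)
        return w / w.sum()
    ```

    In an observer experiment, each candidate range-performance model would contribute its maximized log-likelihood over the identification responses; the weights then quantify how much better the winning model is than its competitors.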

  10. Applications of optical fibers and miniature photonic elements in medical diagnostics

    NASA Astrophysics Data System (ADS)

    Blaszczak, Urszula; Gilewski, Marian; Gryko, Lukasz; Zajac, Andrzej; Kukwa, Andrzej; Kukwa, Wojciech

    2014-05-01

    Endoscope designs known for decades, in particular small devices with diameters of a few millimetres, are based on fibre-optic imaging bundles or on bundles of fibres in the illumination system (usually with a halogen source). Commercially emerging CCD and CMOS cameras with sensor sizes of less than 5 mm, together with high-power LED solutions, make it possible to design and construct modern endoscopes with many innovative properties. These constructions offer higher resolution and are also relatively cheaper, especially given the integration of the majority of functions on a single chip. These features of CMOS sensors shorten the cycle of introducing newly developed instruments to the market. The paper includes a description of the concept of an endoscope with a miniature camera built on the basis of a CMOS detector manufactured by OmniVision. A set of LEDs located at the operator side serves as the illumination system. A fibre-optic system and the camera lens are used to shape the beam illuminating the observed tissue. Furthermore, to broaden the range of applications of the endoscope, the illuminator allows control of the spectral characteristics of the emitted light. The paper presents an analysis of the basic parameters of the optical system of the endoscope. The possibility of adjusting the magnification of the lens, the field of view of the camera, and its spatial resolution is discussed. Special attention is drawn to issues related to the selection of light sources used for illumination, in terms of energy efficiency and the possibility of adjusting the colour of the emitted light in order to improve the quality of the image obtained by the camera.

  11. Development of an unmanned agricultural robotics system for measuring crop conditions for precision aerial application

    USDA-ARS?s Scientific Manuscript database

    An Unmanned Agricultural Robotics System (UARS) is acquired, rebuilt with desired hardware, and operated in both classrooms and field. The UARS includes crop height sensor, crop canopy analyzer, normalized difference vegetative index (NDVI) sensor, multispectral camera, and hyperspectral radiometer...

  12. International testing of a Mars rover prototype

    NASA Astrophysics Data System (ADS)

    Kemurjian, Alexsandr Leonovich; Linkin, V.; Friedman, L.

    1993-03-01

    Tests on a prototype engineering model of the Russian Mars 96 Rover were conducted by an international team in and near Death Valley in the United States in late May, 1992. These tests were part of a comprehensive design and testing program initiated by the three Russian groups responsible for the rover development. The specific objectives of the May tests were: (1) evaluate rover performance over different Mars-like terrains; (2) evaluate state-of-the-art teleoperation and autonomy development for Mars rover command, control and navigation; and (3) organize an international team to contribute expertise and capability on the rover development for the flight project. The range and performance that can be planned for the Mars mission is dependent on the degree of autonomy that will be possible to implement on the mission. Current plans are for limited autonomy, with Earth-based teleoperation for the nominal navigation system. Several types of television systems are being investigated for inclusion in the navigation system including panoramic camera, stereo, and framing cameras. The tests used each of these in teleoperation experiments. Experiments were included to consider use of such TV data in autonomy algorithms. Image processing and some aspects of closed-loop control software were also tested. A micro-rover was tested to help consider the value of such a device as a payload supplement to the main rover. The concept is for the micro-rover to serve like a mobile hand, with its own sensors including a television camera.

  13. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles, and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  14. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

    focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera... sensitive in the 9-micron region. The Amber QWIP infrared camera had 256×256 pixels, a pixel pitch of 38 µm, a focal length of 1.8 m, and a FOV of 5.4×5.4 mr... each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  15. AERCam Autonomy: Intelligent Software Architecture for Robotic Free Flying Nanosatellite Inspection Vehicles

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.

    2006-01-01

    The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an airbearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supply views of EVA operations to IVA and/or ground crews monitoring the EVA, and carry out independent visual inspections of areas of interest around the spacecraft.
To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest, and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.

  16. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
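    A screen-camera homography of the kind estimated here maps planar points through a 3×3 matrix defined up to scale. As background, the generic direct linear transform (DLT) for recovering a homography from point correspondences can be sketched as follows; this is the standard building block, not the paper's noniterative autocalibration algorithm:

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Estimate H (3x3, up to scale) from (N, 2) corresponding points, N >= 4."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
            rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
        # H is the null vector of the stacked constraint matrix: the right
        # singular vector belonging to the smallest singular value.
        _, _, vt = np.linalg.svd(np.asarray(rows))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]
    ```

    In the setting of this paper the interesting twist is that such correspondences between the screen and the camera image are not directly available, which is why the screen-camera homography must be recovered indirectly from the projected patterns.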

  17. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    PubMed Central

    Lambers, Martin; Kolb, Andreas

    2017-01-01

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888

  18. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    PubMed

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
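    In an AMCW ToF camera of the kind simulated here, range follows from the measured phase shift of the amplitude-modulated signal: d = c·Δφ/(4π·f_mod), with an unambiguous range of c/(2·f_mod) beyond which the phase wraps. These standard relations (background formulas, not part of the simulator itself) can be sketched as:

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def amcw_distance(phase_shift_rad, f_mod_hz):
        """Range from measured phase shift: d = c * phi / (4 * pi * f_mod)."""
        return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

    def unambiguous_range(f_mod_hz):
        """Beyond c / (2 * f_mod) the phase wraps and ranges alias."""
        return C / (2.0 * f_mod_hz)
    ```

    Multipath interference of the kind the simulator models corrupts the measured phase, so the recovered distance is biased even though this conversion itself is exact.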

  19. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  20. Knowledge-based imaging-sensor fusion system

    NASA Technical Reports Server (NTRS)

    Westrom, George

    1989-01-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera. All the data cannot possibly be acquired from a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow instructions and attempt to make the best use of the limited bandwidth for transmission? The system concept, current status of the breadboard system, and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  1. Ice-Borehole Probe

    NASA Technical Reports Server (NTRS)

    Behar, Alberto; Carsey, Frank; Lane, Arthur; Engelhardt, Herman

    2006-01-01

    An instrumentation system has been developed for studying interactions between a glacier or ice sheet and the underlying rock and/or soil. Prior borehole imaging systems have been used in well-drilling and mineral-exploration applications and for studying relatively thin valley glaciers, but have not been used for studying thick ice sheets like those of Antarctica. The system includes a cylindrical imaging probe that is lowered into a hole that has been bored through the ice to the ice/bedrock interface by use of an established hot-water-jet technique. The images acquired by the cameras yield information on the movement of the ice relative to the bedrock and on visible features of the lower structure of the ice sheet, including ice layers formed at different times, bubbles, and mineralogical inclusions. At the time of reporting the information for this article, the system was just deployed in two boreholes on the Amery ice shelf in East Antarctica and after successful 2000 2001 deployments in 4 boreholes at Ice Stream C, West Antarctica, and in 2002 at Black Rapids Glacier, Alaska. The probe is designed to operate at temperatures from 40 to +40 C and to withstand the cold, wet, high-pressure [130-atm (13.20-MPa)] environment at the bottom of a water-filled borehole in ice as deep as 1.6 km. A current version is being outfitted to service 2.4-km-deep boreholes at the Rutford Ice Stream in West Antarctica. The probe (see figure) contains a sidelooking charge-coupled-device (CCD) camera that generates both a real-time analog video signal and a sequence of still-image data, and contains a digital videotape recorder. The probe also contains a downward-looking CCD analog video camera, plus halogen lamps to illuminate the fields of view of both cameras. The analog video outputs of the cameras are converted to optical signals that are transmitted to a surface station via optical fibers in a cable. 
Electric power is supplied to the probe through wires in the cable at a potential of 170 VDC. A DC-to-DC converter steps the supply down to 12 VDC for the lights, cameras, and image-data-transmission circuitry. Heat generated by dissipation of electric power in the probe is removed simply by conduction through the probe housing to the adjacent water and ice.
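The power-distribution choice described in this record (a 170 VDC feed stepped down to 12 VDC at the probe) follows from simple resistive-loss arithmetic. A minimal sketch, with made-up cable and load numbers rather than the probe's actual specifications:

```python
# Sketch: why high-voltage distribution suits a kilometers-long borehole
# cable. All numbers are illustrative assumptions, not probe specifications.

def cable_loss(load_w, supply_v, loop_resistance_ohm):
    """Return (current_A, power_lost_W) for a simple resistive feed."""
    current = load_w / supply_v                 # I = P / V at the probe
    lost = current ** 2 * loop_resistance_ohm   # P_loss = I^2 * R
    return current, lost

R = 50.0   # assumed round-trip conductor resistance of a ~2 km cable
P = 60.0   # assumed probe load in watts

i_hi, loss_hi = cable_loss(P, 170.0, R)   # distribute at 170 VDC: ~6 W lost
i_lo, loss_lo = cable_loss(P, 12.0, R)    # hypothetical 12 VDC feed: 1250 W lost
```

Distributing at 170 VDC and converting locally keeps the cable current, and hence the I²R loss, manageable.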

  2. Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics

    NASA Astrophysics Data System (ADS)

    Furxhi, Orges; Frascati, Joe; Driggers, Ronald

    2018-04-01

Panoramic imaging is inherently a wide-field-of-view problem, while high-sensitivity uncooled Long Wave Infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back-working-distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial off-the-shelf (COTS) low F-number lens. This approach results in low component cost and straightforward integration with pre-calibrated, commercially available cameras and lenses.
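Since the PIRO relay forms a donut-shaped image that a conventional camera then captures, the panoramic strip must be recovered by a polar-to-rectangular remap. A minimal nearest-neighbor sketch on a tiny synthetic image (the geometry and values are invented for illustration; the record does not describe PIRO's actual processing):

```python
import math

# Sketch: unwrapping a donut-shaped panoramic image into a rectangular
# strip. The center and radii below are stand-ins, not the PIRO design.

def unwrap_donut(img, cx, cy, r_in, r_out, out_w, out_h):
    """Nearest-neighbor remap: output rows = radius, columns = azimuth."""
    out = [[0] * out_w for _ in range(out_h)]
    for row in range(out_h):
        r = r_in + (r_out - r_in) * row / max(out_h - 1, 1)
        for col in range(out_w):
            theta = 2 * math.pi * col / out_w
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= y < len(img) and 0 <= x < len(img[0]):
                out[row][col] = img[y][x]
    return out

# Tiny synthetic example: a 9x9 "image" whose value encodes the pixel's x.
img = [[x for x in range(9)] for _ in range(9)]
strip = unwrap_donut(img, cx=4, cy=4, r_in=2, r_out=4, out_w=8, out_h=3)
```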

  3. Optical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S S; Seppala, L; Gilmore, K

    2008-07-16

The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a camera system that includes a set of broad-band filters and refractive corrector lenses to produce a flat focal plane with a field of view of 9.6 square degrees. Optical design of the camera lenses and filters is integrated with optical design of telescope mirrors to optimize performance, resulting in excellent image quality over the entire field from ultra-violet to near infra-red wavelengths. The LSST camera optics design consists of three refractive lenses with clear aperture diameters of 1.55 m, 1.10 m and 0.69 m, and six interchangeable, broad-band filters with clear aperture diameters of 0.75 m. We describe the methodology for fabricating, coating, mounting and testing these lenses and filters, and we present the results of detailed tolerance analyses, demonstrating that the camera optics will perform to the specifications required to meet their performance goals.
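The 9.6-square-degree field quoted above corresponds to a circular field roughly 3.5 degrees in diameter (the commonly quoted LSST field diameter), which a one-line check confirms:

```python
import math

# The LSST field of view is commonly quoted as a 3.5-degree-diameter circle;
# its area matches the 9.6 square degrees stated in the record.
diameter_deg = 3.5
area_sq_deg = math.pi * (diameter_deg / 2) ** 2   # ~9.62 square degrees
```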

  4. LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone.

    PubMed

    Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung

    2018-05-24

    Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement unit, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO to extract trained features from an input image to predict a marker's location by visible light camera sensor on drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers both using and not using convolutional neural network in terms of both accuracy and processing time.

  5. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has big potential. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software; one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world co-ordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
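The calibration step described in this record maps image pixels to ground-plane coordinates via a planar homography, from which object speed follows. A minimal sketch (the matrix below is a made-up toy example, not the Vienna installation's calibration):

```python
# Sketch: pixel-to-ground-plane mapping with a 3x3 homography, then speed
# estimation from two tracked positions. H is illustrative only.

def apply_homography(H, x, y):
    """Map pixel (x, y) to world coordinates via a 3x3 homography."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    X = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    Y = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return X, Y

# Toy homography: 100 px on the image = 1 m on the ground plane.
H = [[0.01, 0, 0], [0, 0.01, 0], [0, 0, 1]]

# A tracked point moves 150 px in 0.5 s -> 1.5 m / 0.5 s = 3 m/s.
x0, y0 = apply_homography(H, 100, 200)
x1, y1 = apply_homography(H, 250, 200)
speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / 0.5
```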

  6. Opto-mechanical design of the G-CLEF flexure control camera system

    NASA Astrophysics Data System (ADS)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and the focus errors within the GCFEA. The FCC consists of five optical components: a collimator, including triple lenses for producing a pupil; neutral density filters, which allow a much brighter star to be used as a target or a guide; a tent prism, serving as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified after the PDR in April 2015.
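A guide camera like the FCC measures flexure as a drift of star images on the detector, and the standard estimator for such a position is an intensity-weighted centroid. A minimal sketch on a synthetic frame (the FCC's actual algorithm is not described in this record):

```python
# Sketch: intensity-weighted centroid of a star image, the usual way a
# guide camera turns a frame into a position for flexure/focus control.

def centroid(frame):
    """Intensity-weighted centroid (x, y) of a 2D pixel array."""
    total = sx = sy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total

# A faint uniform background with a bright "star" at pixel (x=3, y=1).
frame = [[1] * 5 for _ in range(5)]
frame[1][3] = 100
cx, cy = centroid(frame)   # pulled close to (3, 1) by the bright pixel
```

Tracking (cx, cy) frame to frame gives the drift that the flexure control loop must take out.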

  7. Airborne ballistic camera tracking systems

    NASA Technical Reports Server (NTRS)

    Redish, W. L.

    1976-01-01

    An operational airborne ballistic camera tracking system was tested for operational and data reduction feasibility. The acquisition and data processing requirements of the system are discussed. Suggestions for future improvements are also noted. A description of the data reduction mathematics is outlined. Results from a successful reentry test mission are tabulated. The test mission indicated that airborne ballistic camera tracking systems are feasible.

  8. Development of an Extra-vehicular (EVA) Infrared (IR) Camera Inspection System

    NASA Technical Reports Server (NTRS)

    Gazarik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Pandolf, John; Jenkins, Rusty; Yates, Rusty

    2006-01-01

Designed to fulfill a critical inspection need for the Space Shuttle Program, the EVA IR Camera System can detect cracks and subsurface defects in the Reinforced Carbon-Carbon (RCC) sections of the Space Shuttle's Thermal Protection System (TPS). The EVA IR Camera performs this detection by taking advantage of the natural thermal gradients induced in the RCC by solar flux and thermal emission from the Earth. This instrument is a compact, low-mass, low-power solution (1.2 cm3, 1.5 kg, 5.0 W) for TPS inspection that exceeds existing requirements for feature detection. Taking advantage of ground-based IR thermography techniques, the EVA IR Camera System provides the Space Shuttle program with a solution that can be accommodated by the existing inspection system. The EVA IR Camera System augments the visible and laser inspection systems and finds cracks and subsurface damage that are not measurable by the other sensors, and thus fills a critical gap in the Space Shuttle's inspection needs. This paper discusses the on-orbit RCC inspection measurement concept and requirements, and then presents a detailed description of the EVA IR Camera System design.

  9. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.
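The combination of qualitative and quantitative analysis described above can be organized as a simple weighted scoring matrix. A sketch in which the criteria, weights, and ratings are invented placeholders, not the Alaska program's actual values:

```python
# Sketch: weighted scoring for camera selection. Criteria mirror the
# features listed in the record; all numbers are illustrative.

weights = {"image_quality": 0.4, "ease_of_use": 0.2,
           "macro": 0.2, "transfer": 0.1, "power": 0.1}

cameras = {
    "camera_A": {"image_quality": 9, "ease_of_use": 7, "macro": 8,
                 "transfer": 6, "power": 7},
    "camera_B": {"image_quality": 7, "ease_of_use": 9, "macro": 6,
                 "transfer": 9, "power": 8},
}

scores = {name: sum(weights[c] * ratings[c] for c in weights)
          for name, ratings in cameras.items()}
best = max(scores, key=scores.get)   # camera_A wins on image quality here
```

Weighting image quality heavily reflects the record's emphasis on color (skin tones) and detail for telemedicine.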

  10. OSIRIS-REx Asteroid Sample Return Mission Image Analysis

    NASA Astrophysics Data System (ADS)

    Chevres Fernandez, Lee Roger; Bos, Brent

    2018-01-01

NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners. NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch-And-Go Camera System (TAGCAMS) three-camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB. Assessment of the TAGCAMS in-flight performance using flight imagery was done to characterize camera performance. One specific area of investigation was bad pixel mapping. A recent phase of the mission, known as the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable” pixels, possibly under-responsive, using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu. These analyses will provide a broader understanding of the functionality of the camera system, which will in turn aid in the fly-down to the asteroid by allowing selection of a suitable landing and sampling location.
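Bad-pixel mapping of the kind described above is often done by comparing each pixel's level across a stack of frames against robust statistics. A minimal median/MAD sketch on synthetic dark frames (a generic stand-in, not the mission's MATLAB code):

```python
import statistics

# Sketch: flag "questionable" pixels whose stack-averaged level deviates
# from the global median by more than k median-absolute-deviations.
# The 4x4 frames and the hot pixel below are synthetic.

def flag_bad_pixels(stack, k=5.0):
    """Return (row, col) of pixels deviating > k MADs from the median."""
    rows, cols = len(stack[0]), len(stack[0][0])
    means = [[statistics.mean(frame[r][c] for frame in stack)
              for c in range(cols)] for r in range(rows)]
    flat = [v for row in means for v in row]
    med = statistics.median(flat)
    mad = statistics.median(abs(v - med) for v in flat) or 1.0
    return [(r, c) for r, row in enumerate(means)
            for c, v in enumerate(row) if abs(v - med) > k * mad]

# Three synthetic dark frames with one hot pixel at (row=2, col=1).
stack = [[[10] * 4 for _ in range(4)] for _ in range(3)]
for frame in stack:
    frame[2][1] = 250
bad = flag_bad_pixels(stack)
```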

  11. Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application

    DOEpatents

    Davidson, J.B.; Case, A.L.

    1981-12-30

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.

  12. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

Current capsule endoscopes use one camera to capture surface images of the intestine. Such a system can observe an abnormal point but cannot recover exact information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so the two cameras cannot capture the image information completely. To solve this problem, this research provides a new kind of capsule endoscope for capturing 3D images: 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is an increase in viewing range of up to 2.99 times with respect to the two-camera system. Combined with a 3D monitor, the system provides exact information about symptom points, helping doctors diagnose disease.
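The 3D capability of a multi-camera capsule rests on stereo triangulation: for a rectified camera pair, depth follows from disparity as Z = f·B/d. A sketch with illustrative numbers, not the endoscope's actual optics:

```python
# Sketch: rectified-stereo depth from disparity. Focal length (pixels)
# and baseline (mm) below are illustrative assumptions.

def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Triangulated depth (mm) for a rectified stereo camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_mm / disparity_px

# f = 400 px, cameras 6 mm apart, feature shifted 30 px between views:
z = depth_from_disparity(400.0, 6.0, 30.0)   # 80 mm from the cameras
```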

  13. 23 CFR 1200.13 - Special funding conditions for Section 402 grants.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    .... (b) Automated traffic enforcement systems prohibition. The State may not expend funds apportioned to the State under 23 U.S.C. 402 to carry out a program to purchase, operate, or maintain an automated traffic enforcement system. The term “automated traffic enforcement system” includes any camera which...

  14. 23 CFR 1200.13 - Special funding conditions for Section 402 grants.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    .... (b) Automated traffic enforcement systems prohibition. The State may not expend funds apportioned to the State under 23 U.S.C. 402 to carry out a program to purchase, operate, or maintain an automated traffic enforcement system. The term “automated traffic enforcement system” includes any camera which...

  15. A Versatile Time-Lapse Camera System Developed by the Hawaiian Volcano Observatory for Use at Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Orr, Tim R.; Hoblitt, Richard P.

    2008-01-01

Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With the recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the sophisticated telemetry required by the networked camera systems is not practical. This report describes the latest generation (as of 2008) of time-lapse camera systems used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.

  16. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

Gaze tracking is the technology that identifies the region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it publicly available. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
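The depth-of-field trade-off discussed above follows from the standard thin-lens DOF formulas. A sketch with illustrative lens and distance values (not the paper's actual camera parameters):

```python
# Sketch: near/far limits of acceptable focus from the standard hyperfocal
# formulas. Lens, circle-of-confusion, and distance values are illustrative.

def depth_of_field(f_mm, N, coc_mm, subject_mm):
    """Near/far focus limits (mm) for focal length f, f-number N,
    circle of confusion coc, and subject distance (all in mm)."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm        # hyperfocal distance
    near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
    far = (subject_mm * (H - f_mm) / (H - subject_mm)
           if subject_mm < H else float("inf"))
    return near, far

# 50 mm lens at f/2, 0.015 mm CoC, user seated ~600 mm from the camera:
near, far = depth_of_field(50.0, 2.0, 0.015, 600.0)
dof_mm = far - near   # only a few mm of usable focus at this range
```

A thin DOF like this is why lens choice must be matched to the expected range of head movement.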

  17. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

Gaze tracking is the technology that identifies the region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it publicly available. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users’ head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  18. The Orbiter camera payload system's large-format camera and attitude reference system

    NASA Technical Reports Server (NTRS)

    Schardt, B. B.; Mollberg, B. H.

    1985-01-01

    The Orbiter camera payload system (OCPS) is an integrated photographic system carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a large-format camera (LFC), a precision wide-angle cartographic instrument capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. A secondary and supporting system to the LFC is the attitude reference system (ARS), a dual-lens stellar camera array (SCA) and camera support structure. The SCA is a 70 mm film system that is rigidly mounted to the LFC lens support structure and, through the simultaneous acquisition of two star fields with each earth viewing LFC frame, makes it possible to precisely determine the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high-precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment. The full OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on Oct. 5, 1984, as a major payload aboard the STS-41G mission.

  19. Modification of Instructional Delivery and Student Learning with the Use of Educational Technologies

    ERIC Educational Resources Information Center

    Rogers, Jeffrey Ray

    2012-01-01

    The purpose of this study was to determine if educational technologies, including LCD projectors, interactive whiteboards, tablets, document cameras, and student response systems, modify instructional delivery and student learning. This case study was researched in four classrooms, including an English, math, science and social studies classroom…

  20. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    USGS Publications Warehouse

    Hobbs, Michael T.; Brehme, Cheryl S.

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  1. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    PubMed

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  2. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  3. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  4. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  5. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  6. Single chip camera active pixel sensor

    NASA Technical Reports Server (NTRS)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

A totally digital single-chip camera includes communications circuitry to operate most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.
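The D/A conversion step mentioned above maps an input digital word onto an analog reference level. An idealized sketch (the bit depth and reference voltage are illustrative, not the patent's values):

```python
# Sketch: ideal n-bit DAC mapping a digital word onto 0..v_ref volts.
# Bit depth and reference voltage are illustrative assumptions.

def dac_output(word, bits=8, v_ref=3.3):
    """Ideal DAC: map an n-bit word linearly onto 0..v_ref volts."""
    if not 0 <= word < 2 ** bits:
        raise ValueError("word out of range")
    return v_ref * word / (2 ** bits - 1)

v = dac_output(128)   # mid-scale 8-bit word -> roughly half of v_ref
```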

  7. ARNICA, the Arcetri Near-Infrared Camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Bilotti, V.; Bonaccini, D.; del Vecchio, C.; Gennari, S.; Hunt, L. K.; Marcucci, G.; Stanga, R.

    1996-04-01

ARNICA (ARcetri Near-Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 microns that the Arcetri Observatory has designed and built for the Infrared Telescope TIRGO located at Gornergrat, Switzerland. We describe the mechanical and optical design of the camera, and report on the astronomical performance of ARNICA as measured during the commissioning runs at the TIRGO (December 1992 to December 1993) and an observing run at the William Herschel Telescope, Canary Islands (December 1993). System performance is defined in terms of the efficiency of the camera+telescope system and camera sensitivity for extended and point-like sources. (SECTION: Astronomical Instrumentation)

  8. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicing in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, in which not all cameras need to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is applied to blend the images. Simulation results demonstrate the efficiency of our method.
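After registration via the homography, overlapping pixels must be blended; a linear feather across the seam is one common choice (the paper's boundary resampling step is not specified in detail, so this is a generic stand-in):

```python
# Sketch: linear feather blend of two image rows sharing an overlap.
# The row values and overlap width below are illustrative.

def feather_blend(row_a, row_b, overlap):
    """Blend two image rows that share `overlap` pixels at the seam."""
    left = row_a[:-overlap]
    right = row_b[overlap:]
    seam = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)              # ramp weight toward row_b
        a = row_a[len(row_a) - overlap + i]
        b = row_b[i]
        seam.append((1 - w) * a + w * b)
    return left + seam + right

# Two rows overlapping by 2 px: values ramp smoothly from 10 to 30.
merged = feather_blend([10, 10, 10, 10], [30, 30, 30, 30], overlap=2)
```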

  9. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which no cameras need to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.

  10. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality, or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how the players moved. These results will also be reported.

  11. Design of a portable optical emission tomography system for microwave induced compact plasma for visible to near-infrared emission lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathore, Kavita, E-mail: kavira@iitk.ac.in; Munshi, Prabhat, E-mail: pmunshi@iitk.ac.in; Bhattacharjee, Sudeep, E-mail: sudeepb@iitk.ac.in

    A new non-invasive diagnostic system is developed for Microwave Induced Plasma (MIP) to reconstruct tomographic images of a 2D emission profile. Compact MIP systems have wide application in industry as well as in research, for example as thrusters for space propulsion, high-current ion beams, and sources of negative ions for heating of fusion plasma. The emission profile depends on two crucial parameters, namely, the electron temperature and density (over the entire spatial extent) of the plasma system. Emission tomography provides a basic understanding of plasmas and is very useful for monitoring the internal structure of plasma phenomena without disturbing the actual processes. This paper presents the development of a compact, modular, and versatile Optical Emission Tomography (OET) tool for a cylindrical, magnetically confined MIP system. It has eight slit-hole cameras, each consisting of a complementary metal-oxide-semiconductor linear image sensor for light detection. The optical noise is reduced by using an aspheric lens and interference band-pass filters in each camera. The entire cylindrical plasma can be scanned with an automated sliding-ring mechanism arranged in fan-beam data collection geometry. The design of the camera includes a unique possibility to incorporate different filters to select a particular wavelength of light from the plasma. This OET system includes band-pass filters selected for the argon emission lines at 750 nm, 772 nm, and 811 nm and the hydrogen emission lines Hα (656 nm) and Hβ (486 nm). A convolution back projection algorithm is used to obtain the tomographic images of the plasma emission lines. The paper mainly focuses on (a) the design of the OET system in detail and (b) a study of the emission profile for the 750 nm argon emission line to validate the system design.
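
    The reconstruction step can be illustrated with a toy unfiltered back projection. This NumPy sketch assumes parallel-beam geometry for brevity (the system described collects fan-beam data, and convolution back projection additionally filters each projection before smearing it across the grid):

```python
import numpy as np

def back_project(sinogram, angles, size):
    """Smear each 1D projection back across a size x size grid and sum.
    sinogram[i] is the projection taken at angles[i] (radians)."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - c, ys - c
    for proj, theta in zip(sinogram, angles):
        # Detector coordinate of each grid point for this view.
        t = xs * np.cos(theta) + ys * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon / len(angles)
```

    A delta function at the detector center back-projects to a peak at the grid center, which is a quick sanity check on the geometry.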

  12. Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)

    NASA Astrophysics Data System (ADS)

    Mollberg, Bernard H.

    1981-11-01

    The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.

  13. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This setup provides autonomous camera control that works around the surgeon, yielding an optimal robotic camera platform.

  14. Active imaging system with Faraday filter

    DOEpatents

    Snyder, James J.

    1993-01-01

    An active imaging system has a low to medium powered laser transmitter and receiver wherein the receiver includes a Faraday filter with an ultranarrow optical bandpass and a bare (nonintensified) CCD camera. The laser is locked in the vicinity of the passband of the Faraday filter. The system has high sensitivity to the laser illumination while eliminating solar background.

  15. Active imaging system with Faraday filter

    DOEpatents

    Snyder, J.J.

    1993-04-13

    An active imaging system has a low to medium powered laser transmitter and receiver wherein the receiver includes a Faraday filter with an ultranarrow optical bandpass and a bare (nonintensified) CCD camera. The laser is locked in the vicinity of the passband of the Faraday filter. The system has high sensitivity to the laser illumination while eliminating solar background.

  16. Novel algorithm implementations in DARC: the Durham AO real-time controller

    NASA Astrophysics Data System (ADS)

    Basden, Alastair; Bitenc, Urban; Jenkins, David

    2016-07-01

    The Durham AO Real-time Controller has been used on-sky with the CANARY AO demonstrator instrument since 2010, and is also used to provide control for several AO test-benches, including DRAGON. Over this period, many new real-time algorithms have been developed, implemented and demonstrated, leading to performance improvements for CANARY. Additionally, the computational performance of this real-time system has continued to improve. Here, we provide details about recent updates and changes made to DARC, and the relevance of these updates, including new algorithms, to forthcoming AO systems. We present the computational performance of DARC when used on different hardware platforms, including hardware accelerators, and determine the relevance and potential for ELT scale systems. Recent updates to DARC have included algorithms to handle elongated laser guide star images, including correlation wavefront sensing, with options to automatically update references during AO loop operation. Additionally, sub-aperture masking options have been developed to increase signal to noise ratio when operating with non-symmetrical wavefront sensor images. The development of end-user tools has progressed with new options for configuration and control of the system. New wavefront sensor camera models and DM models have been integrated with the system, increasing the number of possible hardware configurations available, and a fully open-source AO system is now a reality, including drivers necessary for commercial cameras and DMs. The computational performance of DARC makes it suitable for ELT scale systems when implemented on suitable hardware. We present tests made on different hardware platforms, along with the strategies taken to optimise DARC for these systems.

  17. Continuous monitoring of Hawaiian volcanoes using thermal cameras

    NASA Astrophysics Data System (ADS)

    Patrick, M. R.; Orr, T. R.; Antolik, L.; Lee, R.; Kamibayashi, K.

    2012-12-01

    Thermal cameras are becoming more common at volcanoes around the world, and have become a powerful tool for observing volcanic activity. Fixed, continuously recording thermal cameras have been installed by the Hawaiian Volcano Observatory in the last two years at four locations on Kilauea Volcano to better monitor its two ongoing eruptions. The summit eruption, which began in March 2008, hosts an active lava lake deep within a fume-filled vent crater. A thermal camera perched on the rim of Halema`uma`u Crater, acquiring an image every five seconds, has now captured about two years of sustained lava lake activity, including frequent lava level fluctuations, small explosions, and several draining events. This thermal camera has been able to "see" through the thick fume in the crater, providing truly 24/7 monitoring that would not be possible with normal webcams. The east rift zone eruption, which began in 1983, has chiefly consisted of effusion through lava tubes onto the surface, but over the past two years has been interrupted by an intrusion, lava fountaining, crater collapse, and perched lava lake growth and draining. The three thermal cameras on the east rift zone, all on Pu`u `O`o cone and acquiring an image every several minutes, have captured many of these changes and are providing an improved means of alerting observatory staff to new activity. Plans are underway to install a thermal camera at the summit of Mauna Loa to monitor the volcano and provide alerts to any future changes there. Thermal cameras are more difficult to install, and image acquisition and processing are more complicated, than with visual webcams. Our system is based in part on the successful thermal camera installations by Italian volcanologists on Stromboli and Vulcano. Equipment includes custom enclosures with IR-transmissive windows, power, and telemetry. Data acquisition is based on ActiveX controls, and data management is done using automated Matlab scripts.
Higher-level data processing, also done with Matlab, includes automated measurements of lava lake level and surface crust velocity, tracking of temperatures and hot areas in real time, and alerts that notify users of notable temperature increases via text messaging. Lastly, real-time image and processed data display, which is vital for effective use of the images at the observatory, is done through a custom Web-based environment. Near real-time webcam images are displayed for the public at hvo.wr.usgs.gov/cams. Thermal cameras are costly, but have proven to be an extremely effective monitoring and research tool at the Hawaiian Volcano Observatory.
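
    The alerting logic described above (track hot-area statistics and notify on unusual temperature rises) can be sketched in a few lines. The original pipeline is written in Matlab; this Python version uses an invented median-baseline rule and illustrative thresholds, not HVO's actual parameters:

```python
import numpy as np

def hot_area_fraction(frame, hot_threshold):
    """Fraction of pixels above a temperature threshold in one
    thermal frame (frame values in degrees)."""
    frame = np.asarray(frame, dtype=float)
    return float((frame > hot_threshold).mean())

def should_alert(history, current, rise=30.0):
    """Alert when the current frame maximum exceeds the median of
    recent frame maxima by more than `rise` degrees."""
    baseline = float(np.median([np.asarray(h).max() for h in history]))
    return float(np.asarray(current).max()) - baseline > rise
```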

  18. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by feeding both the visible light and FIR camera images into the CNN. This, however, takes longer to process, and makes the system structure more complex, as the CNN needs to process both camera images. This research adaptively selects the more appropriate candidate between two pedestrian images from the visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
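
    The selection idea (score each candidate with fuzzy memberships over image statistics, then hand the winner to the CNN for verification) can be sketched as follows. The features, membership shapes, and rule here are invented for illustration; the paper's actual FIS inputs and rule base differ:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def select_camera(visible_brightness, thermal_contrast):
    """Toy fuzzy selection on [0, 1] feature scales: prefer the visible
    image when the scene is bright, the FIR image when thermal contrast
    is high (rules are illustrative only)."""
    bright = tri(visible_brightness, 0.3, 1.0, 1.7)   # shoulder toward high
    contrast = tri(thermal_contrast, 0.3, 1.0, 1.7)
    return "visible" if bright >= contrast else "fir"
```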

  19. Thermal tracking in mobile robots for leak inspection activities.

    PubMed

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-10-09

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to cope with the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, with the added tracking algorithm improving the performance of the leakage detection system.
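
    The particle filter used for tracking can be illustrated in one dimension. This is a generic predict/update/resample cycle with a random-walk motion model and a Gaussian measurement likelihood, not the MAINBOT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=1.0, meas_std=2.0):
    """One predict/update/resample cycle of a minimal 1D particle filter."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: Gaussian likelihood of the measurement.
    weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

def estimate(particles, weights):
    """Weighted-mean state estimate, e.g. to aim the camera arm."""
    return float(np.sum(particles * weights))
```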

  20. Thermal Tracking in Mobile Robots for Leak Inspection Activities

    PubMed Central

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-01-01

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to cope with the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, with the added tracking algorithm improving the performance of the leakage detection system. PMID:24113684

  1. Lymphoscintigraphy

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  2. Hepatobiliary

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  3. UCam: universal camera controller and data acquisition system

    NASA Astrophysics Data System (ADS)

    McLay, S. A.; Bezawada, N. N.; Atkinson, D. C.; Ives, D. J.

    2010-07-01

    This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam), which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC) of San Diego, USA. UCam provides an alternative software solution, programmed in C/C++ and Python, that runs on a real-time Linux operating system to achieve the critical speed performance needed for high time resolution instrumentation. UCam is a server-based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user-friendly client application interface that has several features, including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in Python or as text files. UCam has an application-centric design where custom applications for different types of detectors and readout modes can be developed, downloaded and executed on the ARC controller. The built-in de-multiplexer can be easily reconfigured to read out any number of channels for almost any type of detector. It also provides support for numerous sampling modes such as CDS, Fowler, NDR and threshold-limited NDR. UCam has been developed over several years for use on many instruments, such as the Wide Field Infra Red Camera (WFCAM) at UKIRT in Hawaii and the mid-IR imager/spectrometer UIST, and is also used on instruments at SUBARU, Gemini and Palomar.
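
    Of the sampling modes listed, CDS and Fowler sampling reduce read noise through simple frame arithmetic, which can be sketched as follows (a schematic of the technique in NumPy, not UCam code):

```python
import numpy as np

def cds(reset_frame, signal_frame):
    """Correlated double sampling: subtracting the reset read from the
    signal read removes the per-pixel reset (kTC) offset."""
    return np.asarray(signal_frame, dtype=float) - np.asarray(reset_frame, dtype=float)

def fowler(reads):
    """Fowler-N sampling: average the last N reads minus the first N
    reads of a non-destructive ramp (`reads` holds 2N frames), which
    suppresses read noise by roughly sqrt(N)."""
    reads = np.asarray(reads, dtype=float)
    n = len(reads) // 2
    return reads[n:].mean(axis=0) - reads[:n].mean(axis=0)
```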

  4. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

    The recent necessity to protect military bases, convoys and patrols gave a serious impetus to the development of multisensor security systems for perimeter protection. Among the most important devices used in such systems are IR cameras. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, while simultaneously decreasing the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities of detecting a human being. A comparison of commercially available IR cameras capable of achieving the desired ranges was done. The required spatial resolution for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
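
    The resolution-driven part of such a range estimate reduces to simple geometry: a target of critical dimension d viewed through a sensor with instantaneous field of view IFOV spans d/(R·IFOV) pixels at range R, and each resolved spatial cycle needs two pixels. A sketch, using the commonly quoted Johnson cycle counts (the TTP metric refines these with image-quality terms, which are omitted here):

```python
def max_range(target_size_m, cycles, ifov_rad):
    """Geometric range at which `cycles` spatial cycles fit across a
    target (one cycle = two detector samples). Sensor-limited only:
    atmosphere and contrast are ignored, so this is an upper bound."""
    return target_size_m / (2.0 * cycles * ifov_rad)

# Classic Johnson criteria (cycles across the critical dimension).
JOHNSON = {"detection": 1.0, "recognition": 4.0, "identification": 8.0}
```

    For example, a 1.8 m target viewed with a 0.34 mrad IFOV gives a detection-limited range of about 2.6 km, dropping by 4x for recognition and 8x for identification.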

  5. Three-Dimensional Digital Image Correlation of a Composite Overwrapped Pressure Vessel During Hydrostatic Pressure Tests

    NASA Technical Reports Server (NTRS)

    Revilock, Duane M., Jr.; Thesken, John C.; Schmidt, Timothy E.

    2007-01-01

    Ambient temperature hydrostatic pressurization tests were conducted on a composite overwrapped pressure vessel (COPV) to understand the fiber stresses in COPV components. Two three-dimensional digital image correlation systems with high-speed cameras were used in the evaluation to provide full-field displacement and strain data for each pressurization test. A few of the key findings are discussed, including how the principal strains provided better insight into system behavior than traditional gauges, a highly localized strain that was measured where gauges were not present, and the challenges of measuring curved surfaces through the 1.25 in. thick layered polycarbonate panel that protected the cameras.

  6. High dynamic range CMOS (HDRC) imagers for safety systems

    NASA Astrophysics Data System (ADS)

    Strobel, Markus; Döttling, Dietmar

    2013-04-01

    The first part of this paper describes the high dynamic range CMOS (HDRC®) imager, a special type of CMOS image sensor with logarithmic response. The powerful property of high dynamic range (HDR) image acquisition is detailed by mathematical definition and by measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters are discussed, including the pixel design for the global shutter readout. The second part gives an outline of the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors, SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring, enabling new and more flexible solutions compared to existing safety guards.

  7. Graphic overlays in high-precision teleoperation: Current and future work at JPL

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1989-01-01

    In space teleoperation, additional problems arise, including signal transmission time delays, which can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently a multi-camera system with normal 3-D TV and video graphics capabilities is being developed. Trained and untrained operators will be tested for high-precision performance using two force-reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques for integrating TV and video graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.

  8. Electronic magnification for astronomical camera tubes

    NASA Technical Reports Server (NTRS)

    Vine, J.; Hansen, J. R.; Pietrzyk, J. P.

    1974-01-01

    Definitions, test schemes, and analyses used to provide variable magnification in the image section of the television sensor for large space telescopes are outlined. Experimental results show a definite form of magnetic field distribution is necessary to achieve magnification in the range 3X to 4X. Coil systems to establish the required field shapes were built, and both image intensifiers and camera tubes were operated at high magnification. The experiments confirm that such operation is practical and can provide satisfactory image quality. The main problem with such a system was identified as heating of the photocathode due to concentration of coil power dissipation in that vicinity. Suggestions for overcoming this disadvantage are included.

  9. Video model deformation system for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.

  10. Evaluation of thermal cameras in quality systems according to ISO 9000 or EN 45000 standards

    NASA Astrophysics Data System (ADS)

    Chrzanowski, Krzysztof

    2001-03-01

    According to the international standards ISO 9001-9004 and EN 45001-45003, industrial plants and accreditation laboratories that have implemented quality systems according to these standards are required to evaluate the uncertainty of their measurements. Manufacturers of thermal cameras do not offer any data that would enable estimation of the measurement uncertainty of these imagers. This difficulty in determining the measurement uncertainty is an important limitation of thermal cameras for applications in industrial plants and the cooperating accreditation laboratories that have implemented these quality systems. A set of parameters for characterizing commercial thermal cameras, a measuring setup, some results of testing these cameras, a mathematical model of uncertainty, and software that enables quick calculation of the uncertainty of temperature measurements with thermal cameras are presented in this paper.
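
    The core of any such uncertainty model is the standard GUM-style combination of independent error components in quadrature. The paper's model contains many camera-specific components (emissivity, ambient temperature, noise, calibration); the helpers below only illustrate the combination step itself:

```python
import math

def combined_uncertainty(components):
    """Combine independent standard-uncertainty components in quadrature
    (root sum of squares), as in the GUM for uncorrelated inputs."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty with coverage factor k (k = 2 corresponds to
    roughly 95% coverage for a normal distribution)."""
    return k * combined_uncertainty(components)
```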

  11. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements, due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperatures without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Techonolgy. Compared to the original cameras, the iXon3 has a factor five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T≍-40° C, p≍ 0:1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3, suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. These changes include replacement of electrical components with MIL-SPEC or industrial grade components and various system optimizations, a new data interface that allows the image data transmission over 30m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages of the camera locally and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T=-62° C and pressure equivalent to 50 000 ft altitude. In this paper, we will report about the development of the camera and present results from the environmental testing.

  12. Defining ray sets for the analysis of lenslet-based optical systems including plenoptic cameras and Shack-Hartmann wavefront sensors

    NASA Astrophysics Data System (ADS)

    Moore, Lori

    Plenoptic cameras and Shack-Hartmann wavefront sensors are lenslet-based optical systems that do not form a conventional image. The addition of a lens array into these systems allows for the aberrations generated by the combination of the object and the optical components located prior to the lens array to be measured or corrected with post-processing. This dissertation provides a ray selection method to determine the rays that pass through each lenslet in a lenslet-based system. This first-order, ray trace method is developed for any lenslet-based system with a well-defined fore optic, where in this dissertation the fore optic is all of the optical components located prior to the lens array. For example, in a plenoptic camera the fore optic is a standard camera lens. Because a lens array at any location after the exit pupil of the fore optic is considered in this analysis, it is applicable to both plenoptic cameras and Shack-Hartmann wavefront sensors. Only a generic, unaberrated fore optic is considered, but this dissertation establishes a framework for considering the effect of an aberrated fore optic in lenslet-based systems. The rays from the fore optic that pass through a lenslet placed at any location after the fore optic are determined. This collection of rays is reduced to three rays that describe the entire lenslet ray set. The lenslet ray set is determined at the object, image, and pupil planes of the fore optic. The consideration of the apertures that define the lenslet ray set for an on-axis lenslet leads to three classes of lenslet-based systems. Vignetting of the lenslet rays is considered for off-axis lenslets. Finally, the lenslet ray set is normalized into terms similar to the field and aperture vector used to describe the aberrated wavefront of the fore optic. 
The analysis in this dissertation is complementary to other first-order models that have been developed for a specific plenoptic camera layout or Shack-Hartmann wavefront sensor application. This general analysis determines the location where the rays of each lenslet pass through the fore optic establishing a framework to consider the effect of an aberrated fore optic in a future analysis.

  13. Feedback from video for virtual reality Navigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V

    2000-10-27

Important preconditions for wide acceptance of virtual reality (VR) systems include their comfort, ease, and naturalness of use. Most existing trackers suffer from discomfort-related issues. For example, body-based trackers (hand controllers, joysticks, helmet attachments, etc.) restrict spontaneity and naturalness of motion, while ground-based devices (e.g., hand controllers) limit the workspace by literally binding an operator to the ground. There are similar problems with controls. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed and robust skin-color segmentation for accommodating illumination variations.
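The skin-color segmentation step can be sketched with a normalized-chromaticity threshold, a common way to discount illumination changes. The threshold values below are illustrative, not the paper's tuned values:

```python
import numpy as np

def skin_mask(rgb):
    """Rough skin segmentation in normalized-rg chromaticity space;
    dividing by per-pixel brightness discounts illumination changes.
    Threshold values are illustrative, not taken from the paper."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1) + 1e-9            # per-pixel brightness
    r, g = rgb[..., 0] / s, rgb[..., 1] / s
    return (r > 0.36) & (r < 0.60) & (g > 0.25) & (g < 0.38)

# A 1x2 test image: one skin-like pixel, one blue pixel
img = np.array([[[200, 120, 90], [20, 40, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```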

  14. Toward detection of marine vehicles on horizon from buoy camera

    NASA Astrophysics Data System (ADS)

    Fefilatyev, Sergiy; Goldgof, Dmitry B.; Langebrake, Lawrence

    2007-10-01

This paper presents a new technique for automatic detection of marine vehicles in the open sea from a buoy camera system using a computer vision approach. Users of such a system include border guards, the military, port safety and flow management, and sanctuary protection personnel. The system is intended to work autonomously, taking images of the surrounding ocean surface and analyzing them for the presence of marine vehicles. The goal of the system is to detect an approximate window around the ship and prepare the small image for transmission and human evaluation. The proposed computer vision-based algorithm combines a horizon detection method with edge detection and post-processing. A dataset of 100 images is used to evaluate the performance of the proposed technique. We discuss promising results of ship detection and suggest necessary improvements for achieving better performance.
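A minimal stand-in for the horizon-detection step (not the authors' algorithm) picks the row with the largest mean vertical intensity change, exploiting the typical sky/sea brightness contrast:

```python
import numpy as np

def find_horizon_row(gray):
    """Estimate the horizon as the row with the largest mean vertical
    intensity change (sky is typically brighter than sea). This is a
    simplified stand-in for the paper's horizon-detection method."""
    row_means = gray.mean(axis=1)
    return int(np.argmax(np.abs(np.diff(row_means)))) + 1

# Synthetic 20x30 frame: bright sky above row 12, dark sea below
frame = np.full((20, 30), 200.0)
frame[12:, :] = 60.0
horizon = find_horizon_row(frame)
```

Once the horizon row is located, edge detection in a band around it can flag candidate ship windows for cropping and transmission.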

  15. The Advanced Gamma-ray Imaging System (AGIS): Camera Electronics Designs

    NASA Astrophysics Data System (ADS)

    Tajima, H.; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Holder, J.; Horan, D.; Krawczynski, H.; Ong, R.; Swordy, S.; Wagner, R.; Williams, D.

    2008-04-01

AGIS, a next-generation atmospheric Cherenkov telescope array, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such improvement requires cost reduction of individual components with high reliability in order to equip the order of 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of camera electronics while improving their performance. These design concepts include systems based on a multi-channel waveform-sampling ASIC optimized for AGIS, a system based on an IIT (image intensifier tube) for large-channel-count (order of 1 million channels) readout, as well as a multiplexed FADC system based on the current VERITAS readout design. Here we present trade-off studies of these design concepts.

  16. The Advanced Gamma-ray Imaging System (AGIS): Camera Electronics Designs

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyasu; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Holder, J.; Horan, D.; Krawczynski, H.; Ong, R.; Swordy, S.; Wagner, R.; Wakely, S.; Williams, D.; Camera Electronics Working Group; AGIS Collaboration

    2008-03-01

AGIS, a next-generation atmospheric Cherenkov telescope array, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such improvement requires cost reduction of individual components with high reliability in order to equip the order of 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of camera electronics while improving their performance. These design concepts include systems based on a multi-channel waveform-sampling ASIC optimized for AGIS, a system based on an IIT (image intensifier tube) for large-channel-count (order of 1 million channels) readout, as well as a multiplexed FADC system based on the current VERITAS readout design. Here we present trade-off studies of these design concepts.

  17. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-07

STS-109 astronaut Michael J. Massimino, mission specialist, perched on the Shuttle's robotic arm, prepares to install the Electronic Support Module (ESM) in the aft shroud of the Hubble Space Telescope (HST), with the assistance of astronaut James H. Newman (out of frame). The module will support a new experimental cooling system to be installed during the next day's fifth and final space walk of the mission. That cooling system is designed to bring the telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) back to life; the instrument had been dormant since January 1999, when its original coolant ran out. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. In addition to the installation of the experimental cooling system for NICMOS, STS-109 upgrades to the HST included replacement of the solar array panels, replacement of the power control unit (PCU), and replacement of the Faint Object Camera (FOC) with a new Advanced Camera for Surveys (ACS). Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.

  18. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image.
The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
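The lookup-table unwarping idea can be sketched as follows, assuming a simple equidistant fisheye model (r = f·θ) in place of the calibrated pipe-geometry mapping the system compiles from calibration images:

```python
import numpy as np

def build_unwarp_lut(out_h, out_w, cx, cy, f):
    """Precompute source-pixel coordinates that map a fisheye view of a
    cylinder to an unwarped (angle x depth) panel. Uses an assumed
    equidistant fisheye model r = f * theta; the real system compiles
    an analogous table from calibration images of the pipe."""
    phi = np.linspace(0, 2 * np.pi, out_w, endpoint=False)  # around the pipe
    theta = np.linspace(0.2, 1.2, out_h)                    # off-axis angle
    t, p = np.meshgrid(theta, phi, indexing="ij")
    r = f * t
    return cy + r * np.sin(p), cx + r * np.cos(p)           # (src_y, src_x)

def unwarp(img, lut_y, lut_x):
    # Nearest-neighbour sampling through the precomputed lookup table
    ys = np.clip(np.round(lut_y).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(lut_x).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

lut_y, lut_x = build_unwarp_lut(32, 64, cx=128, cy=128, f=100.0)
panel = unwarp(np.zeros((256, 256)), lut_y, lut_x)
```

Because the table is computed once, per-frame unwarping reduces to an array lookup, which is the speed advantage the text describes.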

  19. Configuration of electro-optic fire source detection system

    NASA Astrophysics Data System (ADS)

    Fabian, Ram Z.; Steiner, Zeev; Hofman, Nir

    2007-04-01

The recent fighting activities in various parts of the world have highlighted the need for accurate fire source detection on one hand and fast "sensor to shooter cycle" capabilities on the other. Both needs can be met by the SPOTLITE system, which dramatically enhances the capability to rapidly engage a hostile fire source with a minimum of casualties to friendly forces and to innocent bystanders. The modular system design enables meeting each customer's specific requirements and offers excellent future growth and upgrade potential. The design and build of a fire source detection system is governed by sets of requirements issued by the operators. These can be translated into the following design criteria: I) Long-range, fast, and accurate fire source detection capability. II) Different threat detection and classification capability. III) Threat investigation capability. IV) Fire source data distribution capability (location, direction, video image, voice). V) Man portability. In order to meet these design criteria, an optimized concept was presented and exercised for the SPOTLITE system. Three major modular components were defined: I) Electro-Optical Unit, including a FLIR camera, CCD camera, laser range finder, and marker. II) Electronic Unit, including the system computer and electronics. III) Controller Station Unit, including the HMI of the system. This article discusses the system's component definition and optimization processes, and also shows how SPOTLITE designers successfully managed to introduce excellent solutions for other system parameters.

  20. Determination of feature generation methods for PTZ camera object tracking

    NASA Astrophysics Data System (ADS)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
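As a simplified stand-in for the Mixture-of-Gaussians background subtraction compared in the paper, a single running Gaussian per pixel already shows the model-update-and-threshold pattern (parameters below are illustrative):

```python
import numpy as np

class RunningGaussianBG:
    """Single-Gaussian-per-pixel background model: a simplified stand-in
    for the Mixture-of-Gaussians (MoG) method compared in the paper."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full_like(self.mean, 25.0)   # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        d = frame - self.mean
        fg = d * d > (self.k ** 2) * self.var      # foreground test
        # Update the model only where the pixel still looks like background
        self.mean = np.where(fg, self.mean, self.mean + self.alpha * d)
        self.var = np.where(fg, self.var,
                            (1 - self.alpha) * self.var + self.alpha * d * d)
        return fg

bg = RunningGaussianBG(np.full((8, 8), 100.0))
mask = bg.apply(np.full((8, 8), 100.0))        # unchanged scene
frame2 = np.full((8, 8), 100.0)
frame2[2:4, 2:4] = 240.0                       # bright object appears
mask2 = bg.apply(frame2)
```

A true MoG model keeps several Gaussians per pixel so it can absorb multimodal backgrounds (e.g. swaying foliage), at higher cost.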

  1. Design study for a 16x zoom lens system for visible surveillance camera

    NASA Astrophysics Data System (ADS)

    Vella, Anthony; Li, Heng; Zhao, Yang; Trumper, Isaac; Gandara-Montano, Gustavo A.; Xu, Di; Nikolov, Daniel K.; Chen, Changchen; Brown, Nicolas S.; Guevara-Torres, Andres; Jung, Hae Won; Reimers, Jacob; Bentley, Julie

    2015-09-01

High zoom ratio zoom lenses have extensive applications in broadcasting, cinema, and surveillance. Here, we present a design study on a 16x zoom lens with 4 groups (including two internal moving groups), designed for, but not limited to, a visible-spectrum surveillance camera. Fifteen different solutions were discovered with nearly diffraction-limited performance, using PNPX or PNNP design forms with the stop located in either the third or fourth group. Some interesting patterns and trends in the summarized results include the following: (a) in designs with such a large zoom ratio, the potential of locating the aperture stop in the front half of the system is limited, with ray height variations through zoom necessitating a very large lens diameter; (b) in many cases, the lens zoom motion has significant freedom to vary due to near-zero total power in the middle two groups; and (c) we discuss the trade-offs between zoom configuration, stop location, packaging factors, and zoom group aberration sensitivity.

  2. SU-F-BRB-05: Collision Avoidance Mapping Using Consumer 3D Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardan, R; Popple, R

    2015-06-15

Purpose: To develop a fast and economical method of scanning a patient's full body contour for use in collision avoidance mapping without the use of ionizing radiation. Methods: Two consumer-level 3D cameras used in electronic gaming were placed in a CT simulator room to scan a phantom patient set up in a high collision probability position. A registration pattern and computer vision algorithms were used to transform the scan into the appropriate coordinate systems. The cameras were then used to scan the surface of a gantry in the treatment vault. Each scan was converted into a polygon mesh for collision testing in a general-purpose polygon interference algorithm. All clinically relevant transforms were applied to the gantry and patient support to create a map of all possible collisions. The map was then tested for accuracy by physically testing the collisions with the phantom in the vault. Results: The scanning fidelity of both the gantry and patient was sufficient to produce a collision prediction accuracy of 97.1% with 64620 geometry states tested in 11.5 s. The total scanning time including computation, transformation, and generation was 22.3 s. Conclusion: Our results demonstrate an economical system to generate collision avoidance maps. Future work includes testing the speed of the framework in real-time collision avoidance scenarios. Research partially supported by a grant from Varian Medical Systems.
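The map-building loop can be sketched with axis-aligned bounding boxes in place of the full polygon-mesh interference test; the geometry, dimensions, and rotation axis below are invented for illustration:

```python
import numpy as np

def aabb(points):
    # Axis-aligned bounding box (min corner, max corner) of a point cloud
    return points.min(axis=0), points.max(axis=0)

def overlaps(a, b):
    # AABB interference test; a and b are (min_xyz, max_xyz) pairs
    return bool(np.all(a[0] <= b[1]) and np.all(b[0] <= a[1]))

def collision_map(gantry_corners, patient_box, angles_deg):
    """Test every gantry angle against a scanned patient volume. Real
    systems use full polygon-mesh interference tests; AABBs keep this
    sketch short while showing the map-building loop."""
    hits = {}
    for ang in angles_deg:
        t = np.radians(ang)
        # Gantry assumed to rotate about the y (couch) axis
        rot = np.array([[np.cos(t), 0, np.sin(t)],
                        [0, 1, 0],
                        [-np.sin(t), 0, np.cos(t)]])
        hits[ang] = overlaps(aabb(gantry_corners @ rot.T), patient_box)
    return hits

# Gantry head: a box 60-80 cm out along +x; patient volume near isocenter
corners = np.array([[x, y, z] for x in (60, 80)
                    for y in (-10, 10) for z in (-10, 10)], float)
patient = (np.array([-20.0, -15.0, 55.0]), np.array([20.0, 15.0, 80.0]))
cmap = collision_map(corners, patient, range(0, 360, 90))
```

In a real implementation each entry of the map would come from mesh-level interference over all couch and gantry states, as in the paper's 64620-state test.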

  3. General Nuclear Medicine

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  4. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
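De-mosaicing such a filter array reduces to strided slicing once the mosaic layout is known. The 2x2 R/G/B/NIR pattern below is an assumed layout for illustration, not necessarily the product's:

```python
import numpy as np

def demosaic_rgbn(raw):
    """Split a sensor image taken through a 2x2 dichroic filter mosaic
    (R, G / B, NIR pattern assumed here; real layouts vary) into four
    quarter-resolution spectral channels."""
    return {
        "R":   raw[0::2, 0::2],
        "G":   raw[0::2, 1::2],
        "B":   raw[1::2, 0::2],
        "NIR": raw[1::2, 1::2],
    }

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
bands = demosaic_rgbn(raw)
```

Interpolating each channel back to full resolution (as in color demosaicing) would follow the same per-channel logic.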

  5. Low-cost real-time automatic wheel classification system

    NASA Astrophysics Data System (ADS)

    Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria

    1992-11-01

This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires the identification of the wheel type, which was performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies up to 30 wheels per minute regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
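One plausible way to get the rotation invariance the abstract describes is a radial intensity profile paired with a nearest-prototype rule. The sketch below is illustrative, not the installed system's actual feature set:

```python
import numpy as np

def radial_profile(img, n_rings=8):
    """Mean intensity in concentric rings about the image center; ring
    averages do not change when the wheel rotates in the field of view."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    ring = np.digitize(r, edges) - 1
    return np.array([img[ring == i].mean() for i in range(n_rings)])

def classify(img, prototypes):
    # Nearest prototype (per-style mean feature) in feature space
    f = radial_profile(img)
    return min(prototypes, key=lambda k: np.linalg.norm(f - prototypes[k]))

# Two synthetic wheel styles: bright hub vs bright rim
hub = np.zeros((32, 32)); hub[12:20, 12:20] = 1.0
rim = np.zeros((32, 32)); rim[:4, :] = 1.0; rim[-4:, :] = 1.0
protos = {"hub": radial_profile(hub), "rim": radial_profile(rim)}
label = classify(np.rot90(hub), protos)   # rotated hub-style wheel
```

Adding a new wheel design then only requires computing and storing one more prototype vector, matching the extensibility claim in the abstract.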

  6. Geocam Space: Enhancing Handheld Digital Camera Imagery from the International Space Station for Research and Applications

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Lee, Yeon Jin; Dille, Michael

    2016-01-01

Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique Low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir-viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high-quality spatial information approaching 2 meters/ground pixel resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov), and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications, and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully georeferenced data products: while current digital cameras typically have integrated GPS, this does not function in the Low Earth Orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth Science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g. Landsat, Google Earth, etc.
The lack of full geolocation information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, such as the timely use of data for disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements including cameras, central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers) reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side and downward facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z position affixed to the interior of the ISS to determine the camera pose corresponding to each image frame. This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - is transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for the individual film frame. 
This new capability represents a significant advance in geolocation from the manual feature-matching approach for both nadir and off-nadir viewing imagery. With the initial geolocation estimate, full georeferencing of an image is completed using the rapid tie-pointing interface in GeoRef, and the resulting data is added to the Gateway to Astronaut Photography of Earth online database in both Geotiff and Keyhole Markup Language (kml) formats. The integration of the GeoRef software component of Geocam Space into the CEO image cataloging workflow is complete, and disaster response imagery acquired by the ISS crew is now fully georeferenced as a standard data product. The on-orbit hardware component (GeoSens) is in final prototyping phase, and is on-schedule for launch to the ISS in late 2016. Installation and routine use of the Geocam Space system for handheld digital camera photography from the ISS is expected to significantly improve the usefulness of this unique dataset for a variety of public- and private-sector applications.

  7. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
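The usage pattern described here, a user-supplied single-frame analysis function applied across many cameras, can be sketched with a thread pool. The camera fetchers below are stand-in callables, not the system's real API:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_frame(frame):
    # User-supplied single-frame analysis (here: count "bright" pixels)
    return sum(1 for px in frame if px > 128)

def run_on_cameras(fetchers, analyze, workers=8):
    """Apply one single-frame analysis function across many cameras in
    parallel. Real deployments would fetch frames over the network and
    scale out on cloud workers; these fetchers are local stand-ins."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda fetch: analyze(fetch()), fetchers))

# Two stub "cameras" returning flat pixel lists
cams = [lambda: [0, 200, 255], lambda: [10, 20, 30]]
counts = run_on_cameras(cams, analyze_frame)
```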

  8. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.

  9. A system for extracting 3-dimensional measurements from a stereo pair of TV cameras

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Cunningham, R.

    1976-01-01

Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point on the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operating and provides a three-dimensional measurement resolution of + or - mm at distances of about 2 m.
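For a calibrated, rectified camera pair, the depth of a matched point follows from the disparity relation Z = fB/d. This is a textbook simplification of the full camera-model and calibration pipeline described above, with illustrative numbers:

```python
def triangulate(xl, xr, f, baseline):
    """Depth and lateral position from a rectified stereo pair via the
    disparity relation Z = f * B / (xl - xr): a textbook simplification
    of a full stereo camera-model and calibration pipeline."""
    disparity = xl - xr            # pixel disparity between the views
    z = f * baseline / disparity   # depth along the optical axis
    x = xl * z / f                 # lateral offset in the left camera frame
    return x, z

# Example: f = 1000 px, 20 cm baseline, 100 px disparity
x, z = triangulate(xl=100.0, xr=0.0, f=1000.0, baseline=0.2)
```

The same relation explains why depth resolution degrades with range: a fixed sub-pixel matching error corresponds to a larger depth error as disparity shrinks.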

  10. Applications of a shadow camera system for energy meteorology

    NASA Astrophysics Data System (ADS)

    Kuhn, Pascal; Wilbert, Stefan; Prahl, Christoph; Garsche, Dominik; Schüler, David; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Blanc, Philippe; Pitz-Paal, Robert

    2018-02-01

    Downward-facing shadow cameras might play a major role in future energy meteorology. Shadow cameras directly image shadows on the ground from an elevated position. They are used to validate other systems (e.g. all-sky imager based nowcasting systems, cloud speed sensors or satellite forecasts) and can potentially provide short term forecasts for solar power plants. Such forecasts are needed for electricity grids with high penetrations of renewable energy and can help to optimize plant operations. In this publication, two key applications of shadow cameras are briefly presented.

  11. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

The body of work called 'Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera-space manipulation by considering the problem of a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth-moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental 'path-dependent' nature to nonholonomic kinematics. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.
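The identify-then-drive-open-loop idea can be reduced to a linear toy: fit an affine map from actuation coordinates to camera-space (pixel) coordinates from a few observed moves, then invert it for a pixel target. The plant below is synthetic and noise-free; real camera-space manipulation uses richer, nonlinear estimation:

```python
import numpy as np

def estimate_camera_space_map(joints, pixels):
    """Least-squares fit of an affine map from actuation coordinates to
    camera-space (pixel) coordinates: the identify-the-input-output-
    relationship step, reduced to a linear toy model."""
    A = np.hstack([joints, np.ones((len(joints), 1))])
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)
    return coeffs

def drive_to(target_px, coeffs):
    # Invert the affine model to find the actuation for a pixel target
    M, b = coeffs[:-1].T, coeffs[-1]
    return np.linalg.solve(M, target_px - b)

# Synthetic plant: pixel = M_true @ joint + b_true (no noise)
M_true = np.array([[2.0, 0.0], [0.0, 3.0]])
b_true = np.array([5.0, -1.0])
joints = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
pixels = joints @ M_true.T + b_true

coeffs = estimate_camera_space_map(joints, pixels)
q = drive_to(np.array([9.0, 5.0]), coeffs)
```

Requiring camera-space success in two well-separated cameras, as the abstract notes, is what turns this per-camera 2D agreement into a guaranteed 3D engagement.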

  12. Backing collisions: a study of drivers' eye and backing behaviour using combined rear-view camera and sensor systems.

    PubMed

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2010-04-01

Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. The setting was a parking facility at UMass Amherst, USA. The subjects were 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: vehicles equipped with a rear-view camera and sensor-system-based parking aid. Outcome measures were subjects' eye fixations while driving and researchers' observation of collisions with objects during backing. Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.

  13. Backing collisions: a study of drivers’ eye and backing behaviour using combined rear-view camera and sensor systems

    PubMed Central

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2012-01-01

    Context Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system, controls were not. Three crash scenarios were introduced. Setting Parking facility at UMass Amherst, USA. Subjects 46 drivers (33 men, 13 women) average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions Vehicles equipped with a rear-view camera and sensor system-based parking aid. Main Outcome Measures Subject’s eye fixations while driving and researcher’s observation of collision with objects during backing. Results Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812

  14. Mapping Land and Water Surface Topography with instantaneous Structure from Motion

    NASA Astrophysics Data System (ADS)

    Dietrich, J.; Fonstad, M. A.

    2012-12-01

    Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects. However, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include any case where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam-removal monitoring. The camera system that was used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers.
The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
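    The surface-differencing step mentioned above reduces to elementwise subtraction of elevation grids. A minimal sketch, with invented grid values and cell size:

```python
import numpy as np

# Difference two time-lapse water-surface DSMs (elevation grids in metres).
dsm_t0 = np.full((4, 4), 101.20)   # water surface at time t0
dsm_t1 = dsm_t0 + 0.05             # surface risen 5 cm everywhere by t1
dsm_t1[0, :] += 0.02               # extra rise along one bank

dod = dsm_t1 - dsm_t0              # DSM of difference (DoD)
cell_area = 0.25 * 0.25            # 25 cm grid cells -> m^2 per cell
volume_change = dod.sum() * cell_area   # net volumetric change ≈ 0.055 m^3
print(round(float(volume_change), 4))
```

    The same differencing applies to any of the temporal surface pairs the abstract mentions (mass movements, lava flows, dam removal).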

  15. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.

    PubMed

    Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  16. Solar-Powered Airplane with Cameras and WLAN

    NASA Technical Reports Server (NTRS)

    Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.; hide

    2004-01-01

    An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.

  17. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy

    NASA Astrophysics Data System (ADS)

    Barabas, Federico M.; Masullo, Luciano A.; Stefani, Fernando D.

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  18. International Ultraviolet Explorer observations of the peculiar variable spectrum of the eclipsing binary R Arae

    NASA Technical Reports Server (NTRS)

    Mccluskey, G. E.; Kondo, Y.

    1983-01-01

    The eclipsing binary system R Arae = HD 149730 is a relatively bright southern system with an orbital period of about 4.4 days. It is a single-lined spectroscopic binary. The spectral class of the primary component is B9 Vp. The system was included in a study of mass flow and evolution in close binary systems using the International Ultraviolet Explorer satellite (IUE). Four spectra in the wavelength range from 1150 to 1900 A were obtained with the far-ultraviolet SWP camera, and six spectra in the range from 1900 to 3200 A were obtained with the mid-ultraviolet LWR camera. The close binary R Arae exhibits very unusual ultraviolet spectra. It appears that no other close binary system, observed with any of the orbiting satellites, shows outside-eclipse ultraviolet continuum flux variations of this nature.

  19. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.
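    The per-frame spot-finding step (threshold the frame, then intensity-weight each connected spot to a sub-pixel centroid) can be sketched as below. The frame is synthetic, and `scipy.ndimage` stands in for the authors' real-time centroiding code; correlating spot intensities with PMT peak heights is a separate matching step not shown here:

```python
import numpy as np
from scipy import ndimage

# Synthetic 16x16 camera frame containing two ion spots.
frame = np.zeros((16, 16))
frame[4:7, 4:7] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]       # spot 1
frame[10:13, 11:14] = [[1, 3, 1], [3, 9, 3], [1, 3, 1]]   # spot 2

mask = frame > 0.5                        # threshold out readout noise
labels, n_spots = ndimage.label(mask)     # connected-component spot finding
spots = ndimage.center_of_mass(frame, labels, range(1, n_spots + 1))
print(n_spots, [tuple(round(c, 2) for c in s) for s in spots])
```

    Each centroid is an intensity-weighted mean of pixel coordinates, which is what gives sub-pixel ion positions from a phosphor-screen image.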

  20. Development of the radial neutron camera system for the HL-2A tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y. P., E-mail: zhangyp@swip.ac.cn; Yang, J. W.; Liu, Yi

    2016-06-15

    A new radial neutron camera system has been developed and operated recently on the HL-2A tokamak to measure spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced from beam-target reactions in the plasma core region. The measurement results from the neutron camera are consistent with the results of both a standard ²³⁵U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.

  1. VSF Measurements and Inversion for RaDyO

    DTIC Science & Technology

    2012-09-30

    near-surface waters, including the surf zone. APPROACH MASCOT (Multi-Angle SCattering Optical Tool) has a 30 mW 658 nm laser diode source...in Santa Barbara Channel are provided in Fig. 1. Despite the widespread use of polarized laser sources across a diversity of Navy applications, this...operations that rely on divers, cameras, laser imaging systems, and active and passive remote sensing systems. These include mine countermeasures, harbor

  2. Coordinating High-Resolution Traffic Cameras : Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  3. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    PubMed Central

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533

  4. Mask technology for EUV lithography

    NASA Astrophysics Data System (ADS)

    Bujak, M.; Burkhart, Scott C.; Cerjan, Charles J.; Kearney, Patrick A.; Moore, Craig E.; Prisbrey, Shon T.; Sweeney, Donald W.; Tong, William M.; Vernon, Stephen P.; Walton, Christopher C.; Warrick, Abbie L.; Weber, Frank J.; Wedowski, Marco; Wilhelmsen, Karl C.; Bokor, Jeffrey; Jeong, Sungho; Cardinale, Gregory F.; Ray-Chaudhuri, Avijit K.; Stivers, Alan R.; Tejnil, Edita; Yan, Pei-yang; Hector, Scott D.; Nguyen, Khanh B.

    1999-04-01

    Extreme UV Lithography (EUVL) is one of the leading candidates for the next generation lithography, which will decrease critical feature size to below 100 nm within 5 years. EUVL uses 10-14 nm light as envisioned by the EUV Limited Liability Company, a consortium formed by Intel and supported by Motorola and AMD to perform R and D work at three national laboratories. Much work has already taken place, with the first prototypical cameras operational at 13.4 nm using low energy laser plasma EUV light sources to investigate issues including the source, camera, electro-mechanical and system issues, photoresists, and of course the masks. EUV lithography masks are fundamentally different from conventional photolithographic masks as they are reflective instead of transmissive. EUV light at 13.4 nm is rapidly absorbed by most materials, thus all light transmission within the EUVL system from source to silicon wafer, including EUV reflected from the mask, is performed by multilayer mirrors in vacuum.

  5. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Abstract Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made on whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images of a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038

  6. Analysis of the variation of range parameters of thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2016-10-01

    Measured range characteristics may vary considerably (up to several dozen percent) between different samples of the same camera type. The question is whether the manufacturing process somehow lacks repeatability or the commonly used measurement procedures themselves need improvement. The presented paper attempts to deal with the aforementioned question. The measurement method has been thoroughly analyzed as well as the measurement test bed. Camera components (such as detector and optics) have also been analyzed and their key parameters have been measured, including noise figures of the entire system. Laboratory measurements are the most precise method used to determine range parameters of a thermal camera. However, in order to obtain reliable results several important conditions have to be fulfilled. One must have the test equipment capable of measurement accuracy (uncertainty) significantly better than the magnitudes of measured quantities. The measurements must be performed in a controlled environment thus excluding the influence of varying environmental conditions. The personnel must be well-trained, experienced in testing the thermal imaging devices and familiar with the applied measurement procedures. The measurement data recorded for several dozen of cooled thermal cameras (from one of leading camera manufacturers) have been the basis of the presented analysis. The measurements were conducted in the accredited research laboratory of Institute of Optoelectronics (Military University of Technology).

  7. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera captures images of the objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can assess traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long; with a long focal length, however, an on-vehicle camera does not have enough field of view to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must combine a long focal length with a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, avoiding this distraction. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt a non-wearable detection method to avoid hindering driving: the gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images of the point at which the subject is gazing straight ahead.

  8. MO-AB-206-00: Nuclear Medicine Physics and Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images used for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.

  9. International Space Station Instruments Collect Imagery of Natural Disasters

    NASA Technical Reports Server (NTRS)

    Evans, C. A.; Stefanov, W. L.

    2013-01-01

    A new focus for utilization of the International Space Station (ISS) is conducting basic and applied research that directly benefits Earth's citizenry. In the Earth Sciences, one such activity is collecting remotely sensed imagery of disaster areas and making those data immediately available through the USGS Hazards Data Distribution System, especially in response to activations of the International Charter for Space and Major Disasters (known informally as the "International Disaster Charter", or IDC). The ISS, together with other NASA orbital sensor assets, responds to IDC activations following notification by the USGS. Most of the activations are due to natural hazard events, including large floods, impacts of tropical systems, major fires, and volcanic eruptions and earthquakes. Through the ISS Program Science Office, we coordinate with ISS instrument teams for image acquisition using several imaging systems. As of 1 August 2013, we have successfully contributed imagery data in support of 14 Disaster Charter Activations, including regions in both Haiti and the east coast of the US impacted by Hurricane Sandy; flooding events in Russia, Mozambique, India, Germany and western Africa; and forest fires in Algeria and Ecuador. ISS-based sensors contributing data include the Hyperspectral Imager for the Coastal Ocean (HICO), the ISERV (ISS SERVIR Environmental Research and Visualization System) Pathfinder camera mounted in the US Window Observational Research Facility (WORF), the ISS Agricultural Camera (ISSAC), formerly operating from the WORF, and high resolution handheld camera photography collected by crew members (Crew Earth Observations). When orbital parameters and operations support data collection, ISS-based imagery adds to the resources available to disaster response teams and contributes to the public-domain record of these events for later analyses.

  10. Skeletal Scintigraphy (Bone Scan)

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  11. Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1995-01-01

    The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target.
Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
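    The ranging pipeline described above (subtract the ambient frame from the laser-on frame, centroid the surviving spot, and convert its disparity from the infinite-range reference point into range) can be sketched as follows. The frame contents, baseline, focal length, and reference column are all invented for illustration:

```python
import numpy as np

H, W = 64, 64
ambient = np.zeros((H, W))                # frame without laser illumination
laser_on = ambient.copy()
laser_on[30:33, 40:43] += 50.0            # laser spot visible only when lit

spot = laser_on - ambient                 # common (ambient) pixels cancel
ys, xs = np.nonzero(spot > 10.0)          # threshold the residual image
w = spot[ys, xs]
cx = float((xs * w).sum() / w.sum())      # spot centroid column (pixels)

x_ref = 48.0        # column of the spot at infinite range (from calibration)
baseline = 0.10     # camera-laser separation in metres (assumed)
focal_px = 800.0    # focal length in pixels (assumed)

disparity = x_ref - cx                    # pixels of parallax
range_m = baseline * focal_px / disparity # simple triangulation
print(round(cx, 2), round(range_m, 2))
```

    The triangulation formula here is the standard stereo relation range = baseline × focal length / disparity; the patent's hardware computes the same disparity from digitized frames.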

  12. Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Astrophysics Data System (ADS)

    Csorba, Illes P.

    Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second-generation image intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, development of a quartz envelope heater.

  13. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager. Spartan has a focal plane consisting of four ...

  14. Evaluation of the MSFC facsimile camera system as a tool for extraterrestrial geologic exploration

    NASA Technical Reports Server (NTRS)

    Wolfe, E. W.; Alderman, J. D.

    1971-01-01

    Utility of the Marshall Space Flight (MSFC) facsimile camera system for extraterrestrial geologic exploration was investigated during the spring of 1971 near Merriam Crater in northern Arizona. Although the system with its present hard-wired recorder operates erratically, the imagery showed that the camera could be developed as a prime imaging tool for automated missions. Its utility would be enhanced by development of computer techniques that utilize digital camera output for construction of topographic maps, and it needs increased resolution for examining near field details. A supplementary imaging system may be necessary for hand specimen examination at low magnification.

  15. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by, on average, distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape, centered in the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, which is set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G) in the stationary sensors and a lens model AF-S ED 24 mm 1:1.4 in the mobile sensor. All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four RAMMER manual operation days in the campaigns of 2012 and 2013. On Feb. 18th the data set is composed of 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash registered by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate some imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set.
In the cases that include the color camera, we analyzed the RGB (red, green, blue) channels, compared them with the data provided by the black-and-white cameras for the same event, and examined the relation of these parameters to the luminous intensity of the flashes. In two peculiar cases, the data obtained at one site showed a stroke, some continuing current during the interval between strokes, and then a subsequent stroke; however, the other site showed that the subsequent stroke was in fact an M-component, since the continuing current had not vanished after its parent stroke. These events would have received a dubious classification if based only on a visual analysis with high-speed cameras at a single site, and they were analyzed in this work.

  16. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable, with a low typical error (0.192 L), a high intraclass correlation coefficient (>0.999) and a low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
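As a rough illustration of the repeatability statistics reported above, the typical error and percentage technical error of measurement (TEM) for a test-retest design can be computed as follows. The volume values below are invented for the sketch and are not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def typical_error(trial1, trial2):
    """Typical error: SD of the test-retest differences divided by sqrt(2)."""
    diffs = [a - b for a, b in zip(trial1, trial2)]
    return stdev(diffs) / sqrt(2)

def tem_percent(trial1, trial2):
    """Technical error of measurement, expressed relative to the grand mean."""
    n = len(trial1)
    tem = sqrt(sum((a - b) ** 2 for a, b in zip(trial1, trial2)) / (2 * n))
    grand_mean = mean(trial1 + trial2)
    return 100.0 * tem / grand_mean

# Illustrative mid-trunk volumes (litres) from two repeated scans
scan1 = [28.4, 31.0, 25.7, 29.9, 33.2]
scan2 = [28.6, 30.8, 25.9, 30.1, 33.0]
te = typical_error(scan1, scan2)
tem = tem_percent(scan1, scan2)
```

Typical error captures within-subject scan-to-scan noise; TEM additionally normalises by the grand mean so that systems measuring different body sizes can be compared.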

  17. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature controlled and vacuum tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components are discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.

  18. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded by both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  19. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded by both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  20. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data must be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  1. KSC-01pp1758

    NASA Image and Video Library

    2001-11-29

    KENNEDY SPACE CENTER, Fla. -- In Hangar A&E, workers watch as an overhead crane lifts the Advanced Camera for Surveys out of its transportation container. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  2. A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.

    1989-01-01

    Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X-Ray Burst Spectrometer, with X-ray instruments which will be available on the Gamma Ray Observatory and eventually with the Gamma Ray Imaging Device (GRID), and with the High Resolution Gamma-Ray and Hard X-Ray Spectrometer (HIREGS) which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one-camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway, as are analyses of the campaign data.

  3. Electronic Fingerprinting for Industry

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Veritec's VeriSystem is a complete identification and tracking system for component traceability, improved manufacturing and processing, and automated shop floor applications. The system includes the Vericode Symbol, a more accurate and versatile alternative to the traditional bar code, that is scanned by charge coupled device (CCD) cameras. The system was developed by Veritec, Rockwell International and Marshall Space Flight Center to identify and track Space Shuttle parts.

  4. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    NASA Astrophysics Data System (ADS)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were first used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuation, and for making better, more timely decisions in buying and selling residential/commercial property. Oblique imagery is also used for infrastructure monitoring, ensuring safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degree) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS-1Ds Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 pixels (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique focal lengths of 28 mm/50 mm and 50 mm/50 mm) were flown over downtown Colorado Springs, Colorado. Boresight flights were flown at 600 m and 1,200 m for the 28 mm nadir camera and at 750 m and 1,500 m for the 50 mm nadir camera. The cameras were calibrated using a 3D cage and multiple convergent images with the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) in planimetry and about 2 to 2.5 GSD in height can be achieved.
Remaining systematic errors were modeled by analyzing residuals using a correction grid. The results of the final bundle adjustments are sufficient to enable Sanborn to produce DEMs/DTMs and orthophotos from the nadir imagery and to create 3D models using georeferenced oblique imagery.
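The accuracy figures above are expressed in multiples of the ground sample distance, which follows directly from the pixel pitch, focal length and flying height quoted in the abstract; a minimal sketch:

```python
def ground_sample_distance(pixel_size_m, focal_length_m, flying_height_m):
    """GSD: pixel size scaled by the ratio of flying height to focal length."""
    return pixel_size_m * flying_height_m / focal_length_m

# Values from the abstract: 6.4 micron pixels, 28 mm nadir lens flown at 600 m
gsd = ground_sample_distance(6.4e-6, 0.028, 600.0)  # metres per pixel
```

At this GSD of roughly 0.14 m, the quoted 1 to 1.5 GSD planimetric accuracy corresponds to about 14-21 cm on the ground.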

  5. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Existing compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor; the global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor with two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with a single camera and 0.031 mm with dual cameras. We conclude that the single-camera algorithm needs improvement to achieve higher accuracy, while the accuracy of the dual-camera method is suitable for the application.
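The global-control-point approach described above amounts to estimating the rigid transform between the sensor frame and the global frame from matched points. A minimal 2D sketch of that estimation follows (the actual system works in 3D, and the function and data below are illustrative, not the authors' code):

```python
from math import atan2, cos, sin

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst (2D)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Optimal rotation angle from the centered correspondences
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        num += xs * yd - ys * xd
        den += xs * xd + ys * yd
    theta = atan2(num, den)
    # Translation maps the rotated source centroid onto the target centroid
    tx = cx_d - (cos(theta) * cx_s - sin(theta) * cy_s)
    ty = cy_d - (sin(theta) * cx_s + cos(theta) * cy_s)
    return theta, tx, ty

# Control points rotated by 90 degrees and shifted by (1, 2)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(1.0, 2.0), (1.0, 3.0), (0.0, 2.0)]
theta, tx, ty = rigid_transform_2d(src, dst)
```

With three or more non-collinear control points the transform is over-determined, which is what lets residuals expose (and compensate) the robot's positioning error.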

  6. PubMed Central

    Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.

    2017-01-01

    Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems; there was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888

  7. Analysis of edge density fluctuation measured by trial KSTAR beam emission spectroscopy system

    NASA Astrophysics Data System (ADS)

    Nam, Y. U.; Zoletnik, S.; Lampert, M.; Kovácsik, Á.

    2012-10-01

    A beam emission spectroscopy (BES) system based on a direct-imaging avalanche photodiode (APD) camera has been designed for the Korea Superconducting Tokamak Advanced Research (KSTAR) device, and a trial system has been constructed and installed to evaluate the feasibility of the design. The system contains two cameras: an APD camera for the BES measurement and a fast visible camera for position calibration. Two pneumatically actuated mirrors are positioned in front of and behind the lens optics. The front mirror switches the measurement between the edge and core regions of the plasma, and the rear mirror switches between the APD and visible cameras. All systems worked properly, and the measured photon flux was reasonable, as expected from simulation. While the measurement data from the trial system were limited, they revealed some interesting characteristics of KSTAR plasmas, suggesting future research with the fully installed BES system. The analysis results and the development plan are presented in this paper.

  8. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. The system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields of view and under different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently; misses are reduced by 37%, a significant improvement.

  9. Photometry of Galactic and Extragalactic Far-Infrared Sources using the 91.5 cm Airborne Infrared Telescope

    NASA Technical Reports Server (NTRS)

    Harper, D. A.

    1996-01-01

    The objective of this grant was to construct a series of far infrared photometers, cameras, and supporting systems for use in astronomical observations in the Kuiper Airborne Observatory. The observations have included studies of galaxies, star formation regions, and objects within the Solar System.

  10. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
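Step (2) of the pipeline above, counting plants from projection histograms, can be sketched as follows; the toy mask, threshold and helper name are illustrative assumptions, not the authors' implementation:

```python
def count_plants(mask, min_count=1):
    """Count plants as runs of columns whose foreground-pixel sum clears a
    threshold in the column-projection histogram of a binary mask."""
    n_cols = len(mask[0])
    histogram = [sum(row[c] for row in mask) for c in range(n_cols)]
    count, inside = 0, False
    for h in histogram:
        if h >= min_count and not inside:
            count += 1          # entering a new above-threshold run
            inside = True
        elif h < min_count:
            inside = False      # leaving the current run
    return count

# Toy binary mask: three vertical "plants" separated by empty columns
mask = [
    [1, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 0],
]
plants = count_plants(mask)
```

The real system first applies the orthographic projection of step (1) so that each plant projects to a compact run of columns, and step (3) then removes plants already counted in an earlier overlapping image.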

  11. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982

  12. ATS-5 12 year in orbit evaluation

    NASA Technical Reports Server (NTRS)

    Kissel, F. J.; Schaffer, J.

    1981-01-01

    The results of the ATS-5 End-of-Life (EOL) tests are presented. Whenever possible, the test data were compared with prelaunch and in-flight data. Pertinent memos and other informal documents containing these data are included as appendixes. The EOL tests include communication subsystem tests, power subsystem tests, and gravity-gradient television camera system tests.

  13. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, rapid and accurate real-time camera positioning has become feasible. Through in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC, which enables real-time positioning of marker points in space. The work includes: (1) using a CMOS sensor to extract the pixels of three target objects, implemented through an FPGA hardware driver, with visible-light LEDs used as the target points of the instrument; (2) filtering the image prior to extraction of the feature-point coordinates (median filtering is used here) to suppress noise introduced by the physical properties of the platform; (3) extracting the marker-point coordinates with an FPGA hardware circuit, using a new iterative threshold selection method to segment the images; the binarized image is then labeled, and the coordinates of the feature points are calculated by the center-of-gravity method; (4) applying the direct linear transformation (DLT) and extreme-constraints methods to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system. A system-on-a-programmable-chip (SOPC) is used, taking advantage of its dual-core computing to run matching and coordinate operations separately, thus increasing processing speed.

  14. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend upon its capability to handle object surfaces with large reflectance variation, traded off against the required number of patterns to be projected. In this paper, we propose and implement a flexible embedded framework that can trigger the camera one or multiple times to capture one or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even at mismatched frame rates, so that the system can project different types of patterns for applications with different scan speeds. The system thus captures a high-quality 3D point cloud even for surfaces with large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is generated adaptively such that the position and number of triggers are determined automatically from the camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it requires no external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506

  15. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    NASA Astrophysics Data System (ADS)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods originally developed for photogrammetry to calibrate the system's camera(s) in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve measuring calibrated patterns on planes before measurement of the actual object can resume after a camera or projector has been moved, and hence they do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.

  16. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    PubMed Central

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403
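Spot-to-spot correspondence in the system above hinges on descriptor matching. The standard nearest-neighbour ratio test that SIFT-based matchers typically use can be sketched with toy descriptors (SIFT extraction itself is not reproduced here, and whether this exact test is used in the paper is an assumption):

```python
from math import dist

def ratio_test_matches(query_desc, stored_desc, ratio=0.8):
    """Keep a match only when the nearest neighbour is clearly closer than
    the second nearest (Lowe's ratio test), rejecting ambiguous matches."""
    matches = []
    for i, d in enumerate(query_desc):
        dists = sorted((dist(d, e), j) for j, e in enumerate(stored_desc))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 3-D "descriptors": each query vector has one unambiguous counterpart
query = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
stored = [(0.0, 0.1, 1.0), (1.0, 0.1, 0.0), (0.5, 0.5, 0.5)]
matches = ratio_test_matches(query, stored)
```

A spot is then recognised when enough descriptors in the current camera frame pass this test against the descriptors stored for a recorded spot, which is what lets the system work where GPS does not.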

  17. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera.

    PubMed

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-02-04

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots.

  18. Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling.

    PubMed

    Shahbazi, Mozhdeh; Sohn, Gunho; Théau, Jérôme; Menard, Patrick

    2015-10-30

    The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing.

  19. Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling

    PubMed Central

    Shahbazi, Mozhdeh; Sohn, Gunho; Théau, Jérôme; Menard, Patrick

    2015-01-01

    The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing. PMID:26528976

  20. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...
