Sample records for camera system called

  1. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    DTIC Science & Technology

    2011-03-24

    as the change in blur at different distances from the pixel plane can be inferred. Cameras with a micro lens array (called plenoptic cameras...images from 8 slightly different perspectives [14,43]. Dappled photography is similar to the plenoptic camera approach except that a cosine mask

  2. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e., the image intensifier, was developed, which revealed the dominating factor(s) that limit the energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity, and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and a new system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.
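
    The photon-counting step at the heart of iQID can be illustrated with a minimal sketch: threshold an intensified frame, group bright pixels into clusters, and take each cluster's intensity-weighted centroid as a gamma-ray interaction position. This is only a schematic reading of the approach, not the authors' algorithm; the threshold value and the synthetic frame are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_photons(frame, threshold):
        """Estimate photon interaction positions in one intensified frame.

        frame: 2D array of pixel intensities; threshold: intensity cut
        separating photon flashes from background (sensor-dependent,
        assumed here). Returns (N, 2) centroids, one per detected flash.
        """
        labels, n = ndimage.label(frame > threshold)
        if n == 0:
            return np.empty((0, 2))
        # intensity-weighted centroid of each cluster ~ interaction position
        return np.asarray(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

    # Synthetic check: two bright flashes on a Poisson background.
    rng = np.random.default_rng(0)
    frame = rng.poisson(2.0, (256, 256)).astype(float)
    frame[100:103, 50:53] += 80.0
    frame[200:203, 180:183] += 95.0
    print(count_photons(frame, threshold=40.0))   # ~(101, 51) and ~(201, 181)
    ```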

  3. Camera Ready to Install on Mars Reconnaissance Orbiter

    NASA Image and Video Library

    2005-01-07

    A telescopic camera called the High Resolution Imaging Science Experiment, or HiRISE (right), was installed onto the main structure of NASA's Mars Reconnaissance Orbiter (left) on Dec. 11, 2004, at Lockheed Martin Space Systems, Denver.

  4. A teledentistry system for the second opinion.

    PubMed

    Gambino, Orazio; Lima, Fausto; Pirrone, Roberto; Ardizzone, Edoardo; Campisi, Giuseppina; di Fede, Olga

    2014-01-01

    In this paper we present a Teledentistry system aimed at the Second Opinion task. It makes use of a particular camera, called an intra-oral camera (also called a dental camera), to perform photo shooting and real-time video of the inner part of the mouth. The pictures acquired by the Operator with such a device are sent to the Oral Medicine Expert (OME) by means of a standard File Transfer Protocol (FTP) service, and the real-time video is channeled into a video stream by means of the VideoLAN client/server (VLC) application. The system is composed of HTML5 web pages generated by PHP and allows the Second Opinion to be performed both when the Operator and the OME are logged in and when one of them is offline.

  5. Development of a digital camera tree evaluation system

    Treesearch

    Neil Clark; Daniel L. Schmoldt; Philip A. Araman

    2000-01-01

    Within the Strategic Plan for Forest Inventory and Monitoring (USDA Forest Service 1998), there is a call to "conduct applied research in the use of [advanced technology] towards the end of increasing the operational efficiency and effectiveness of our program". The digital camera tree evaluation system is part of that research, aimed at decreasing field...

  6. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the movability of the system guarantees appropriate baselines to supply more robust point correspondences. Additionally, using a single working camera avoids a drawback of multi-camera networks, namely that variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  7. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently, a camera system with four oblique and one nadir looking cameras is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from test flights.

  8. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
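
    A modern software analogue of the "jitter-miser" idea is easy to sketch: estimate each frame's translation by phase correlation and shift it back, keeping the per-frame estimates as motion data. This is a hedged illustration of the concept, not the hardware design described above; OpenCV's phase correlation stands in for the logic circuits.

    ```python
    import cv2
    import numpy as np

    def stabilize(frames):
        """Stabilize translational jitter by registering frames to the first.

        frames: list of single-channel uint8 images. Returns the stabilized
        frames plus per-frame (dx, dy) jitter estimates, which themselves
        carry motion information, as the abstract suggests.
        """
        ref = np.float32(frames[0])
        out, jitter = [frames[0]], [(0.0, 0.0)]
        for f in frames[1:]:
            (dx, dy), _ = cv2.phaseCorrelate(ref, np.float32(f))
            M = np.float32([[1, 0, -dx], [0, 1, -dy]])  # undo the estimated shift
            out.append(cv2.warpAffine(f, M, (f.shape[1], f.shape[0])))
            jitter.append((dx, dy))
        return out, jitter
    ```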

  9. Development of a real time multiple target, multi camera tracker for civil security applications

    NASA Astrophysics Data System (ADS)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS (Networked Intelligent Video Surveillance). Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  10. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of the incident light can be flexibly modulated in the DMD camera, enabling the camera pixels to always receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensities and recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
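
    The core idea, dividing each pixel's recorded value by its known per-pixel exposure gain to obtain a radiance estimate over a much wider range, can be sketched as follows. The variable names, the saturation handling, and the gain-update policy are assumptions for illustration, not the authors' exact algorithm.

    ```python
    import numpy as np

    def recover_radiance(raw, gain, saturation=0.95):
        """Recover an HDR radiance map from one coded-exposure frame.

        raw:  normalized sensor image in [0, 1].
        gain: per-pixel exposure attenuation in (0, 1] applied by the DMD,
              e.g. the fraction of the frame time each micromirror was 'on'.
        """
        radiance = raw / gain            # undo the per-pixel attenuation
        saturated = raw >= saturation    # flag pixels that still clipped
        return radiance, saturated

    def update_gain(gain, saturated, dark, step=2.0):
        """One step of a simple adaptive intensity-control loop (assumed policy)."""
        gain = np.where(saturated, gain / step, gain)            # darken bright pixels
        gain = np.where(dark, np.minimum(gain * step, 1.0), gain)  # brighten dark ones
        return gain
    ```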

  11. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    Dedicated charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of a CCD camera system and an electronic fiberscopy system are at least US $10,000 and US $30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US $1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  12. A modular positron camera for the study of industrial processes

    NASA Astrophysics Data System (ADS)

    Leadbeater, T. W.; Parker, D. J.

    2011-10-01

    Positron imaging techniques rely on the detection of the back-to-back annihilation photons arising from positron decay within the system under study. A standard technique, called positron emitting particle tracking (PEPT) [1], uses a number of these detected events to rapidly determine the position of a positron emitting tracer particle introduced into the system under study. Typical applications of PEPT are in the study of granular and multi-phase materials in the disciplines of engineering and the physical sciences. Using components from redundant medical PET scanners, a modular positron camera has been developed. This camera consists of a number of small independent detector modules, which can be arranged in custom geometries tailored towards the application in question. The flexibility of the modular camera geometry allows for high photon detection efficiency within specific regions of interest, the ability to study large and bulky systems, and the application of PEPT to difficult or remote processes, as the camera is inherently transportable.
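
    At its core, PEPT locates the tracer as the point closest, in a least-squares sense, to the set of lines of response defined by detected photon pairs. A minimal sketch of that step, assuming each line is given by a point p_i and a unit direction d_i (real PEPT iteratively discards the worst-fitting lines as corrupt events; that refinement is omitted):

    ```python
    import numpy as np

    def pept_locate(points, dirs):
        """Least-squares intersection of annihilation lines of response.

        points: (N, 3) array, one point on each line.
        dirs:   (N, 3) array of unit direction vectors of the lines.
        Returns the 3D point minimizing summed squared distance to all lines.
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(points, dirs):
            P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
            A += P
            b += P @ p
        return np.linalg.solve(A, b)        # needs >= 2 non-parallel lines
    ```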

  13. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information with accuracies up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close range images with point cloud data, by fitting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage, free movement, benefits augmented reality applications and real-time measurements. A so-called real image, captured by a smartphone camera, is therefore matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match the parameters of the image, assuming an ideal, distortion-free camera.
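
    The synthetic image described above can be produced by projecting the 3D point cloud through an ideal pinhole model at an assumed exterior orientation. A minimal sketch follows; the intrinsics K, rotation R, and camera centre C are placeholders standing in for the smartphone's calibration and pose, not values from the paper.

    ```python
    import numpy as np

    def render_synthetic(points, colors, K, R, C, width, height):
        """Project a coloured 3D point cloud into an ideal pinhole camera.

        points: (N, 3) world coordinates; colors: (N,) intensities.
        K: 3x3 intrinsics; R: 3x3 world-to-camera rotation; C: camera centre.
        Nearer points overwrite farther ones via a simple z-buffer.
        """
        Xc = (points - C) @ R.T                  # world -> camera coordinates
        infront = Xc[:, 2] > 0
        Xc, colors = Xc[infront], colors[infront]
        uvw = Xc @ K.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (0 <= u) & (u < width) & (0 <= v) & (v < height)
        u, v, z, c = u[ok], v[ok], Xc[ok, 2], colors[ok]
        img = np.zeros((height, width))
        zbuf = np.full((height, width), np.inf)
        for ui, vi, zi, ci in zip(u, v, z, c):
            if zi < zbuf[vi, ui]:                # keep the nearest point per pixel
                zbuf[vi, ui] = zi
                img[vi, ui] = ci
        return img
    ```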

  14. Plate refractive camera model and its applications

    NASA Astrophysics Data System (ADS)

    Huang, Longxiang; Zhao, Xu; Cai, Shen; Liu, Yuncai

    2017-03-01

    In real applications, a pinhole camera capturing objects through a planar parallel transparent plate is frequently employed. Due to the refractive effects of the plate, such an imaging system does not comply with the conventional pinhole camera model. Although the system is ubiquitous, it has not been thoroughly studied. This paper aims at presenting a simple virtual camera model, called the plate refractive camera model, which has a form similar to a pinhole camera model and can efficiently model refractions through a plate. The key idea is to employ a pixel-wise viewpoint concept to encode the refraction effects into a pixel-wise pinhole camera model. The proposed camera model realizes an efficient forward projection computation method and has several advantages in applications. First, the model can help to compute the caustic surface to represent the changes of the camera viewpoints. Second, the model has strengths in analyzing and rectifying the image caustic distortion caused by the plate refraction effects. Third, the model can be used to calibrate the camera's intrinsic parameters without removing the plate. Last but not least, the model contributes to plate refractive triangulation methods that solve the plate refractive triangulation problem easily in multiple views. We verify our theory in both synthetic and real experiments.
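
    The geometric fact behind the pixel-wise viewpoint idea is that a ray crossing a parallel plate of thickness t and refractive index n emerges parallel to itself but laterally shifted, and the shift grows with the incidence angle, which is what displaces the effective viewpoint per pixel. A small worked sketch of that shift via Snell's law (the numeric example is illustrative, not from the paper):

    ```python
    import numpy as np

    def plate_shift(theta, t, n):
        """Lateral displacement of a ray crossing a parallel plate.

        theta: incidence angle in radians, measured from the plate normal.
        t:     plate thickness; n: refractive index of the plate (air outside).
        The ray exits parallel to its incoming direction, shifted by s.
        """
        theta_r = np.arcsin(np.sin(theta) / n)      # Snell: sin(t1) = n sin(t2)
        return t * np.sin(theta - theta_r) / np.cos(theta_r)

    # Example: a 5 mm glass plate (n = 1.5) at 30 degrees incidence
    print(plate_shift(np.radians(30.0), t=5.0, n=1.5))  # ~0.97 mm
    ```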

  15. Commercially available high-speed system for recording and monitoring vocal fold vibrations.

    PubMed

    Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya

    2009-12-01

    We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.

  16. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called "Camera Space Manipulation" is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D "success" of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering the problem of a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental "path dependent" nature to nonholonomic kinematics. This work focuses on the sensor space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  17. Infrared On-Orbit RCC Inspection With the EVA IR Camera: Development of Flight Hardware From a COTS System

    NASA Technical Reports Server (NTRS)

    Gazanik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Jenkins, Rusty; Yates, Rusty; Stephan, Ryan

    2005-01-01

    In November 2004, NASA's Space Shuttle Program approved the development of the Extravehicular (EVA) Infrared (IR) Camera to test the application of infrared thermography to on-orbit reinforced carbon-carbon (RCC) damage detection. A multi-center team composed of members from NASA's Johnson Space Center (JSC), Langley Research Center (LaRC), and Goddard Space Flight Center (GSFC) was formed to develop the camera system and plan a flight test. The initial development schedule called for the delivery of the system in time to support STS-115 in late 2005. At the request of Shuttle Program managers and the flight crews, the team accelerated its schedule and delivered a certified EVA IR Camera system in time to support STS-114 in July 2005 as a contingency. The development of the camera system, led by LaRC, was based on the Commercial-Off-the-Shelf (COTS) FLIR S65 handheld infrared camera. An assessment of the S65 system in regards to space-flight operation was critical to the project. This paper discusses the space-flight assessment and describes the significant modifications required for EVA use by the astronaut crew. The on-orbit inspection technique will be demonstrated during the third EVA of STS-121 in September 2005 by imaging damaged RCC samples mounted in a box in the Shuttle's cargo bay.

  18. JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium

    DTIC Science & Technology

    1990-10-29

    screen; the relative attitude is then determined. 2) Video Sensor System: Specific patterns (grapple target, etc.) drawn on the target spacecraft, or the...entire target spacecraft, is imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours and...standard figure called "grapple target" located in the vicinity of the grapple fixture on the target spacecraft is imaged by camera. Contour lines and

  19. Lymphoscintigraphy

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  20. Hepatobiliary

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  1. Exploring the imaging properties of thin lenses for cryogenic infrared cameras

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Verdet, Sebastien; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Grulois, Tatiana; Matallah, Noura

    2016-05-01

    Designing a cryogenic camera is a good strategy to miniaturize and simplify an infrared camera using a cooled detector. Indeed, integrating optics inside the cold shield makes it simple to athermalize the design, guarantees a cold pupil, and relaxes the constraint of a long back focal length for short focal length systems. In this way, cameras made of a single lens or two lenses are viable systems with good optical features and good stability in image correction. However, this involves a relatively significant additional optical mass inside the dewar and thus increases the cool-down time of the camera. ONERA is currently exploring a minimalist strategy consisting of giving an imaging function to the thin optical plates that are found in conventional dewars. In this way, we could make a cryogenic camera that has the same cool-down time as a traditional dewar without an imaging function. Two examples will be presented: the first is a camera using a dual-band infrared detector, made of a lens outside the dewar and a lens inside the cold shield, the latter having the main optical power of the system. We were able to design a cold plano-convex lens with a thickness lower than 1 mm. The second example is an evolution of a former cryogenic camera called SOIE. We replaced the cold meniscus by a plano-convex Fresnel lens, decreasing the optical thermal mass by 66%. The performances of both cameras will be compared.

  2. Fall incidents unraveled: a series of 26 video-based real-life fall events in three frail older persons

    PubMed Central

    2013-01-01

    Background For prevention and detection of falls, it is essential to unravel the way in which older people fall. This study aims to provide a description of video-based real-life fall events and to examine real-life falls using the classification system by Noury and colleagues, which divides a fall into four phases (the prefall, critical, postfall and recovery phase). Methods Observational study of three older persons at high risk for falls, residing in assisted living or residential care facilities: a camera system was installed in each participant’s room covering all areas, using a centralized PC platform in combination with standard Internet Protocol (IP) cameras. After a fall, two independent researchers analyzed recorded images using the camera position with the clearest viewpoint. Results A total of 30 falls occurred of which 26 were recorded on camera over 17 months. Most falls happened in the morning or evening (62%), when no other persons were present (88%). Participants mainly fell backward (initial fall direction and landing configuration) on the pelvis or torso and none could get up unaided. In cases where a call alarm was used (54%), an average of 70 seconds (SD=64; range 15–224) was needed to call for help. Staff responded to the call after an average of eight minutes (SD=8.4; range 2–33). Mean time on the ground was 28 minutes (SD=25.4; range 2–59) without using a call alarm compared to 11 minutes (SD=9.2; range 3–38) when using a call alarm (p=0.445). The real life falls were comparable with the prefall and recovery phase of Noury’s classification system. The critical phase, however, showed a prolonged duration in all falls. We suggest distinguishing two separate phases: a prolonged loss of balance phase and the actual descending phase after failure to recover balance, resulting in the impact of the body on the ground. In contrast to the theoretical description, the postfall phase was not typically characterized by inactivity; this depended on the individual. Conclusions This study contributes to a better understanding of the fall process in private areas of assisted living and residential care settings in older persons at high risk for falls. PMID:24090211

  3. General Nuclear Medicine

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  4. Skeletal Scintigraphy (Bone Scan)

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  5. The system analysis of light field information collection based on the light field imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-10-01

    Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. Since the emergence of the light field camera, micro-array structures have been adopted in most light field information acquisition systems (LFIAS), mainly including micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the LFIAS structures commonly used in light field cameras in recent years. The LFIAS have been analyzed based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array" (MAA). The LFIAS are analyzed based on the knowledge of information optics; this paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.

  6. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  7. In-flight photogrammetric camera calibration and validation via complementary lidar

    NASA Astrophysics Data System (ADS)

    Gneeniss, A. S.; Mills, J. P.; Miller, P. E.

    2015-02-01

    This research assumes lidar as a reference dataset against which in-flight camera system calibration and validation can be performed. The methodology utilises a robust least squares surface matching algorithm to align a dense network of photogrammetric points to the lidar reference surface, allowing for the automatic extraction of so-called lidar control points (LCPs). Adjustment of the photogrammetric data is then repeated using the extracted LCPs in a self-calibrating bundle adjustment with additional parameters. This methodology was tested using two different photogrammetric datasets, from a Microsoft UltraCamX large format camera and an Applanix DSS322 medium format camera. Systematic sensitivity testing explored the influence of the number and weighting of LCPs. For both camera blocks it was found that as the number of control points increases, the accuracy improves regardless of point weighting. The calibration results were compared with those obtained using ground control points, with good agreement found between the two.

  8. Camera Layout Design for the Upper Stage Thrust Cone

    NASA Technical Reports Server (NTRS)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

    Engineers in the Integrated Design and Analysis Division (EV30) use a variety of different tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer. Pro-Engineer is computer-aided design (CAD) software that allows designers to create computer generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stages of the Ares I vehicle. For the Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with other hardware present on the thrust cone.

  9. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  10. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  11. Overview of LBTI: A Multipurpose Facility for High Spatial Resolution Observations

    NASA Technical Reports Server (NTRS)

    Hinz, P. M.; Defrere, D.; Skemer, A.; Bailey, V.; Stone, J.; Spalding, E.; Vaz, A.; Pinna, E.; Puglisi, A.; Esposito, S.

    2016-01-01

    The Large Binocular Telescope Interferometer (LBTI) is a high spatial resolution instrument developed for coherent imaging and nulling interferometry using the 14.4 m baseline of the 2x8.4 m LBT. The unique telescope design, comprising dual apertures on a common elevation-azimuth mount, enables a broad range of observing modes. The full system comprises dual adaptive optics systems, a near-infrared phasing camera, a 1-5 micrometer camera (called LMIRCam), and an 8-13 micrometer camera (called NOMIC). The key program for LBTI is the Hunt for Observable Signatures of Terrestrial planetary Systems (HOSTS), a survey using nulling interferometry to constrain the typical brightness from exozodiacal dust around nearby stars. Additional observations focus on the detection and characterization of giant planets in the thermal infrared, high spatial resolution imaging of complex scenes such as Jupiter's moon Io, planets forming in transition disks, and the structure of active galactic nuclei (AGN). Several instrumental upgrades are currently underway to improve and expand the capabilities of LBTI. These include: improving the performance and limiting magnitude of the parallel adaptive optics systems; quadrupling the field of view of LMIRCam (increasing it to 20"x20"); adding an integral field spectrometry mode; and implementing a new algorithm for path length correction that accounts for dispersion due to atmospheric water vapor. We present the current architecture and performance of LBTI, as well as an overview of the upgrades.

  12. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially available video systems for field installation cost ~$11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.
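
    The trigger chain described above (sensor to microcontroller to 5 V pulse to camera firmware) can be sketched in MicroPython-style code. This is purely illustrative: the pin numbers, ADC scale, threshold, and timings are assumptions, not values from the ATVIS design, which uses its own controller program.

    ```python
    # MicroPython-style sketch of an ATVIS-like trigger loop (illustrative only;
    # pin numbering and ADC resolution are port-specific assumptions).
    from machine import ADC, Pin
    import time

    sensor = ADC(0)                      # e.g. a stage or turbidity sensor
    trigger = Pin(2, Pin.OUT, value=0)   # drives the 5 V pulse to the camera USB

    THRESHOLD = 30000                    # assumed raw ADC trigger level

    while True:
        if sensor.read_u16() > THRESHOLD:
            trigger.value(1)             # pulse tells the camera firmware to
            time.sleep_ms(100)           # take a photo or start/stop video
            trigger.value(0)
            time.sleep(60)               # hold-off: one event = one recording
        time.sleep_ms(50)
    ```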

  13. PreCam Survey Work at ANL

    Science.gov Websites

    The Argonne/HEP Dark Energy Survey (DES) group, working on the Dark Energy Camera (DECam), built a mini-DECam camera called PreCam. This camera has provided valuable

  14. An evolution of technologies and applications of gamma imagers in the nuclear cycle industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, R. A.; Carrel, F.; Menaa, N.

    The tracking of radiation contamination and distribution has become a high priority in the nuclear cycle industry in order to respect the ALARA principle, which is a main challenge during decontamination and dismantling activities. To support this need, AREVA/CANBERRA and CEA LIST have been actively carrying out research and development on a gamma-radiation imager. In this paper we will present the new generation of gamma camera, called GAMPIX. This system is based on the Timepix chip, hybridized with a CdTe substrate. A coded mask can be used in order to increase the sensitivity of the camera. Moreover, due to the USB connection with a standard computer, this gamma camera is immediately operational and user-friendly. The final system is a very compact gamma camera (global weight is less than 1 kg without any shielding) which can be used as a hand-held device for radioprotection purposes. In this article, we present the main characteristics of this new generation of gamma camera and we expose experimental results obtained during in situ measurements. Even though we present preliminary results, the final product is in the industrialization phase to address various application specifications. (authors)

  15. Modular telerobot control system for accident response

    NASA Astrophysics Data System (ADS)

    Anderson, Richard J. M.; Shirey, David L.

    1999-08-01

    The Accident Response Mobile Manipulator System (ARMMS) is a teleoperated emergency response vehicle that deploys two hydraulic manipulators, five cameras, and an array of sensors to the scene of an incident. It is operated from a remote base station that can be situated up to four kilometers away from the site. Recently, a modular telerobot control architecture called SMART was applied to ARMMS to improve the precision, safety, and operability of the manipulators on board. Using SMART, a prototype manipulator control system was developed in a couple of days, and an integrated working system was demonstrated within a couple of months. New capabilities such as camera-frame teleoperation, autonomous tool changeout and dual manipulator control have been incorporated. The final system incorporates twenty-two separate modules and implements seven different behavior modes. This paper describes the integration of SMART into the ARMMS system.

  16. Scientific Design of a High Contrast Integral Field Spectrograph for the Subaru Telescope

    NASA Technical Reports Server (NTRS)

    McElwain, Michael W.

    2012-01-01

    Ground based telescopes equipped with adaptive optics systems and specialized science cameras are now capable of directly detecting extrasolar planets. We present the scientific design for a high contrast integral field spectrograph for the Subaru Telescope. This lenslet based integral field spectrograph will be implemented into the new extreme adaptive optics system at Subaru, called SCExAO.

  17. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time consuming for inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability, including higher spatial resolution, remote sensing and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. A cantilever beam with an artificial crack was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.
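
    Motion magnification, as used above for modal identification, amplifies small temporal intensity variations inside a chosen frequency band. A minimal Eulerian-style sketch follows; the band edges and amplification factor are assumptions, and the published methods add spatial pyramid decomposition and phase processing that are omitted here.

    ```python
    import numpy as np

    def magnify(frames, fs, f_lo, f_hi, alpha):
        """Amplify subtle motions in a video via temporal band-pass filtering.

        frames: (T, H, W) float array; fs: frame rate in Hz.
        f_lo, f_hi: pass band bracketing the structural mode of interest.
        alpha: amplification factor applied to the band-passed signal.
        """
        T = frames.shape[0]
        F = np.fft.rfft(frames, axis=0)            # per-pixel temporal spectrum
        freqs = np.fft.rfftfreq(T, d=1.0 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        bandpassed = np.fft.irfft(F * band[:, None, None], n=T, axis=0)
        return frames + alpha * bandpassed         # add the amplified motion back
    ```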

  18. Determination of the microbolometric FPA's responsivity with imaging system's radiometric considerations

    NASA Astrophysics Data System (ADS)

    Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal

    2013-10-01

    Thermal imagers and the infrared array sensors used therein are subject to a calibration procedure, and their voltage sensitivity to incident radiation is evaluated during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not held to such elevated standards, it is still important that the image faithfully represent temperature variations across the scene. Detectors used in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiation distribution from a uniform source. In this article a non-uniformity correction method is presented that takes into account the optical system's radiometry. Predictions of the irradiation distribution have been confronted with measured irradiance values. The presented radiometric model allows a fast and accurate non-uniformity correction to be carried out.
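
    In practice, a non-uniformity correction of this kind reduces to a per-pixel gain/offset map. A minimal two-point sketch is given below, where the per-pixel irradiance predicted by the optical model replaces the naive assumption of flat illumination; the function and variable names are assumptions for illustration.

    ```python
    import numpy as np

    def build_nuc(frame_cold, frame_hot, irrad_cold, irrad_hot):
        """Two-point non-uniformity correction using a radiometric model.

        frame_cold/frame_hot: raw detector frames viewing two blackbody levels.
        irrad_cold/irrad_hot: per-pixel irradiance predicted by the optical
        model for those levels (non-uniform across the FPA in general).
        Returns per-pixel gain and offset mapping raw counts to irradiance.
        """
        gain = (irrad_hot - irrad_cold) / (frame_hot - frame_cold)
        offset = irrad_cold - gain * frame_cold
        return gain, offset

    def correct(frame, gain, offset):
        """Apply the correction to a raw frame."""
        return gain * frame + offset
    ```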

  19. Crack Detection in Concrete Tunnels Using a Gabor Filter Invariant to Rotation.

    PubMed

    Medina, Roberto; Llamas, José; Gómez-García-Bermejo, Jaime; Zalama, Eduardo; Segarra, Miguel José

    2017-07-20

    In this article, a system for the detection of cracks in concrete tunnel surfaces, based on image sensors, is presented. Both data acquisition and processing are covered. Linear cameras and proper lighting are used for data acquisition. The required resolution of the camera sensors and the number of cameras are discussed in terms of the crack size and the tunnel type. Data processing is done by applying a new method called the Gabor filter invariant to rotation, allowing the detection of cracks in any direction. The parameter values of this filter are set by using a modified genetic algorithm based on the Differential Evolution optimization method. The detection of pixels belonging to cracks achieves a balanced accuracy of 95.27%, improving on the results of previous approaches.
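
    Rotation invariance in the sense used above can be obtained by evaluating a Gabor filter over a set of orientations and keeping the maximum response at each pixel, so cracks in any direction respond equally. A minimal OpenCV sketch, with illustrative kernel parameters rather than the genetically optimized values of the paper:

    ```python
    import cv2
    import numpy as np

    def gabor_rotation_invariant(gray, n_orient=16):
        """Max response over orientations of a Gabor filter bank."""
        gray = np.float32(gray)
        response = np.zeros_like(gray)
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            kern = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0)
            response = np.maximum(response, cv2.filter2D(gray, cv2.CV_32F, kern))
        return response   # threshold this map to label crack pixels
    ```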

  20. Generation of animation sequences of three dimensional models

    NASA Technical Reports Server (NTRS)

    Poi, Sharon (Inventor); Bell, Brad N. (Inventor)

    1990-01-01

    The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
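
    The hierarchy mechanics described, where transformations applied to one model propagate down to its children, amount to a scene graph in which each node composes its local transform with its parent's. A minimal sketch under that reading; the class and function names are assumptions, not the patent's terminology.

    ```python
    import numpy as np

    class Node:
        """One model in the hierarchy; motion propagates to its children."""
        def __init__(self, name, local=None):
            self.name = name
            self.local = np.eye(4) if local is None else local
            self.children = []

        def attach(self, child):            # combine models without merging them
            self.children.append(child)
            return child

        def world_transforms(self, parent=np.eye(4)):
            world = parent @ self.local     # parent motion carries down the tree
            yield self.name, world
            for c in self.children:
                yield from c.world_transforms(world)

    def translate(x, y, z):
        m = np.eye(4); m[:3, 3] = [x, y, z]; return m

    arm = Node("arm", translate(0, 1, 0))
    hand = arm.attach(Node("hand", translate(0.5, 0, 0)))
    arm.local = translate(2, 1, 0) @ arm.local   # moving the arm moves the hand
    for name, m in arm.world_transforms():
        print(name, m[:3, 3])                    # arm (2,2,0), hand (2.5,2,0)
    ```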

  1. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    NASA Astrophysics Data System (ADS)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to SWIR fast cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, which is a truly disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on board thanks to an FPGA. We will show its performance and present its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current and readout speed, based on the SNAKE SWIR detector from Sofradir. This camera was called C-RED 2. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.

  2. Uav Photogrammetric Solution Using a Raspberry pi Camera Module and Smart Devices: Test and Results

    NASA Astrophysics Data System (ADS)

    Piras, M.; Grasso, N.; Jabbar, A. Abdul

    2017-08-01

    Nowadays, smart technologies are an important part of our actions and life, both in indoor and outdoor environments. There are several smart devices that are very easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi allows the installation of an internal camera, called the Raspberry Pi Camera Module, in both RGB and NIR bands. The advantages of this system are its limited cost (< 60 euro), light weight, and simplicity of use and integration. This paper describes research in which a Raspberry Pi with the Camera Module was installed onto a UAV hexacopter based on the ArduCopter system, with the purpose of collecting pictures for photogrammetry. Firstly, the system was tested with the aim of verifying the performance of the RPi camera in terms of frames per second/resolution and power requirements. Moreover, a GNSS receiver Ublox M8T was installed and connected to the Raspberry platform in order to collect the real-time position and the raw data, for data processing and to define the time reference. The IMU was also tested to assess the impact of UAV rotor noise on different sensors such as the accelerometer, gyroscope and magnetometer. A comparison of the achieved results (accuracy) on some check points of the point clouds obtained by the camera is reported as well, in order to analyse in depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described; in particular, the dataset acquired and the results obtained are analysed.
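
    A capture setup of the kind described can be scripted directly on the Pi with the picamera library. A minimal still-capture sketch follows; the resolution, framerate, timing, and file paths are assumptions, not the settings used in the paper.

    ```python
    # Minimal still-capture loop for a Raspberry Pi Camera Module (sketch only).
    import time
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (2592, 1944)   # full-resolution stills on the v1 module
    camera.framerate = 15

    time.sleep(2)                      # let exposure and gain settle
    for i in range(100):               # one image per second for this block
        camera.capture('/home/pi/flight/img_%04d.jpg' % i)
        time.sleep(1)
    ```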

  3. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  4. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye, the so-called camera-eye system, is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that the contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
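
    The epipolar pipeline sketched above, outlier suppression, eight-point estimation, and linear triangulation, maps onto standard OpenCV calls. A minimal sketch assuming matched point arrays and known projection matrices; OpenCV's LMedS option mirrors the paper's LMedS analysis, while the Graph Transformation Matching stage is omitted.

    ```python
    import cv2
    import numpy as np

    def reconstruct(pts1, pts2, P1, P2):
        """Outlier rejection + linear triangulation of matched retinal points.

        pts1, pts2: (N, 2) arrays of corresponding image points (N >= 8).
        P1, P2: 3x4 camera projection matrices from the calibration step.
        """
        pts1, pts2 = np.float32(pts1), np.float32(pts2)
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
        m = inliers.ravel() == 1
        X = cv2.triangulatePoints(P1, P2, pts1[m].T, pts2[m].T)  # homogeneous 4xM
        return (X[:3] / X[3]).T                                  # (M, 3) points
    ```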

  5. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
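
    Once corresponding feet-point pairs have been "dropped" by tracked targets, fitting the plane-induced homography and testing label consistency reduce to a few OpenCV calls. A minimal sketch; the degeneracy checks and field-of-view-line rules described above are omitted, and the tolerance is an assumed value.

    ```python
    import cv2
    import numpy as np

    def fit_ground_homography(feet_a, feet_b):
        """Plane-induced homography from corresponding feet points.

        feet_a, feet_b: (N, 2) arrays of ground-contact points of the same
        targets seen in cameras A and B (N >= 4, non-degenerate).
        """
        H, inliers = cv2.findHomography(np.float32(feet_a), np.float32(feet_b),
                                        cv2.RANSAC, 3.0)
        return H

    def same_target(H, foot_a, foot_b, tol=15.0):
        """Consistent-labelling test: map A's foot point into B and compare."""
        p = cv2.perspectiveTransform(np.float32([[foot_a]]), H)[0, 0]
        return np.linalg.norm(p - np.float32(foot_b)) < tol
    ```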

  6. Single-pixel camera with one graphene photodetector.

    PubMed

    Li, Gongxin; Wang, Wenxue; Wang, Yuechao; Yang, Wenguang; Liu, Lianqing

    2016-01-11

    Consumer cameras in the megapixel range are ubiquitous, but their improvement is hindered by the poor performance and high cost of traditional photodetectors. Graphene, a two-dimensional micro-/nano-material, has recently exhibited exceptional properties as the sensing element of a photodetector compared with traditional materials. However, it is difficult to fabricate a large-scale array of graphene photodetectors to replace a traditional photodetector array. To take full advantage of the unique characteristics of the graphene photodetector, in this study we integrated a graphene photodetector into a single-pixel camera based on compressive sensing. To begin with, we introduced a method called laser scribing for fabricating the graphene. It produces graphene components in arbitrary patterns more quickly and without the photoresist contamination of traditional methods. Next, we proposed a system for calibrating the optoelectrical properties of micro-/nano-photodetectors based on a digital micromirror device (DMD), which changes the light intensity by controlling the number of individual micromirrors positioned at +12°. The calibration sensitivity is driven by the sum of all micromirrors of the DMD and can be as high as 10^-5 A/W. Finally, the single-pixel camera integrated with one graphene photodetector was used to recover a static image to demonstrate the feasibility of the single-pixel imaging system with the graphene photodetector. A high-resolution image can be recovered with the camera at a sampling rate much lower than the Nyquist rate. The study is the first recorded demonstration of a macroscopic camera with a graphene photodetector. The camera has the potential for high-speed and high-resolution imaging at much lower cost than traditional megapixel cameras.
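
    The single-pixel acquisition model behind such a camera is easy to simulate: the DMD displays M random binary patterns, the photodetector records one number per pattern, and a sparse image is recovered from fewer measurements than pixels. The sketch below is a simplified stand-in for the paper's system, reconstructing a synthetic sparse scene with iterative soft thresholding (ISTA); all sizes and constants are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 16 * 16                     # number of image pixels
    m = 80                          # number of DMD patterns (m < n: sub-Nyquist)

    # Synthetic sparse scene: a few bright points on a dark background.
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=5, replace=False)] = rng.uniform(0.5, 1.0, 5)

    # Random 0/1 DMD patterns; each measurement is one photodetector reading.
    A = rng.integers(0, 2, size=(m, n)).astype(float)
    y = A @ x_true

    # ISTA: minimize 0.5*||A x - y||^2 + lam*||x||_1.
    lam = 0.1
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(500):
        g = x + (A.T @ (y - A @ x)) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

    print("reconstruction error:", np.linalg.norm(x - x_true))
    ```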

  7. A projective surgical navigation system for cancer resection

    NASA Astrophysics Data System (ADS)

    Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald

    2016-03-01

    Near infrared (NIR) fluorescence imaging techniques can provide precise and real-time information about tumor location during a cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we designed a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera under excitation illumination from the LED source. The images are projected back onto the surgical field by the mini projector. Imaging performance of this projective navigation system is characterized in a tumor-simulating phantom, and image-guided surgical resection is demonstrated in an ex-vivo chicken tissue model. In all the experiments, the images projected by the projector match well with the locations of fluorescence emission. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for pre-operative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
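
    The correction step, mapping fluorescence pixels seen by the CCD camera into projector coordinates so the projected overlay lands on the tissue, can be approximated for a roughly planar surgical field by a camera-to-projector homography. A hedged OpenCV sketch, where the calibration point arrays are assumptions rather than the authors' calibration data:

    ```python
    import cv2
    import numpy as np

    # Calibration: the projector displays known markers; the camera observes them.
    proj_pts = np.array([[100, 100], [700, 100], [700, 500], [100, 500]],
                        dtype=np.float32)          # projector pixels (assumed)
    cam_pts = np.array([[152, 133], [645, 121], [660, 468], [140, 480]],
                       dtype=np.float32)           # where the camera saw them

    # Homography from the camera image plane to the projector image plane.
    H, _ = cv2.findHomography(cam_pts, proj_pts)

    def to_projector(fluorescence_img, proj_size=(800, 600)):
        """Warp a camera-space fluorescence image into projector space."""
        return cv2.warpPerspective(fluorescence_img, H, proj_size)

    # Example: warp a synthetic camera frame for projection.
    frame = np.zeros((480, 640), dtype=np.uint8)
    cv2.circle(frame, (300, 250), 40, 255, -1)     # fake fluorescent lesion
    overlay = to_projector(frame)
    ```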

  8. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    PubMed Central

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403
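
    Spot-to-spot matching of the kind described here can be sketched with OpenCV's SIFT implementation and Lowe's ratio test; a spot is "recognized" when enough good matches survive. This is a generic illustration, not the authors' code, and the file names and match threshold are placeholders.

    ```python
    import cv2

    # Load the stored spot image and the current smartphone frame (paths assumed).
    stored = cv2.imread("spot_memo.jpg", cv2.IMREAD_GRAYSCALE)
    current = cv2.imread("current_view.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(stored, None)
    kp2, des2 = sift.detectAndCompute(current, None)

    # Lowe's ratio test on the two nearest neighbours of each descriptor.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    # Trigger playback of the voice memo if the spot is recognized.
    if len(good) > 30:          # threshold is an assumption
        print("Spot recognized: play voice memo")
    ```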

  10. Cost effective system for monitoring of fish migration with a camera

    NASA Astrophysics Data System (ADS)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system consists of two parts: a waterproof box housing the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting; for night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We chose a tablet PC because it is small, cheap, relatively fast and has low power consumption. On the computer we use software with advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating the fish species and how frequently they pass through the fish pass.
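
    A motion-triggered capture loop of the kind described can be sketched with OpenCV's background subtraction. This is a generic illustration of the approach, not the project's software; the camera index, area threshold and file naming are assumptions.

    ```python
    import cv2
    import time

    cap = cv2.VideoCapture(0)                  # analogue camera via capture card
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

    MIN_AREA = 150                             # pixels; tuned low to catch small fish

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)   # suppress noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > MIN_AREA for c in contours):
            # Save a timestamped photograph; a second copy could go to cloud storage.
            cv2.imwrite("fish_%d.jpg" % int(time.time()), frame)
    ```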

  11. X-ray ‘ghost images’ could cut radiation doses

    NASA Astrophysics Data System (ADS)

    Chen, Sophia

    2018-03-01

    On its own, a single-pixel camera captures pictures that are pretty dull: squares that are completely black, completely white, or some shade of gray in between. All it does, after all, is detect brightness. Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.
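
    Computational ghost imaging of this kind recovers an image by correlating known illumination patterns with the readings of a bucket (single-pixel) detector: G(x,y) = ⟨B·P(x,y)⟩ − ⟨B⟩⟨P(x,y)⟩. A minimal simulation, assuming idealized noiseless optics and a synthetic object:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    H = W = 32
    n_patterns = 4000

    # Hidden object (transmission mask): a bright square on a dark field.
    obj = np.zeros((H, W))
    obj[10:22, 12:20] = 1.0

    # Each structured pattern illuminates the object; the single-pixel
    # "bucket" detector records only the total transmitted intensity.
    patterns = rng.random((n_patterns, H, W))
    bucket = np.einsum('nhw,hw->n', patterns, obj)

    # Second-order correlation <B*P> - <B><P> reconstructs the object.
    G = np.einsum('n,nhw->hw', bucket, patterns) / n_patterns \
        - bucket.mean() * patterns.mean(axis=0)

    print("object mean:", G[10:22, 12:20].mean(),
          "background mean:", G[:8, :8].mean())
    ```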

  12. In-situ calibration of nonuniformity in infrared staring and modulated systems

    NASA Astrophysics Data System (ADS)

    Black, Wiley T.

    Infrared cameras can directly measure the apparent temperature of objects, providing thermal imaging. However, the raw output from most infrared cameras suffers from a strong, often limiting noise source called nonuniformity. Manufacturing imperfections in infrared focal planes lead to high pixel-to-pixel sensitivity to electronic bias, focal plane temperature, and other effects. The resulting imagery can only provide useful thermal imaging after a nonuniformity calibration has been performed. Traditionally, these calibrations are performed by momentarily blocking the field of view with a flat-temperature plate or blackbody cavity. However, because the pattern is a coupling of manufactured sensitivities with operational variations, periodic recalibration is required, sometimes on the order of tens of seconds. A class of computational methods called Scene-Based Nonuniformity Correction (SBNUC) has been researched for over 20 years, in which the nonuniformity calibration is estimated in digital processing by analysis of the video stream in the presence of camera motion. The most sophisticated SBNUC methods can completely and robustly eliminate the high-spatial-frequency component of nonuniformity with only an initial reference calibration or potentially no physical calibration. I will demonstrate a novel algorithm that advances these SBNUC techniques to support all spatial frequencies of nonuniformity correction. Long-wave infrared microgrid polarimeters are a class of camera that incorporate a microscale per-pixel wire-grid polarizer directly affixed to each pixel of the focal plane. These cameras have the capability of simultaneously measuring thermal imagery and polarization in a robust integrated package with no moving parts. I will describe the necessary adaptations of my SBNUC method to operate on this class of sensor as well as demonstrate SBNUC performance on LWIR polarimetry video collected on the UA mall.
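
    As background to the methods above, the basic per-pixel correction is a gain/offset (two-point) calibration; scene-based methods then keep updating those terms from the video itself. The sketch below shows the two-point correction plus a crude constant-statistics style offset update under camera motion; it is a simplified stand-in for the registration-based SBNUC algorithms the dissertation develops.

    ```python
    import numpy as np

    def two_point_nuc(raw_cold, raw_hot, t_cold, t_hot):
        """Per-pixel gain/offset from two uniform blackbody views."""
        gain = (t_hot - t_cold) / (raw_hot - raw_cold)
        offset = t_cold - gain * raw_cold
        return gain, offset

    def correct(frame, gain, offset):
        return gain * frame + offset

    def update_offsets(frames, gain, offset, rate=0.05):
        """Constant-statistics style drift update: with enough camera motion,
        every pixel sees similar scene statistics, so per-pixel deviations of
        the temporal mean from the global mean are attributed to offset drift."""
        mean_img = np.mean([correct(f, gain, offset) for f in frames], axis=0)
        return offset - rate * (mean_img - mean_img.mean())
    ```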

  13. Expression transmission using exaggerated animation for Elfoid

    PubMed Central

    Hori, Maiya; Tsuruda, Yu; Yoshimura, Hiroki; Iwai, Yoshio

    2015-01-01

    We propose an expression transmission system using a cellular-phone-type teleoperated robot called Elfoid. Elfoid has a soft exterior that provides the look and feel of human skin, and is designed to transmit the speaker's presence to their communication partner using a camera and microphone. To transmit the speaker's presence, Elfoid sends not only the voice of the speaker but also the facial expression captured by the camera. In this research, facial expressions are recognized using a machine learning technique. Elfoid cannot, however, display facial expressions because of its compactness and a lack of sufficiently small actuator motors. To overcome this problem, facial expressions are displayed using Elfoid's head-mounted mobile projector. In an experiment, we built a prototype system and experimentally evaluated its subjective usability. PMID:26347686

  14. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that it permits the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  15. Astronomy helps advance medical diagnosis techniques

    NASA Astrophysics Data System (ADS)

    2001-11-01

    Effective treatment of cancer relies on the early detection and removal of cancerous cells. Unfortunately, this is when they are hardest to spot. In the case of breast cancer, now the most prevalent form of cancer in the United Kingdom, cancer cells tend to congregate in the lymph nodes, from where they can rapidly spread throughout the rest of the body. Current medical equipment can give doctors only limited information on tissue health. A surgeon must then perform an exploratory operation to try to identify the diseased tissue. If that is possible, the diseased tissue will be removed. If identification is not possible, the doctor may be forced to take away the whole of the lymphatic system. Such drastic treatment can then cause side effects, such as excessive weight gain, because it throws the patient's hormones out of balance. Now, members of the Science Payloads Technology Division of the Research and Science Support Department, at ESA's science, technology and engineering research centre (ESTEC) in the Netherlands, have developed a new X-ray camera that could make on-the-spot diagnoses and pinpoint cancerous areas to guide surgeons. Importantly, it would be a small device that could be used continuously during operations. "There is no photography involved in the camera we envisage. It will be completely digital, so the surgeon will study the whole lymphatic system and the potentially cancerous parts on his monitor. He then decides which parts he removes," says Dr. Tone Peacock, Head of the Science Payloads Technology Division. The ESA team were trying to find a way to make images using high-energy X-rays because some celestial objects give out large quantities of X-rays but little visible light. To see these, astronomers need to use X-ray cameras. Traditionally, this has been a bit of a blind spot for astronomers. ESA's current X-ray telescope, XMM-Newton, is in orbit now, observing low energy, so-called 'soft' X-rays. European scientists have always wanted to follow up XMM-Newton's success with a satellite called XEUS. It would be capable of taking images of the high-energy 'hard' X-rays but a reliable camera has eluded them - until now. For the first time, the ESTEC researchers have produced a microchip, similar to that found in a household video camera but capable of detecting hard X-rays instead of visible light. The key is that, instead of silicon, the new chip is made from a chemical compound called epitaxial gallium arsenide. This new material was developed under the ESA leadership of Dr Marcos Bavdaz to the very demanding requirements of such hard X-ray sensors. The prototype sensor has now successfully completed its extensive tests at a German X-ray test facility (HASYLAB). It may seem surprising that medical imaging is similar to observing high energy X-rays from space. However, hard X-rays are the only type that will pass through the human body. Dr Alan Owens, who is closely involved in the research at ESA, explains: "For the lymphatic system a radioactive tracer which emits X-rays is injected into or near the breast tumour. The tracer focuses on those parts of the system which are cancerous. With a small camera it is therefore possible to image this cancerous tissue during surgery." The ESA team were aware, from an early stage, that the work they were doing could lead to better medical equipment and sought expert advice. "We are talking to the people at Leiden University Medical Centre," explains Owens. "Also they can test and evaluate what we produce." 
A small lightweight X-ray camera would be a very important addition to the set of tools available to the surgeon. Having made the basic camera sensor, the next stage in this work is to develop a system to send the images to television screens in real time. "We are developing that now with our industrial partners, such as Metorex, a research and development company in Finland," says Peacock. Once ESA, which is a non-profit organisation, has developed the technology to make this X-ray camera work, its task is done. The industrial partners will take over, producing a camera for medical use. ESA will adapt its design to provide European astronomers with a new view of the Universe.

  16. HCI∧2 framework: a software framework for multimodal human-computer interaction systems.

    PubMed

    Shen, Jie; Pantic, Maja

    2013-12-01

    This paper presents a novel software framework for development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, called the HCI∧2 Framework, is built upon a publish/subscribe (P/S) architecture. It implements a shared-memory-based data transport protocol for message delivery and a TCP-based system management protocol; the latter ensures that the integrity of the system structure is maintained at runtime. With the inclusion of bridging modules, the HCI∧2 Framework is interoperable with other software frameworks, including Psyclone and ActiveMQ. In addition to the core communication middleware, we also present the integrated development environment (IDE) of the HCI∧2 Framework. It provides a complete graphical environment to support every step in a typical MHCI system development process, including module development, debugging, packaging, and management, as well as whole-system management and testing. The quantitative evaluation indicates that our framework outperforms other similar tools in terms of average message latency and maximum data throughput in a typical single-PC scenario. To demonstrate the HCI∧2 Framework's capabilities in integrating heterogeneous modules, we present several example modules working with a variety of hardware and software. We also present an example of a full system developed using the proposed HCI∧2 Framework, called the CamGame system, which is a computer game based on hand-held marker(s) and low-cost camera(s).
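
    The publish/subscribe pattern at the core of such a framework decouples modules: producers post typed messages to a broker, and consumers receive them without knowing each other. A minimal in-process Python sketch of the pattern follows; the real framework adds shared-memory transport and TCP-based system management, which are not modeled here.

    ```python
    from collections import defaultdict
    from typing import Any, Callable

    class Broker:
        """Tiny in-process publish/subscribe broker."""
        def __init__(self):
            self._subs = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[Any], None]):
            self._subs[topic].append(handler)

        def publish(self, topic: str, message: Any):
            # Deliver to every handler registered for this topic.
            for handler in self._subs[topic]:
                handler(message)

    broker = Broker()
    # A vision module publishes head poses; a fusion module consumes them.
    broker.subscribe("head_pose", lambda msg: print("fusion got", msg))
    broker.publish("head_pose", {"yaw": 12.5, "pitch": -3.0})
    ```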

  17. A new omni-directional multi-camera system for high resolution surveillance

    NASA Astrophysics Data System (ADS)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirrors or fisheye lenses, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth map estimation and high-dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  18. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between the two camera types is the presence of the mirror mechanism, which changes the way the incoming beam reaches the sensor. In this study, two digital cameras, one with a mirror (Nikon D700) and one without (Sony a6000), were used in a close-range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after photograph alignment. In addition, cross sections were created on the 3D models for both data sources; the maximum area difference between them is quite small, as the sections are almost overlapping. The mirrored camera proved more internally consistent with respect to changes in model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced from their photographs, can be used for terrestrial photogrammetric studies.

  19. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a dedicated hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
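
    The software analogue of this multi-exposure pipeline is readily sketched with OpenCV, which implements Debevec's response recovery and merge as well as simple global tone mapping. A hedged sketch, where the three file names and exposure times stand in for the camera's alternating captures:

    ```python
    import cv2
    import numpy as np

    # Three captures of the same scene with alternating exposure times (seconds).
    times = np.array([1/1000, 1/250, 1/60], dtype=np.float32)
    imgs = [cv2.imread(f) for f in ("low.jpg", "mid.jpg", "high.jpg")]

    # Recover the camera response curve, then merge to a radiance map (Debevec).
    calibrate = cv2.createCalibrateDebevec()
    response = calibrate.process(imgs, times)
    merge = cv2.createMergeDebevec()
    hdr = merge.process(imgs, times, response)

    # Global tone mapping for display on a standard LDR monitor.
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite("hdr_tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
    ```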

  20. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  1. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects, etc. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through an earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
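
    The location-to-sound step of such a system can be illustrated by panning a synthesized cue between the ears according to the object's azimuth, using interaural level and time differences. A simplified sketch; the panning law and delay constants are assumptions, not SoundView's actual coding algorithm.

    ```python
    import numpy as np

    FS = 16000  # sample rate, Hz

    def spatialize(mono, azimuth_deg):
        """Pan a mono cue to a stereo signal according to object azimuth.

        azimuth_deg: -90 (far left) .. +90 (far right).
        Uses a constant-power pan plus a small interaural time difference.
        """
        pan = np.deg2rad((azimuth_deg + 90) / 2)          # 0..pi/2
        left, right = np.cos(pan) * mono, np.sin(pan) * mono
        itd = int(abs(azimuth_deg) / 90 * 0.0007 * FS)    # up to ~0.7 ms delay
        if azimuth_deg > 0:                               # sound reaches right ear first
            left = np.pad(left, (itd, 0))[:len(mono)]
        elif azimuth_deg < 0:
            right = np.pad(right, (itd, 0))[:len(mono)]
        return np.stack([left, right], axis=1)

    # Example: a 440 Hz beep standing in for a speech cue, object 40 deg right.
    t = np.linspace(0, 0.3, int(0.3 * FS), endpoint=False)
    stereo = spatialize(0.5 * np.sin(2 * np.pi * 440 * t), azimuth_deg=40)
    ```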

  2. 25 CFR 542.23 - What are the minimum internal control standards for surveillance for Tier A gaming operations?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... trained in the use of the equipment, knowledge of the games, and house rules. (f) Each camera required by... device, the game board, and the activities of the employees responsible for drawing, calling, and entering the balls drawn or numbers selected. (j) Card games. The surveillance system shall record the...

  3. 25 CFR 542.23 - What are the minimum internal control standards for surveillance for Tier A gaming operations?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... trained in the use of the equipment, knowledge of the games, and house rules. (f) Each camera required by... device, the game board, and the activities of the employees responsible for drawing, calling, and entering the balls drawn or numbers selected. (j) Card games. The surveillance system shall record the...

  4. 25 CFR 542.23 - What are the minimum internal control standards for surveillance for Tier A gaming operations?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... trained in the use of the equipment, knowledge of the games, and house rules. (f) Each camera required by... device, the game board, and the activities of the employees responsible for drawing, calling, and entering the balls drawn or numbers selected. (j) Card games. The surveillance system shall record the...

  5. 25 CFR 542.23 - What are the minimum internal control standards for surveillance for Tier A gaming operations?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... trained in the use of the equipment, knowledge of the games, and house rules. (f) Each camera required by... device, the game board, and the activities of the employees responsible for drawing, calling, and entering the balls drawn or numbers selected. (j) Card games. The surveillance system shall record the...

  6. 25 CFR 542.23 - What are the minimum internal control standards for surveillance for Tier A gaming operations?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... trained in the use of the equipment, knowledge of the games, and house rules. (f) Each camera required by... device, the game board, and the activities of the employees responsible for drawing, calling, and entering the balls drawn or numbers selected. (j) Card games. The surveillance system shall record the...

  7. Development of the focal plane PNCCD camera system for the X-ray space telescope eROSITA

    NASA Astrophysics Data System (ADS)

    Meidinger, Norbert; Andritschke, Robert; Ebermayer, Stefanie; Elbs, Johannes; Hälker, Olaf; Hartmann, Robert; Herrmann, Sven; Kimmel, Nils; Schächner, Gabriele; Schopper, Florian; Soltau, Heike; Strüder, Lothar; Weidenspointner, Georg

    2010-12-01

    A so-called PNCCD, a special type of CCD, was developed twenty years ago as focal plane detector for the XMM-Newton X-ray astronomy mission of the European Space Agency ESA. Based on this detector concept and taking into account the experience of almost ten years of operation in space, a new X-ray CCD type was designed by the ‘MPI semiconductor laboratory’ for an upcoming X-ray space telescope, called eROSITA (extended Roentgen survey with an imaging telescope array). This space telescope will be equipped with seven X-ray mirror systems of Wolter-I type and seven CCD cameras, placed in their foci. The instrumentation permits the exploration of the X-ray universe in the energy band from 0.3 up to 10 keV by spectroscopic measurements with a time resolution of 50 ms for a full image comprising 384×384 pixels. Main scientific goals are an all-sky survey and investigation of the mysterious ‘Dark Energy’. The eROSITA space telescope, which is developed under the responsibility of the ‘Max-Planck-Institute for extraterrestrial physics’, is a scientific payload on the new Russian satellite ‘Spectrum-Roentgen-Gamma’ (SRG). The mission is already approved by the responsible Russian and German space agencies. After launch in 2012 the destination of the satellite is Lagrange point L2. The planned observational program takes about seven years. We describe the design of the eROSITA camera system and present important test results achieved recently with the eROSITA prototype PNCCD detector. This includes a comparison of the eROSITA detector with the XMM-Newton detector.

  8. OSMOSIS: a new joint laboratory between SOFRADIR and ONERA for the development of advanced DDCA with integrated optics

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Matallah, Noura; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Jenouvrier, Pierre; Mallet, Eric; Reibel, Yann

    2014-06-01

    Today, both military and civilian applications require miniaturized optical systems in order to give an imaging function to vehicles with small payload capacity. After the development of megapixel focal plane arrays (FPA) with micron-sized pixels, this miniaturization becomes feasible with the integration of optical functions in the detector area. In the field of cooled infrared imaging systems, the detector area is the Detector-Dewar-Cooler Assembly (DDCA). SOFRADIR and ONERA have launched a new research and innovation partnership, called OSMOSIS, to develop disruptive technologies for the DDCA that improve the performance and compactness of optronic systems. With this collaboration, we will break down the technological barriers of the DDCA, a sealed and cooled environment dedicated to the infrared detectors, to explore Dewar-level integration of optics. This technological breakthrough will bring more compact multipurpose thermal imaging products, as well as new thermal capabilities such as 3D imagery or multispectral imagery. Previous developments will be recalled (the SOIE and FISBI cameras) and new developments will be presented. In particular, we will focus on a dual-band MWIR-LWIR camera and a multichannel camera.

  9. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use the images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, called a convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
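
    A body-image matcher of this shape, CNN features plus a distance test between the input and enrolled samples, can be sketched with a pretrained backbone standing in for the paper's trained network. torchvision's ResNet-18 is an assumption for illustration, not the authors' model.

    ```python
    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # Pretrained backbone with the classifier removed: outputs a 512-d embedding.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(pil_image):
        return backbone(preprocess(pil_image).unsqueeze(0)).squeeze(0)

    def is_same_person(img_a, img_b, threshold=0.7):
        """Match by cosine similarity; the threshold is an assumption."""
        a, b = embed(img_a), embed(img_b)
        return torch.nn.functional.cosine_similarity(a, b, dim=0) > threshold
    ```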

  10. HUBBLE'S IMPROVED OPTICS REVEAL INCREDIBLE DETAIL IN GIANT CLOUD OF GAS AND DUST

    NASA Technical Reports Server (NTRS)

    2002-01-01

    An image of a star-forming region in the 30 Doradus nebula, surrounding the dense star cluster R136. The image was obtained using the second generation Wide Field and Planetary Camera (WFPC-2), installed in the Hubble Space Telescope during the STS-61 Servicing Mission. The WFPC-2 contains modified optics to correct for the aberration of Hubble's primary mirror. The new optics will allow the telescope to tackle many of the most important scientific programs for which the telescope was built but which had to be temporarily shelved when the spherical aberration was discovered in 1990. The large picture shows a mosaic of the images taken with WFPC-2's four separate cameras. Three of the cameras, called the Wide Field Cameras, give HST its 'panoramic' view of astronomical objects. A fourth camera, called the Planetary Camera, has a smaller field of view but provides better spatial resolution. The image shows the fields of view of the four cameras combined into a 'chevron' shape, the hallmark of WFPC-2 data. The image shows a portion of a giant cloud of gas and dust in 30 Doradus, which is located in a small neighboring galaxy called the Large Magellanic Cloud about 160,000 light years away from us. The cloud is called an H II region because it is made up primarily of ionized hydrogen excited by ultraviolet light from hot stars. This is an especially interesting H II region because unlike nearby objects which are lit up by only a few stars, such as the Orion Nebula, 30 Doradus is the result of the combined efforts of hundreds of the brightest and most massive stars known. The inset shows a blowup of the star cluster, called R136. Even at the distance to 30 Doradus, WFPC-2's resolution allows objects as small as 25 light days across to be distinguished from their surroundings, revealing the effect of the hot stars on the surrounding gas in unprecedented detail. (For comparison, our solar system is about half a light day across, while the distance to the nearest star beyond the Sun is 4.3 light years.) Once thought to consist of a fairly small number of supermassive stars, R136 was resolved from the ground using 'speckle' techniques into a handful of central objects. Prior to the servicing mission, HST resolved R136 into several hundred stars. Now, preliminary analysis of the images obtained with the WFPC-2 shows that R136 consists of more than 3000 stars with brightness and colors that can be accurately measured. It is these measurements that will provide astronomers with new insights into how clouds of gas suddenly turn into large aggregations of stars. These insights will help astronomers understand how stars in our own Galaxy formed, as well as providing clues about how to interpret observations of distant galaxies which are still in the process of forming. For example, the new data show that at least in the case of R136, stars with masses less than that of our Sun were able to form as rapidly as very massive stars, qualifying this as a true starburst. PHOTO RELEASE NO.: STScI-PR94-04

  11. Model of an optical system's influence on sensitivity of microbolometric focal plane array

    NASA Astrophysics Data System (ADS)

    Gogler, Sławomir; Bieszczad, Grzegorz; Zarzycka, Alicja; Szymańska, Magdalena; Sosnowski, Tomasz

    2012-10-01

    Thermal imagers and the infrared array sensors used in them are subject to a calibration procedure during manufacturing, in which their voltage sensitivity to incident radiation is evaluated. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represents temperature variations across the scene. The detectors used in a thermal camera are illuminated by infrared radiation transmitted through a specialized optical system, and each optical system influences the irradiation distribution across the sensor array. In this article a model is proposed that describes the irradiation distribution across an array sensor working with the optical system used in the calibration set-up, taking into account the optical and geometrical properties of the array set-up. By means of Monte Carlo simulation, a large number of rays was traced to the sensor plane, which allowed the irradiation distribution across the image plane to be determined for different aperture-limiting configurations. The simulated results were compared with a proposed analytical expression. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
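
    A toy version of such a Monte Carlo estimate: sample random points on a circular aperture and accumulate their radiometric contribution at each sensor position, which reproduces the familiar cos⁴-style roll-off of irradiance toward the array edges. This sketch illustrates the method only, not the paper's model; the geometry constants are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    D = 8.0       # aperture (exit pupil) diameter, mm  (assumed)
    d = 20.0      # aperture-to-sensor distance, mm     (assumed)
    N = 200_000   # Monte Carlo samples per sensor position

    def irradiance(x_sensor):
        """Relative irradiance at sensor coordinate (x_sensor, 0) from a uniform
        Lambertian exit pupil: E ~ integral of d^2 / r^4 over the pupil disk."""
        # Uniform samples on the circular aperture.
        r = (D / 2) * np.sqrt(rng.random(N))
        phi = 2 * np.pi * rng.random(N)
        ax, ay = r * np.cos(phi), r * np.sin(phi)
        r4 = ((ax - x_sensor) ** 2 + ay ** 2 + d ** 2) ** 2
        return np.mean(d ** 2 / r4)

    center = irradiance(0.0)
    for x in (0.0, 2.0, 4.0, 6.0):   # mm off-axis across the array
        print(f"x = {x:4.1f} mm  relative irradiance = {irradiance(x)/center:.3f}")
    ```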

  12. Versatile illumination platform and fast optical switch to give standard observation camera gated active imaging capacity

    NASA Astrophysics Data System (ADS)

    Grasser, R.; Peyronneaudi, Benjamin; Yon, Kevin; Aubry, Marie

    2015-10-01

    CILAS, a subsidiary of Airbus Defense and Space, develops, manufactures and sells laser-based optronic equipment for defense and homeland security applications. Part of its activity is related to active systems for threat detection, recognition and identification. Active surveillance and active imaging systems are often required to achieve identification capacity for long-range observation in adverse conditions. In order to ease the deployment of active imaging systems, which are often complex and expensive, CILAS suggests a new concept. It consists of the association of two apparatus working together: on one side, a patented versatile laser platform enables high-peak-power laser illumination for long-range observation; on the other side, a small camera add-on works as a fast optical switch to select only photons with a specific time of flight. The association of the versatile illumination platform and the fast optical switch presents itself as an independent body, a so-called "flash module", giving virtually any passive observation system gated active imaging capacity in the NIR and SWIR.
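
    Range gating of this sort is timing arithmetic: photons returning from range R arrive after t = 2R/c, so opening the switch for a short window selects a depth slice of ΔR = c·Δt/2. A small sketch of the gate calculation, with all numbers purely illustrative:

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def gate_for_slice(range_m, depth_m):
        """Gate delay and width selecting a depth slice centred on range_m."""
        delay_s = 2.0 * range_m / C          # round-trip time to slice centre
        width_s = 2.0 * depth_m / C          # gate width for the slice depth
        return delay_s - width_s / 2.0, width_s

    # Select a 30 m deep slice around a target 2 km away.
    delay, width = gate_for_slice(2000.0, 30.0)
    print(f"open gate {delay * 1e6:.3f} us after the laser pulse "
          f"for {width * 1e9:.0f} ns")
    ```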

  13. Sound imaging of nocturnal animal calls in their natural habitat.

    PubMed

    Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G

    2011-09-01

    We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics using dozens of sound-to-light conversion devices (called "Fireflies") and an off-the-shelf video camera. The Firefly, which consists of a microphone and a light emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record the calls of multiple individuals through the video camera. We conducted two experiments, one indoors and the other in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes Japanese tree frogs' calling behavior and confirms the known behavior: two frogs call either synchronously or in anti-phase synchronization. The field experiment (in a rice paddy where Japanese tree frogs live) also visualizes the same calling behavior, confirming anti-phase synchronization in the field. Experimental results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.

  14. Using a Video Camera to Measure the Radius of the Earth

    ERIC Educational Resources Information Center

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
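
    The underlying geometry: a point at height h on the wall sees the Sun set later than the ground; while the shadow climbs that height, the Earth rotates through θ = ωt, and cos θ = R/(R + h) gives R = h·cos θ/(1 − cos θ) ≈ 2h/θ². A sketch of the calculation under simplifying assumptions (equatorial geometry, Sun on the celestial equator); the measured values are invented for illustration.

    ```python
    import math

    OMEGA = 2 * math.pi / 86164.0   # Earth's sidereal rotation rate, rad/s

    def earth_radius(height_m, rise_time_s):
        """Radius from the time a sunset shadow takes to climb height_m."""
        theta = OMEGA * rise_time_s          # rotation while the shadow climbs
        return height_m * math.cos(theta) / (1.0 - math.cos(theta))

    # Invented example: the shadow climbs a 50 m building in about 54 s.
    print(f"R = {earth_radius(50.0, 54.0) / 1000:.0f} km")
    ```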

  15. Computer-Aided Remote Driving

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    1994-01-01

    System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.

  16. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow images to be acquired at typically up to about 1 million frames per second at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, in which the image storage is incorporated into the sensor chip. Such an architecture is all solid state and contains no movable devices such as occur, for instance, in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the operating conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique. From these tests it has been possible to characterize the camera's behaviour and use this information to improve actual measurements.

  17. A New Digital Holographic Instrument for Measuring Microphysical Properties of Contrails in the SASS (Subsonic Assessment) Program

    NASA Technical Reports Server (NTRS)

    Lawson, R. Paul

    2000-01-01

    SPEC Incorporated designed, built and operated a new instrument, called a pi-Nephelometer, on the NASA DC-8 for the SUCCESS field project. The pi-Nephelometer casts an image of a particle on a 400,000-pixel solid-state camera by freezing the motion of the particle using a 25 ns pulsed, high-power (60 W) laser diode. Unique optical imaging and particle detection systems precisely detect particles and define the depth of field so that at least one particle in the image is almost always in focus. A powerful image processing engine processes frames from the solid-state camera and identifies and records regions of interest (i.e., particle images) in real time. Images of ice crystals are displayed and recorded with 5 micron pixel resolution. In addition, a scattered-light system simultaneously measures the scattering phase function of the imaged particle. The system consists of twenty-eight 1-mm optical fibers connected to microlenses bonded on the surface of avalanche photodiodes (APDs). Data collected with the pi-Nephelometer during the SUCCESS field project were reported in a special issue of Geophysical Research Letters. The pi-Nephelometer provided the basis for development of a commercial imaging probe, called the cloud particle imager (CPI), which has been installed on several research aircraft and used in more than a dozen field programs.

  18. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed, and the technology for data acquisition, calibration, and synthesis has been developing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large-scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and development of a towed camera system for collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and cost-effectiveness of system components. This region-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and of providing cost-effective population estimates of reef fishes along with coincident benthic habitat classification.

  19. Study on real-time images compounded using spatial light modulator

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang

    2007-01-01

    Image compositing technology is widely used in film production. Commonly, compositing is done with image processing algorithms: the useful objects, details or background are first extracted from the source images, and all of this information is then combined into one image. With this approach the film system needs a powerful processor, since the processing is complex, and the composite image is obtained only after some delay. In this paper we introduce a new method of real-time image compositing which performs the composition at the same time as the shot is being filmed. The system is made up of two camera lenses, a spatial light modulator array and an image sensor. The spatial light modulator may be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin-film-transistor liquid crystal display (TFT-LCD), deformable micromirror device (DMD), and so on. First, one camera lens, which we call the first image lens, images the object onto the panel of the spatial light modulator. Second, an image is output to the panel of the spatial light modulator, so that the image of the object and the image output by the modulator are spatially composited on the panel. Third, the other camera lens, which we call the second image lens, images the composite onto the image sensor. After these three steps, the composite image is obtained from the image sensor. Since the spatial light modulator can output images continuously, the compositing is also continuous, and the whole procedure is completed in real time. With this method, a real object can be placed into an invented background by outputting the invented background scene on the spatial light modulator while the real object is imaged by the first image lens; in the same way, an invented object can be placed on a real background by outputting the invented object on the modulator. Most spatial light modulators only modulate light intensity, so only black-and-white images can be composited with a single panel lacking a color filter; a color composite image requires a system like a three-panel spatial light modulator projector. The paper gives the framework of the system's optical design. In all experiments, the spatial light modulator used was liquid crystal on silicon (LCoS). At the end of the paper, some original and composited pictures are given. Although the system has a few shortcomings, we can conclude that compositing images with this system involves no delay for mathematical compositing processing; it is a truly real-time image compositing system.

  20. Pettit runs a drill while looking through a camera mounted on the Nadir window in the U.S. Lab

    NASA Image and Video Library

    2003-04-05

    ISS006-E-44305 (5 April 2003) --- Astronaut Donald R. Pettit, Expedition Six NASA ISS science officer, runs a drill while looking through a camera mounted on the nadir window in the Destiny laboratory on the International Space Station (ISS). The device is called a “barn door tracker”. The drill turns the screw, which moves the camera and its spotting scope.

  1. Electro optical system to measure strains

    NASA Astrophysics Data System (ADS)

    Sciammarella, C. A.; Bhat, G.

    With the advent of so-called speckle interferometry, interferograms of objects can be obtained in real time by using a TV camera as the recording medium. The basic idea of this instrument is to couple photoelectric registration by a TV camera with subsequent electronic processing, to produce an efficient device for the measurement of deformations. This paper presents a new and improved instrument which has an important feature, portability, can be operated in different modes, and is capable of producing interferograms using holography, speckle, and moire methods. The basic features of the instrument are presented and some of the theoretical points at the foundation of its operation are analyzed. Examples are given of its application to moire, speckle, and holographic interferometry.

  2. Visual Odometry for Autonomous Deep-Space Navigation

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of communications. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances the technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and navigation during loss of communications.
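
    The core computation in feature-based visual odometry, relative pose from two views, can be sketched with OpenCV: match features, estimate the essential matrix, and decompose it into rotation and translation (up to scale). This is a generic illustration, not the project's flight code; the intrinsics matrix K and image paths are assumptions.

    ```python
    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Rotation and unit-scale translation between two calibrated views."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

        p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        p2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Essential matrix with RANSAC, then cheirality check to pick (R, t).
        E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
        return R, t     # t is known only up to scale from monocular imagery

    # Example usage with two grayscale frames (paths are placeholders):
    # K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
    # R, t = relative_pose(cv2.imread("f0.png", 0), cv2.imread("f1.png", 0), K)
    ```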

  3. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.

  4. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    PubMed

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability exists only at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for this class of endoscopes. The design and construction of a single-lens CMBF-aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.

  5. Uncertainty Propagation Methods for High-Dimensional Complex Systems

    NASA Astrophysics Data System (ADS)

    Mukherjee, Arpan

    Researchers are developing ever smaller aircraft called Micro Aerial Vehicles (MAVs). The Space Robotics Group has joined the field by developing a dragonfly-inspired MAV. This thesis presents two contributions to this project. The first is the development of a dynamical model of the internal MAV components to be used for tuning design parameters and as a future plant model. This model is derived using the Lagrangian method and differs from others because it accounts for the internal dynamics of the system. The second contribution of this thesis is an estimation algorithm that can be used to determine prototype performance and verify the dynamical model from the first part. Based on the Gauss-Newton Batch Estimator, this algorithm uses a single camera and known points of interest on the wing to estimate the wing kinematic angles. Unlike other single-camera methods, this method is probabilistically based rather than being geometric.
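
    The abstract does not give the estimator's equations; below is a minimal, generic Gauss-Newton batch update of the kind it describes, where h (the wing-angle-to-image measurement model) and jac (its Jacobian) are placeholders for the project's camera model.

    ```python
    import numpy as np

    def gauss_newton(h, jac, z, x0, iters=10):
        """Generic Gauss-Newton batch estimate of parameters x from data z.

        h(x):   predicted measurements (e.g., projected wing marker positions)
        jac(x): Jacobian of h evaluated at x
        z:      stacked camera observations from the whole batch
        """
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iters):
            r = z - h(x)                                  # measurement residual
            dx = np.linalg.lstsq(jac(x), r, rcond=None)[0]
            x += dx
            if np.linalg.norm(dx) < 1e-9:                 # converged
                break
        return x
    ```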

  6. Rock with Odd Coating Beside a Young Martian Crater

    NASA Image and Video Library

    2010-03-24

    This image from the panoramic camera on NASA Mars Exploration Rover Opportunity shows a rock called Chocolate Hills, which the rover found and examined at the edge of a young crater called Concepción.

  7. Camera calibration correction in shape from inconsistent silhouette

    USDA-ARS?s Scientific Manuscript database

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  8. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitation of previous studies on body-based person recognition, which use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction, overcoming the limitations of traditional hand-designed feature extraction methods. Finally, with the image features extracted from the body images, the recognition task is performed by measuring the distance between the input and the enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
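
    The matching step is described only as measuring distance between input and enrolled features. A minimal sketch of that step, assuming Euclidean distance over CNN feature vectors and an illustrative acceptance threshold:

    ```python
    import numpy as np

    def recognize(probe_feat, enrolled_feats, enrolled_ids, threshold=1.0):
        """Match a probe body-image feature against enrolled CNN features.

        Returns the enrolled identity with the smallest Euclidean distance,
        or None if even the best match exceeds the acceptance threshold.
        """
        dists = np.linalg.norm(enrolled_feats - probe_feat, axis=1)
        best = int(np.argmin(dists))
        return enrolled_ids[best] if dists[best] <= threshold else None
    ```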

  9. A new concept of real-time security camera monitoring with privacy protection by masking moving objects

    NASA Astrophysics Data System (ADS)

    Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa

    2006-02-01

    Recently, security monitoring cameras have been increasing rapidly. However, it is normally difficult to know when and where we are being monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies them. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image containing the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes only the unrecognizable or invisible objects. We also introduce a so-called "special viewer" that decrypts and displays the original objects. This special viewer can be used by a limited set of users when necessary, for example for crime investigation. The special viewer allows the user to choose which objects to decode and display. Moreover, the proposed system supports real-time processing, since no future frame is needed to generate a bitstream.
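
    The paper does not specify the image processing used to make objects invisible; one plausible realization is background subtraction followed by replacing foreground pixels with the learned background, sketched below with OpenCV (the encryption and watermarking of the original pixels are omitted):

    ```python
    import cv2

    # Detect moving objects and hide them by painting in the background
    # model. Runs frame by frame, so no future frames are required.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    detectShadows=False)

    def mask_moving_objects(frame):
        fg = subtractor.apply(frame)                  # foreground mask
        fg = cv2.dilate(fg, None, iterations=2)      # close small gaps
        background = subtractor.getBackgroundImage() # valid after warm-up
        out = frame.copy()
        out[fg > 0] = background[fg > 0]             # hide moving pixels
        return out, fg
    ```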

  10. Concurrent initialization for Bearing-Only SLAM.

    PubMed

    Munguía, Rodrigo; Grau, Antoni

    2010-01-01

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more widely used, because they yield a lot of information and are well adapted for embedded systems: they are light, cheap, and power saving. Unlike range sensors, which provide range and angular information, a camera is a projective sensor that measures the bearing of image features; depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms, the bearing-only SLAM methods, which rely mainly on special techniques for feature initialization in order to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented, inspired by combining the complementary advantages of the undelayed and delayed methods that represent the most common approaches to the problem. The key is to use two kinds of feature representation concurrently for the undelayed and delayed stages of the estimation. The simulation results show that the proposed method surpasses the performance of previous schemes.
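
    The abstract does not reproduce the feature representations; for context, here is a sketch of the inverse-depth parameterization commonly used in undelayed bearing-only SLAM (one of the kinds of representation such methods combine), with illustrative axis conventions:

    ```python
    import numpy as np

    def inverse_depth_to_xyz(x0, y0, z0, theta, phi, rho):
        """Convert an inverse-depth feature to a Euclidean 3-D point.

        (x0, y0, z0): camera position when the feature was first observed
        theta, phi:   azimuth/elevation of the observed bearing ray
        rho:          inverse depth (1/range); small rho = distant feature
        """
        m = np.array([np.cos(phi) * np.sin(theta),   # bearing unit vector
                      -np.sin(phi),
                      np.cos(phi) * np.cos(theta)])
        return np.array([x0, y0, z0]) + m / rho
    ```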

  11. Photorefraction Screens Millions for Vision Disorders

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Who would have thought that stargazing in the 1980s would lead to hundreds of thousands of schoolchildren seeing more clearly today? Collaborating with research ophthalmologists and optometrists, Marshall Space Flight Center scientists Joe Kerr and the late John Richardson adapted optics technology for eye screening methods using a process called photorefraction. Photorefraction consists of delivering a light beam into the eyes where it bends in the ocular media, hits the retina, and then reflects as an image back to a camera. A series of refinements and formal clinical studies followed their highly successful initial tests in the 1980s. Evaluating over 5,000 subjects in field tests, Kerr and Richardson used a camera system prototype with a specifically angled telephoto lens and flash to photograph a subject's eye. They then analyzed the image, the cornea and pupil in particular, for irregular reflective patterns. Early tests of the system with 1,657 Alabama children revealed that, while only 111 failed the traditional chart test, Kerr and Richardson's screening system found 507 abnormalities.

  12. SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output

    PubMed Central

    Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.

    2013-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136

  13. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
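
    As a minimal sketch of the shift-and-add refocusing the tutorial describes: each sub-aperture image is translated in proportion to its (u, v) aperture position and the stack is averaged. The parameter alpha selects the synthetic focal plane; integer-pixel shifts are used here for brevity.

    ```python
    import numpy as np

    def refocus(light_field, alpha):
        """Shift-and-add refocusing of a 4-D light field L[u, v, y, x]."""
        U, V, H, W = light_field.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - U // 2) * alpha))   # shift proportional
                dx = int(round((v - V // 2) * alpha))   # to aperture offset
                out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)
    ```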

  14. First closed-loop visible AO test results for the advanced adaptive secondary AO system for the Magellan Telescope: MagAO's performance and status

    NASA Astrophysics Data System (ADS)

    Close, Laird M.; Males, Jared R.; Kopon, Derek A.; Gasho, Victor; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Uomoto, Alan; Hare, Tyson; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Busoni, Lorenzo; Arcidiacono, Carmelo; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando; Argomedo, Javier

    2012-07-01

    The heart of the 6.5 m Magellan AO system (MagAO) is a 585-actuator adaptive secondary mirror (ASM) with <1 ms response times (0.7 ms typically). This adaptive secondary will allow low-emissivity and high-contrast AO science. We fabricated a high-order (561 mode) pyramid wavefront sensor (similar to that now successfully used at the Large Binocular Telescope). The relatively high actuator count (and small projected ~23 cm pitch) allows moderate Strehls to be obtained by MagAO in the “visible” (0.63-1.05 μm). To take advantage of this we have fabricated an AO CCD science camera called "VisAO". Complete “end-to-end” closed-loop lab tests of MagAO achieve a solid, broad-band 37% Strehl (122 nm rms) at 0.76 μm (i’) with the VisAO camera in 0.8” simulated seeing (13 cm r0 at V) with fast 33 mph winds and a 40 m L0, locked on an R=8 mag artificial star. These relatively high visible-wavelength Strehls are enabled by our powerful combination of a next-generation ASM and a pyramid WFS with 400 controlled modes and 1000 Hz sample speeds (similar to that used successfully on-sky at the LBT). Currently only the VisAO science camera is used for lab testing of MagAO, but this high level of measured performance (122 nm rms) promises even higher Strehls with our IR science cameras. On bright (R=8 mag) stars we should achieve very high Strehls in the IR (>70% at H) with the existing MagAO Clio2 (λ=1-5.3 μm) science camera/coronagraph, or even higher (~98% Strehl) in the mid-IR (8-26 μm) with the existing BLINC/MIRAC4 science camera in the future. To eliminate non-common-path vibrations, dispersions, and optical errors, the VisAO science camera is fed by a common-path advanced triplet ADC and is piggy-backed on the pyramid WFS optical board itself. Also, a high-speed shutter can be used to block periods of poor correction. The entire system passed CDR in June 2009, and we finished the closed-loop system-level testing phase in December 2011. Final system acceptance (“pre-ship” review) was passed in February 2012. In May 2012 the entire AO system was successfully shipped to Chile and fully tested and aligned. It is now in storage in the Magellan telescope clean room in anticipation of “First Light”, scheduled for December 2012. An overview of the design, attributes, performance, and schedule for the Magellan AO system and its two science cameras is briefly presented here.
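
    The quoted numbers are self-consistent: with the extended Maréchal approximation S ≈ exp[-(2πσ/λ)²] (a standard rule of thumb, not necessarily the authors' computation), 122 nm rms wavefront error at 0.76 μm gives roughly the reported 37% Strehl:

    ```python
    import numpy as np

    def strehl(sigma_nm, wavelength_nm):
        """Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)**2)."""
        phase_rms = 2 * np.pi * sigma_nm / wavelength_nm   # rms phase, rad
        return np.exp(-phase_rms ** 2)

    print(strehl(122, 760))   # ~0.36, consistent with the reported 37%
    ```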

  15. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent in a small, lightweight, and rugged image-sensing component. Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber-optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens-input units. Fiber-optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low-light-level and/or short-exposure-time situations.

  16. Contour Mapping

    NASA Technical Reports Server (NTRS)

    1995-01-01

    In the early 1990s, the Ohio State University Center for Mapping, a NASA Center for the Commercial Development of Space (CCDS), developed a system for mobile mapping called the GPSVan. While driving, the users can map an area from the sophisticated mapping van equipped with satellite signal receivers, video cameras and computer systems for collecting and storing mapping data. George J. Igel and Company and the Ohio State University Center for Mapping advanced the technology for use in determining the contours of a construction site. The new system reduces the time required for mapping and staking, and can monitor the amount of soil moved.

  17. Rock with Odd Coating Beside a Young Martian Crater, False Color

    NASA Image and Video Library

    2010-03-24

    This false color image from the panoramic camera on NASA Mars Exploration Rover Opportunity shows a rock called Chocolate Hills, which the rover found and examined at the edge of a young crater called Concepción.

  18. Laser Research

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Eastman Kodak Company, Rochester, New York is a broad-based firm which produces photographic apparatus and supplies, fibers, chemicals and vitamin concentrates. Much of the company's research and development effort is devoted to photographic science and imaging technology, including laser technology. Eastman Kodak is using a COSMIC computer program called LACOMA in the analysis of laser optical systems and camera design studies. The company reports that use of the program has provided development time savings and reduced computer service fees.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thurman-Keup, R.; Lumpkin, A. H.; Thangaraj, J.

    FAST is a facility at Fermilab that consists of a photoinjector, two superconducting capture cavities, one superconducting ILC-style cryomodule, and a small ring for studying non-linear, integrable beam optics called IOTA. This paper discusses the layout of the optical transport system that provides optical radiation to an externally located streak camera for bunch length measurements, and THz radiation to a Martin-Puplett interferometer, also for bunch length measurements. It accepts radiation from two synchrotron radiation ports in a chicane bunch compressor and from a diffraction/transition radiation screen downstream of the compressor. It also has the potential to access the signal from a transition radiation screen or YAG screen after the spectrometer magnet for measurements of energy-time correlations. Initial results from both the streak camera and the Martin-Puplett interferometer will be presented.

  20. Obstacles encountered in the development of the low vision enhancement system.

    PubMed

    Massof, R W; Rickman, D L

    1992-01-01

    The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.

  1. Data annotation, recording and mapping system for the US open skies aircraft

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, B.W.; Goede, W.F.; Farmer, R.G.

    1996-11-01

    This paper discusses the system developed by Northrop Grumman for the Defense Nuclear Agency (DNA), US Air Force, and the On-Site Inspection Agency (OSIA) to comply with the data annotation and reporting provisions of the Open Skies Treaty. This system, called the Data Annotation, Recording and Mapping System (DARMS), has been installed on the US OC-135 and meets or exceeds all annotation requirements of the Open Skies Treaty. The Open Skies Treaty, which will enter into force in the near future, allows any of the 26 signatory countries to fly fixed-wing aircraft with imaging sensors over any of the other treaty participants, on very short notice and with no restricted flight areas. Sensor types presently allowed by the treaty are: optical framing and panoramic film cameras; video cameras, ranging from analog PAL color television cameras to more sophisticated digital monochrome and color line-scanning or framing cameras; infrared line scanners; and synthetic aperture radars. Each sensor type has specific performance parameters that are limited by the treaty, as well as specific annotation requirements that must be met upon full entry into force. DARMS supports US compliance with the Open Skies Treaty by means of three subsystems: the Data Annotation Subsystem (DAS), which annotates sensor media with data obtained from the sensors and the aircraft's avionics system; the Data Recording System (DRS), which records all sensor and flight events on magnetic media for later use in generating Treaty-mandated mission reports; and the Dynamic Sensor Mapping Subsystem (DSMS), which provides observers and sensor operators with a real-time moving-map display of the progress of the mission, complete with instantaneous and cumulative sensor coverage. 7 figs.

  2. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... acceptable. (b) Game play standards. (1) The functions of seller and payout verifier shall be segregated... selected in the bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera...

  3. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... section, as approved by the Tribal gaming regulatory authority, will be acceptable. (b) Game play... bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera equipment, each ball...

  4. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Game play standards. (1) The functions of seller and payout verifier shall be segregated. Employees who... selected in the bingo game. (5) Each ball shall be shown to a camera immediately before it is called so that it is individually displayed to all customers. For speed bingo games not verified by camera...

  5. Hydrogen Flame Imaging System Soars to New, Different Heights

    NASA Technical Reports Server (NTRS)

    2002-01-01

    When Judy and Dave Duncan of Auburn, Calif.-based Duncan Technologies Inc. (DTI) developed their color hydrogen flame imaging system in the early 1990s, their market prospects were limited. 'We talked about commercializing the technology in the hydrogen community, but we also looked at commercialization on a much broader aspect. While there were some hydrogen applications, the market was not large enough to support an entire company; also, safety issues were a concern,' said Judy Duncan, owner and CEO of Duncan Technologies. Using the basic technology developed under the Small Business Innovation Research (SBIR) program, DTI conducted market research, identified other applications, formulated a plan for next-generation development, and implemented a far-reaching marketing strategy. 'We took that technology, reinvested our own funds and energy into a second-generation design of the overall camera electronics, and deployed that basic technology initially in a series of what we call multi-spectral cameras: cameras that could image in both the visible range and the infrared,' explains Duncan. 'The SBIR program allowed us to develop the technology to do a 3CCD camera, which very few companies in the world do, particularly not small companies. Because we designed our own prism and specified the coatings, as we had for the hydrogen application, we were able to create a custom spectral configuration which could support varying types of research and applications.' As a result, Duncan Technologies Inc. of Auburn, Calif., has achieved a milestone of $1 million in sales.

  6. Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W.

    2016-01-01

    A description of the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit-mounted camera for recording still images, video, and audio field notes.

  7. Concurrent Initialization for Bearing-Only SLAM

    PubMed Central

    Munguía, Rodrigo; Grau, Antoni

    2010-01-01

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more widely used, because they yield a lot of information and are well adapted for embedded systems: they are light, cheap, and power saving. Unlike range sensors, which provide range and angular information, a camera is a projective sensor that measures the bearing of image features; depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms, the bearing-only SLAM methods, which rely mainly on special techniques for feature initialization in order to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented, inspired by combining the complementary advantages of the undelayed and delayed methods that represent the most common approaches to the problem. The key is to use two kinds of feature representation concurrently for the undelayed and delayed stages of the estimation. The simulation results show that the proposed method surpasses the performance of previous schemes. PMID:22294884

  8. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

    2011-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297

  9. Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask

    NASA Astrophysics Data System (ADS)

    Morel, Sébastien

    2004-09-01

    A new concept of a photon-counting camera for fast, low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (a photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing PAPA (Precision Analog Photon Address) camera, which was designed for visible photons; the improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signals given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called DIAMICON (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
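
    One property worth noting is why a Gray-code mask suits photon addressing: adjacent positions differ in exactly one bit, so a spot straddling two mask cells corrupts the address by at most one position. Decoding the detector bits back to a binary position index uses the standard Gray-to-binary fold (the bit ordering here is an assumption):

    ```python
    def gray_to_binary(gray):
        """Decode a Gray-code word (int) to its binary position index."""
        binary = gray
        mask = gray >> 1
        while mask:          # fold higher bits down via XOR
            binary ^= mask
            mask >>= 1
        return binary

    assert gray_to_binary(0b110) == 4   # Gray 110 encodes position 4
    ```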

  10. Performance of Hayabusa2 DCAM3-D Camera for Short-Range Imaging of SCI and Ejecta Curtain Generated from the Artificial Impact Crater Formed on Asteroid 162173 Ryugu (1999 JU3)

    NASA Astrophysics Data System (ADS)

    Ishibashi, K.; Shirai, K.; Ogawa, K.; Wada, K.; Honda, R.; Arakawa, M.; Sakatani, N.; Ikeda, Y.

    2017-07-01

    Deployable Camera 3-D (DCAM3-D) is a small high-resolution camera on Deployable Camera 3 (DCAM3), one of the Hayabusa2 instruments. Hayabusa2 will explore asteroid 162173 Ryugu (1999 JU3) and conduct an impact experiment using a liner-shooting device called the Small Carry-on Impactor (SCI). DCAM3 will be detached from the Hayabusa2 spacecraft and will observe the impact experiment. The purposes of the observation are to determine the impact conditions, to estimate the surface structure of asteroid Ryugu, and to understand the physics of impact phenomena on low-gravity bodies. DCAM3-D requires high imaging performance because it has to image and detect multiple targets of different scale and radiance: the faint SCI before the shot, from a distance of 1 km; the bright ejecta generated by the impact; and the asteroid itself. In this paper we report the evaluation of the performance of the CMOS imaging sensor and the optical system of DCAM3-D, and describe the calibration of DCAM3-D. We confirmed that the imaging performance of DCAM3-D satisfies the values required to achieve the purposes of the observation.

  11. An Online Tilt Estimation and Compensation Algorithm for a Small Satellite Camera

    NASA Astrophysics Data System (ADS)

    Lee, Da-Hyun; Hwang, Jai-hyuk

    2018-04-01

    In the case of a satellite camera designed to execute an Earth observation mission, even after a pre-launch precision alignment process has been carried out, misalignment will occur due to external factors during launch and in the operating environment. In particular, for high-resolution satellite cameras, which require submicron alignment accuracy between optical components, misalignment is a major cause of image quality degradation. To compensate for this, most high-resolution satellite cameras undergo a precise realignment process, called refocusing, before and during operation. However, conventional Earth observation satellites execute refocusing only for de-space errors. Thus, in this paper, an online tilt estimation and compensation algorithm that can be applied after de-space correction is proposed. Although the sensitivity of optical performance degradation to misalignment is highest for de-space, the MTF can be increased further by correcting tilt after refocusing. The proposed algorithm estimates the amount of tilt from star images and carries out automatic tilt correction with a compensation mechanism that gives angular motion to the secondary mirror. Crucially, the algorithm runs as an online processing system, so it can operate without communication with the ground.

  12. Live video monitoring robot controlled by web over internet

    NASA Astrophysics Data System (ADS)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots; robots can perform tasks where humans cannot. Robots have huge applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times, where humans are not efficient. Generally, a robot is a mix of electronic, electrical, and mechanical engineering and can do tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot; this "robot vision" helps in monitoring security systems and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web control so the robot can move left, right, forward, and back while streaming video. As we move to the smart environment, or Internet of Things (IoT), the system developed here connects over the internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B acts as the heart of the robot; the motors and a Raspberry Pi surveillance camera are connected to it.
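
    As a minimal sketch of the control path described (a browser request driving motor pins through the Raspberry Pi), assuming Flask and RPi.GPIO, with hypothetical pin numbers and routes; the video stream itself would be served separately:

    ```python
    from flask import Flask
    import RPi.GPIO as GPIO

    LEFT, RIGHT = 17, 18              # motor-driver input pins (assumed wiring)
    GPIO.setmode(GPIO.BCM)
    GPIO.setup([LEFT, RIGHT], GPIO.OUT)

    app = Flask(__name__)

    @app.route("/move/<direction>")
    def move(direction):
        # Drive both motors for forward motion, or one motor to turn.
        GPIO.output(LEFT, direction in ("forward", "right"))
        GPIO.output(RIGHT, direction in ("forward", "left"))
        return "ok"

    app.run(host="0.0.0.0", port=8000)   # reachable from a phone browser
    ```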

  13. Detection and enforcement of failure-to-yield in an emergency vehicle preemption system

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron (Inventor); Wickline, Richard (Inventor)

    2007-01-01

    An intersection controlled by an intersection controller receives trigger signals from on-coming emergency vehicles responding to an emergency call. The intersection controller initiates surveillance of the intersection via cameras installed at the intersection in response to a received trigger signal. The surveillance may begin immediately upon receipt of the trigger signal from an emergency vehicle, or may wait until the intersection controller determines that the signaling emergency vehicle is in the field of view of the cameras at the intersection. Portions of the captured images are tagged by the intersection controller based on tag signals transmitted by the vehicle or based on detected traffic patterns that indicate a potential traffic violation. The captured images are downloaded to a processing facility that analyzes the images and automatically issues citations for captured traffic violations.

  14. The development of a virtual camera system for astronaut-rover planetary exploration.

    PubMed

    Platt, Donald W; Boy, Guy A

    2012-01-01

    A virtual assistant is being developed for use by astronauts as they use rovers to explore the surfaces of other planets. This interactive database, called the Virtual Camera (VC), gives the user better situational awareness for exploration. It can be used for training, data analysis, and augmentation of actual surface exploration. This paper describes the development efforts and the human-computer interaction considerations for implementing a first-generation VC on a tablet mobile computer device. Scenarios for use will be presented. Evaluation and success criteria, such as efficiency (in terms of processing time and precision of situational awareness), learnability, usability, and robustness, will also be presented. Initial testing and the impact of HCI design considerations on manipulation and improvement in situational awareness using a prototype VC will be discussed.

  15. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  16. The High Energy Detector of Simbol-X

    NASA Astrophysics Data System (ADS)

    Meuris, A.; Limousin, O.; Lugiez, F.; Gevin, O.; Blondel, C.; Le Mer, I.; Pinsard, F.; Cara, C.; Goetschy, A.; Martignac, J.; Tauzin, G.; Hervé, S.; Laurent, P.; Chipaux, R.; Rio, Y.; Fontignie, J.; Horeau, B.; Authier, M.; Ferrando, P.

    2009-05-01

    The High Energy Detector (HED) is one of the three detection units on board the Simbol-X detector spacecraft. It is placed below the Low Energy Detector so as to collect focused photons in the energy range from 8 to 80 keV. It consists of a mosaic of 64 independent cameras, divided into 8 sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel cadmium telluride (CdTe) detector with full-custom front-end electronics into a single component. The status of the HED design will be reported. The promising results obtained from the first micro-camera prototypes, called Caliste 64 and Caliste 256, will be presented to illustrate the expected performance of the instrument.

  17. Kotov during Albedo Experiment in the SM

    NASA Image and Video Library

    2013-11-18

    ISS038-E-005022 (20 Nov. 2013) --- At a window in the International Space Station’s Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth’s albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station’s power supply. The light reflection phenomenon is measured in units called albedo.

  18. Kotov during Albedo Experiment in the SM

    NASA Image and Video Library

    2013-11-18

    ISS038-E-005014 (20 Nov. 2013) --- At a window in the International Space Station’s Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth’s albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station’s power supply. The light reflection phenomenon is measured in units called albedo.

  19. Kotov during Albedo Experiment in the SM

    NASA Image and Video Library

    2013-11-18

    ISS038-E-005023 (20 Nov. 2013) --- At a window in the International Space Station’s Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth’s albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station’s power supply. The light reflection phenomenon is measured in units called albedo.

  20. Kotov during Albedo Experiment in the SM

    NASA Image and Video Library

    2013-11-18

    ISS038-E-005031 (20 Nov. 2013) --- At a window in the International Space Station’s Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth’s albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station’s power supply. The light reflection phenomenon is measured in units called albedo.

  1. Kotov during Albedo Experiment in the SM

    NASA Image and Video Library

    2013-11-18

    ISS038-E-005016 (20 Nov. 2013) --- At a window in the International Space Station’s Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth’s albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station’s power supply. The light reflection phenomenon is measured in units called albedo.

  2. Kotov during Albedo Experiment in the SM

    NASA Image and Video Library

    2013-11-18

    ISS038-E-005019 (20 Nov. 2013) --- At a window in the International Space Station’s Zvezda Service Module, Russian cosmonaut Oleg Kotov, Expedition 38 commander, uses a digital camera photospectral system to perform a session for the Albedo Experiment. The experiment measures Earth’s albedo, or the amount of solar radiation reflected from the surface, in the hopes to develop methods to harness the reflected radiation to supplement the station’s power supply. The light reflection phenomenon is measured in units called albedo.

  3. Development of infrared goggles and prototype

    NASA Astrophysics Data System (ADS)

    Tsuchimoto, Kouzou; Komatsubara, Shigeyuki; Fujikawa, Masaru; Otsuka, Toshiaki; Kan, Moriyasu; Matsumura, Norihide

    2006-05-01

    We aimed to develop a hands-free, practical wearable thermography system that does not hinder the walking or working of the person wearing it. We installed a recently developed small-format camera core module into a fire fighter's helmet and added a radio image-transmission function to the equipment. We combined this thermography with a see-through head-mounted display and called it "Infrared Goggles". A prototype was developed for a verification test of a lifesaving support system in fire-fighting activities.

  4. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full-HD depth image with depth accuracy on the mm scale, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full-HD depth images simultaneously. The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image-sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, the 3D camera system prototype, and image test results.
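
    The abstract omits the depth computation; the standard continuous-wave TOF relation (not necessarily the authors' exact processing) converts the measured modulation phase to depth, and at 20 MHz it also fixes the unambiguous range at c/2f = 7.5 m:

    ```python
    import math

    C = 299_792_458.0     # speed of light, m/s
    F_MOD = 20e6          # optical-shutter modulation frequency, Hz

    def tof_depth(phase_rad):
        """Continuous-wave TOF: depth d = c * phi / (4 * pi * f_mod)."""
        return C * phase_rad / (4 * math.pi * F_MOD)

    # The unambiguous range at 20 MHz is c / (2 * f_mod) = 7.5 m.
    print(tof_depth(math.pi))   # half the unambiguous range: ~3.75 m
    ```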

  5. Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter.

    PubMed

    Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo

    2017-02-01

    In uncooled long-wave IR camera systems, the temperature of a focal plane array (FPA) is variable along with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly affected by the FPA temperature, obviously changes as well, resulting in reduced image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are called to update the current offset parameters via a temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [minimizing the sum of the squares of errors algorithm (MSSE), template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly trace the response drift of the detector units when the FPA temperature changes. The quality of the proposed algorithm is as good as MSSE, while the processing time is as short as TBS, which means the proposed algorithm is good for real-time control and at the same time has a high correction effect.
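
    As a minimal sketch of the correction described (variable names assumed; the paper's exact interpolation scheme may differ): gains come from two-point calibration, and the offset applied at run time is linearly interpolated between the two stored tables that bracket the current FPA temperature.

    ```python
    import numpy as np

    def correct(raw, gain, offset_tables, table_temps, t_fpa):
        """Shutterless NUC: corrected = gain * raw + offset(T_fpa).

        offset_tables: (N, H, W) offsets measured at N known FPA temperatures
        table_temps:   (N,) corresponding FPA temperatures, ascending
        """
        i = np.clip(np.searchsorted(table_temps, t_fpa),
                    1, len(table_temps) - 1)
        t0, t1 = table_temps[i - 1], table_temps[i]
        w = (t_fpa - t0) / (t1 - t0)        # interpolation weight in [0, 1]
        offset = (1 - w) * offset_tables[i - 1] + w * offset_tables[i]
        return gain * raw + offset
    ```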

  6. Immersive telepresence system using high-resolution omnidirectional movies and a locomotion interface

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu

    2004-05-01

    Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real-environment images is expected to be used in the fields of entertainment, medicine, education, and so on. This paper describes a novel telepresence system that enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene.

  7. A digital system for surface reconstruction

    USGS Publications Warehouse

    Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.

    1996-01-01

    A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.

  8. The GISMO-2 Bolometer Camera

    NASA Technical Reports Server (NTRS)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; hide

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is operating successfully at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  9. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can be presented to the host as low-bandwidth information rather than the high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  10. Middle infrared (wavelength range: 8 μm-14 μm) 2-dimensional spectroscopy (total weight with electrical controller: 1.7 kg, total cost: less than 10,000 USD) so-called hyperspectral camera for unmanned air vehicles like drones

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoyuki; Saito, Tsubasa; Ogawa, Satoru; Ishimaru, Ichiro

    2016-05-01

    We developed a palm-size (optical unit: 73[mm]×102[mm]×66[mm]), lightweight (total weight with electrical controller: 1.7[kg]) middle-infrared (wavelength range: 8[μm]-14[μm]) 2-dimensional spectrometer for UAVs (Unmanned Air Vehicles) such as drones, and successfully demonstrated flights with the developed hyperspectral camera mounted on a multi-copter drone on 15 Sep. 2015 in Kagawa prefecture, Japan. We had previously proposed 2-dimensional imaging-type Fourier spectroscopy, a near-common-path temporal phase-shift interferometer. We install a variable phase shifter on the optical Fourier-transform plane of an infinity-corrected imaging optical system. The variable phase shifter is configured with a movable mirror and a fixed mirror; the movable mirror is actuated by an impact-drive piezoelectric device (stroke: 4.5[mm], resolution: 0.01[μm], maker: Technohands Co., Ltd., type: XDT50-45, price: around 1,000 USD). This realizes wavefront-division, near-common-path interferometry with strong robustness against mechanical vibration, so the palm-size Fourier spectrometer requires no anti-vibration system. We were also able to utilize a small, low-cost middle-infrared camera, an uncooled VOx microbolometer array (pixel array: 336×256, pixel pitch: 17[μm], frame rate: 60[Hz], maker: FLIR, type: Quark 336, price: around 5,000 USD), and the apparatus can be operated by a single-board computer (Raspberry Pi). Thus, the total cost was less than 10,000 USD. We joined the KAMOME-PJ (Kanagawa Advanced MOdule for Material Evaluation Project) with DRONE FACTORY Corp., KUUSATSU Corp., and Fuji Imvac Inc., and successfully obtained middle-infrared spectroscopic imaging from a multi-copter drone.
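
    The spectral recovery step is standard Fourier-transform spectroscopy: the spectrum is the Fourier transform of the interferogram recorded against optical path difference (OPD). A minimal sketch with illustrative variable names:

    ```python
    import numpy as np

    def spectrum_from_interferogram(interferogram, opd_step_um):
        """Recover a spectrum from a phase-shift interferogram (sketch).

        interferogram: intensity samples versus OPD
        opd_step_um:   OPD increment per sample, in micrometers
        Returns (wavenumber axis in cycles/um, spectral magnitude).
        """
        ig = interferogram - np.mean(interferogram)      # remove DC bias
        spec = np.abs(np.fft.rfft(ig))
        sigma = np.fft.rfftfreq(len(ig), d=opd_step_um)  # wavenumber axis
        return sigma, spec
    ```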

  11. Proposed tethered unmanned aerial system for the detection of pollution entering the Chesapeake Bay area

    NASA Astrophysics Data System (ADS)

    Goodman, J.; McKay, J.; Evans, W.; Gadsden, S. Andrew

    2016-05-01

    This paper is based on a proposed unmanned aerial system platform that is to be outfitted with high-resolution sensors. The proposed system is to be tethered to a movable ground station, which may be a research vessel or some form of ground vehicle (e.g., car, truck, or rover). The sensors include, at a minimum: a camera, an infrared sensor, a thermal camera, a normalized difference vegetation index (NDVI) camera, a global positioning system (GPS) receiver, and a light-based radar (LIDAR). The purpose of this paper is to provide an overview of existing methods for detecting pollution from failing septic systems and to introduce the proposed system. Future work will examine the high-resolution data from the sensors and integrate the data through a process called information fusion. Typically, this process is done using the popular and well-published Kalman filter (or its nonlinear formulations, such as the extended Kalman filter). However, future work will consider a new strategy based on variable-structure estimation for the information fusion portion of the data processing. It is hypothesized that fusing data from the thermal and NDVI sensors will be more accurate and reliable for a multitude of applications, including the detection of pollution entering the Chesapeake Bay area.
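
    For context, the Kalman-filter fusion the paper refers to reduces, in the scalar case, to a variance-weighted blend of prediction and measurement; the sketch below fuses two noisy readings of one quantity (the proposed variable-structure estimator would replace this update):

    ```python
    def kalman_update(x, P, z, R):
        """One scalar Kalman measurement update.

        x, P: state estimate and its variance
        z, R: sensor measurement and its noise variance
        """
        K = P / (P + R)          # Kalman gain
        x = x + K * (z - x)      # blend estimate with measurement
        P = (1 - K) * P          # reduced uncertainty after fusion
        return x, P

    # Fuse two sensor readings of the same quantity:
    x, P = 20.0, 4.0                       # prior (e.g., thermal-derived)
    x, P = kalman_update(x, P, 22.0, 1.0)  # NDVI-derived measurement
    ```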

  12. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or by poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision-making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™-equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA-based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
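
    The record lists ICE among HALO's functions without describing the algorithm. One widely used contrast-enhancement technique, shown purely for illustration and not claimed to be HALO's method, is CLAHE (file name below is illustrative):

    ```python
    # One common image contrast enhancement (ICE) approach, CLAHE, shown in
    # software for illustration only; the record does not specify HALO's
    # actual enhancement algorithm.
    import cv2

    gray = cv2.imread("low_contrast_frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative file
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    cv2.imwrite("enhanced.png", enhanced)
    ```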

  13. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
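
    The quoted repeat period of slightly more than 136 years is consistent with a time code whose seconds field is a 32-bit counter; this is an inference from the figure, not something the record states:

    ```python
    # A 32-bit seconds counter wraps after 2**32 s, consistent with the quoted
    # repeat period of "slightly more than 136 years" (assumption: the record
    # does not state the counter width).
    seconds_per_year = 365.25 * 24 * 3600
    print(2**32 / seconds_per_year)   # ~136.1 years
    ```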

  14. Combining shearography and interferometric fringe projection in a single device for complete control of industrial applications

    NASA Astrophysics Data System (ADS)

    Blain, Pascal; Michel, Fabrice; Piron, Pierre; Renotte, Yvon; Habraken, Serge

    2013-08-01

    Noncontact optical measurement methods are essential tools in many industrial and research domains. A family of new noncontact optical measurement methods has been developed, based on the polarization-state splitting technique and on monochromatic light projection as a way to overcome ambient lighting for in-situ measurement. Recent work on a birefringent element, a Savart plate, allows one to build a more flexible and robust interferometer. This interferometer is a multipurpose metrological device. On the one hand, the interferometer can be set in front of a charge-coupled device (CCD) camera; this optical measurement system is called a shearography interferometer and allows one to measure micro-displacements between two states of the studied object under coherent lighting. On the other hand, by producing and shifting multiple sinusoidal Young's interference patterns with this interferometer and using a CCD camera, it is possible to build a three-dimensional structured-light profilometer.
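
    Fringe-projection profilometers of this kind typically recover a wrapped phase map from N phase-shifted patterns; a minimal 4-step sketch on synthetic data follows (illustrative; the paper's exact processing chain is not given in the record):

    ```python
    # Standard 4-step phase-shifting recovery (illustrative; not necessarily the
    # authors' exact method). Four patterns shifted by pi/2 give the wrapped
    # phase at each pixel via an arctangent.
    import numpy as np

    h, w = 4, 6
    true_phase = np.linspace(0, 2 * np.pi, h * w).reshape(h, w)
    shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
    frames = [100 + 50 * np.cos(true_phase + s) for s in shifts]  # synthetic images

    I0, I1, I2, I3 = frames
    wrapped = np.arctan2(I3 - I1, I0 - I2)   # wrapped phase in (-pi, pi]
    ```

    The wrapped phase would then be unwrapped and converted to height by triangulation against the projection geometry.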

  15. Phoenix Robotic Arm's Workspace After 90 Sols

    NASA Technical Reports Server (NTRS)

    2008-01-01

    During the first 90 Martian days, or sols, after its May 25, 2008, landing on an arctic plain of Mars, NASA's Phoenix Mars Lander dug several trenches in the workspace reachable with the lander's robotic arm.

    The lander's Surface Stereo Imager camera recorded this view of the workspace on Sol 90, early afternoon local Mars time (overnight Aug. 25 to Aug. 26, 2008). The shadow of the camera itself, atop its mast, is just left of the center of the image and roughly a third of a meter (one foot) wide.

    The workspace is on the north side of the lander. The trench just to the right of center is called 'Neverland.'

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  16. Application of Stereo Vision to the Reconnection Scaling Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.

    The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and lab plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma-filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
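
    For reference, stereo triangulation from two calibrated cameras reduces to a small linear least-squares problem. A self-contained sketch with hypothetical camera matrices (not the RSX setup):

    ```python
    # Minimal linear (DLT) triangulation from two calibrated cameras
    # (illustrative; the camera matrices here are hypothetical).
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Return the 3D point X minimizing the algebraic error A @ X = 0."""
        A = np.vstack([
            uv1[0] * P1[2] - P1[0],
            uv1[1] * P1[2] - P1[1],
            uv2[0] * P2[2] - P2[0],
            uv2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]          # dehomogenize

    K = np.diag([800.0, 800.0, 1.0])              # shared intrinsics (assumed)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    t = np.array([[-0.2], [0.0], [0.0]])          # 20 cm baseline along x (assumed)
    P2 = K @ np.hstack([np.eye(3), t])

    X_true = np.array([0.1, -0.05, 2.0, 1.0])     # probe tip position, meters
    uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, uv1, uv2))          # ~[0.1, -0.05, 2.0]
    ```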

  17. Hubble Team Unveils Most Colorful View of Universe Captured by Space Telescope

    NASA Image and Video Library

    2014-06-04

    Astronomers using NASA's Hubble Space Telescope have assembled a comprehensive picture of the evolving universe – among the most colorful deep space images ever captured by the 24-year-old telescope. Researchers say the image, in a new study called the Ultraviolet Coverage of the Hubble Ultra Deep Field, provides the missing link in star formation. The Hubble Ultra Deep Field 2014 image is a composite of separate exposures taken from 2003 to 2012 with Hubble's Advanced Camera for Surveys and Wide Field Camera 3. Credit: NASA/ESA

  18. CdTe Based Hard X-ray Imager Technology For Space Borne Missions

    NASA Astrophysics Data System (ADS)

    Limousin, Olivier; Delagnes, E.; Laurent, P.; Lugiez, F.; Gevin, O.; Meuris, A.

    2009-01-01

    CEA Saclay has recently developed an innovative technology for CdTe-based pixelated hard X-ray imagers with high spectral performance and high timing resolution for efficient background rejection when the camera is coupled to an active veto shield. This development was carried out in an R&D program supported by CNES (French National Space Agency) and has been optimized toward the Simbol-X mission requirements. In the latter telescope, the hard X-ray imager is 64 cm² and is equipped with 625 µm pitch pixels (16384 independent channels) operating at -40°C in the range of 4 to 80 keV. The camera we demonstrate in this paper consists of a mosaic of 64 independent cameras, divided into 8 independent sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel cadmium telluride (CdTe) detector with full-custom front-end electronics into a single 1 cm² component, juxtaposable on its four sides. Recently, promising results have been obtained from the first micro-camera prototypes, called Caliste 64, and will be presented to illustrate the capabilities of the device as well as the expected performance of an instrument based on it. The modular design of Caliste makes it possible to consider extended developments toward an IXO-type mission, according to its specific scientific requirements.

  19. The sensory power of cameras and noise meters for protest surveillance in South Korea.

    PubMed

    Kim, Eun-Sung

    2016-06-01

    This article analyzes sensory aspects of material politics in social movements, focusing on two police tools: evidence-collecting cameras and noise meters for protest surveillance. Through interviews with Korean political activists, this article examines the relationship between power and the senses in the material culture of Korean protests and asks why cameras and noise meters appeared in order to control contemporary peaceful protests in the 2000s. The use of cameras and noise meters in contemporary peaceful protests evidences the exercise of what Michel Foucault calls 'micro-power'. Building on material culture studies, this article also compares the visual power of cameras with the sonic power of noise meters, in terms of a wide variety of issues: the control of things versus words, impacts on protest size, differential effects on organizers and participants, and differences in timing regarding surveillance and punishment.

  20. Real-time FPGA-based radar imaging for smart mobility systems

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Neri, Bruno

    2016-04-01

    The paper presents an X-band FMCW (Frequency Modulated Continuous Wave) radar imaging system, called X-FRI, for surveillance in smart mobility applications. X-FRI detects the presence of targets (e.g. obstacles in a railway or urban road crossing, or ships in a small harbor), as well as their speed and position. With respect to alternative solutions based on LIDAR or camera systems, X-FRI operates in real time even in bad lighting and weather conditions, night and day. The radio-frequency transceiver is realized with COTS (Commercial Off The Shelf) components on a single board. An FPGA-based baseband platform allows for real-time radar image processing.
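
    The record gives no chirp parameters, but the FMCW principle that underlies the ranging is compact enough to state: the beat frequency between the transmitted and received chirps is proportional to target range. A sketch with assumed parameters:

    ```python
    # FMCW ranging principle: the beat frequency between transmitted and
    # received chirps gives the range, R = c * f_b * T / (2 * B).
    # Chirp parameters below are hypothetical; the record only states X-band.
    c = 3e8          # speed of light, m/s
    B = 150e6        # chirp bandwidth, Hz (assumed)
    T = 1e-3         # chirp duration, s (assumed)

    def beat_to_range(f_beat_hz):
        return c * f_beat_hz * T / (2 * B)

    print(beat_to_range(50e3))   # 50 kHz beat -> 50 m target
    ```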

  1. Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras.

    PubMed

    Brauers, Johannes; Aach, Til

    2011-02-01

    High-fidelity color image acquisition with a multispectral camera utilizes optical filters to separate the visible electromagnetic spectrum into several passbands. This is often realized with a computer-controlled filter wheel, where each position is equipped with an optical bandpass filter. For each filter wheel position, a grayscale image is acquired, and the passbands are finally combined into a multispectral image. However, the different optical properties and non-coplanar alignment of the filters cause image aberrations, since the optical path is slightly different for each filter wheel position. As in a normal camera system, the lens causes additional wavelength-dependent image distortions called chromatic aberrations. When transforming a multispectral image with these aberrations into an RGB image, color fringes appear, and the image exhibits a pincushion or barrel distortion. In this paper, we address both the distortions caused by the lens and those caused by the filters. Based on a physical model of the bandpass filters, we show that the aberrations caused by the filters can be modeled by displaced image planes. The lens distortions are modeled by an extended pinhole camera model, which results in a remaining mean calibration error of only 0.07 pixels. Using an absolute calibration target, we then geometrically calibrate each passband and compensate for both lens and filter distortions simultaneously. We show that both types of aberrations can be compensated and present detailed results on the remaining calibration errors.
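
    The extended pinhole model itself is not reproduced in the record. For orientation, lens distortion is commonly modeled with radial and tangential polynomial terms (the Brown model), sketched below with illustrative coefficients; the paper's model may differ in detail:

    ```python
    # Common radial-tangential (Brown) lens distortion applied to normalized
    # image coordinates -- a sketch of the kind of model used to calibrate
    # lens distortion; coefficients are illustrative.
    import numpy as np

    def distort(xy, k1, k2, p1, p2):
        x, y = xy
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return np.array([x_d, y_d])

    print(distort((0.3, -0.2), k1=-0.1, k2=0.01, p1=1e-4, p2=-1e-4))
    ```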

  2. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed, such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  3. OmniBird: a miniature PTZ NIR sensor system for UCAV day/night autonomous operations

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Li, Hui

    2007-04-01

    Through SBIR funding from NAVAIR, we have successfully developed an innovative, miniaturized, and lightweight PTZ UCAV imager called OmniBird for UCAV taxiing. The proposed OmniBird fits in a small space. The designed zoom capability allows it to acquire focused images of targets ranging from 10 to 250 feet away. The innovative panning mechanism also gives the system a field of view of +/- 100 degrees within the limited available space (6 cubic inches). The integrated optics, camera sensor, and mechanics solution allows the OmniBird to stay optically aligned and shock-proof in harsh environments.

  4. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  5. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
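
    As a concrete reference for the 5-point step, OpenCV exposes a 5-point essential-matrix solver for intrinsically calibrated cameras. A sketch with placeholder correspondences and assumed intrinsics (not the authors' implementation):

    ```python
    # Relative-pose estimation between an intrinsically calibrated camera pair
    # using OpenCV's 5-point essential-matrix solver (sketch; the point
    # correspondences pts1/pts2 would normally come from feature matching).
    import cv2
    import numpy as np

    K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # assumed intrinsics

    # pts1, pts2: Nx2 arrays of matched pixel coordinates in the two cameras
    pts1 = np.random.rand(50, 2) * [1920, 1080]   # placeholder correspondences
    pts2 = pts1 + [5.0, 0.0]

    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into relative rotation R and unit-norm translation t
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    ```

    Note that the essential matrix fixes the relative translation only up to scale, which is one reason a comparison against a surveyed manual calibration is informative.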

  6. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are widely used in optic-electronic tracking, optic-electronic measurement, fire control and optic-electronic countermeasure fields, but the output timing of most IR cameras applied in practice is complex, and the timing documents supplied by manufacturers are often not detailed. Since continuous image transmission and image processing systems need the detailed timing of the IR camera, a sequence measurement system for IR cameras was designed, and a detailed procedure for measuring the timing of an applied IR camera was carried out. FPGA programming combined with online observation using SignalTap is applied in the measurement system; the precise timing of the IR camera's output signal is thereby obtained, and detailed documentation is supplied to the downstream image transmission and image processing systems. The sequence measurement system comprises a CameraLink input interface, an LVDS input interface, an FPGA, and a CameraLink output interface, of which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted, and because image processing and image memory cards commonly use CameraLink as their input interface, the output of the measurement system is designed as a CameraLink interface; the system thus performs interface conversion for some cameras in addition to timing measurement. Inside the FPGA, the sequence measurement program, pixel clock modification, SignalTap file configuration and SignalTap online observation are integrated to realize precise measurement of the IR camera. The measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines in one frame and the number of pixels in one line, and also determines the line offset and row offset of the image. Aiming at the complex timing of IR camera output signals, the system accurately measures the timing of project-applied cameras, supplies detailed timing documents to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters of fval, lval, pixclk, line offset and row offset. Experiments show that the sequence measurement system obtains precise timing measurements and works stably, laying a foundation for the downstream systems.

  7. A method for the real-time construction of a full parallax light field

    NASA Astrophysics Data System (ADS)

    Tanaka, Kenji; Aoki, Soko

    2006-02-01

    We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.

  8. A novel Compton camera design featuring a rear-panel shield for substantial noise reduction in gamma-ray images

    NASA Astrophysics Data System (ADS)

    Nishiyama, T.; Kataoka, J.; Kishimoto, A.; Fujita, T.; Iwamoto, Y.; Taya, T.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Sakurai, N.; Adachi, S.; Uchiyama, T.

    2014-12-01

    After the Japanese nuclear disaster in 2011, large amounts of radioactive isotopes were released and still remain a serious problem in Japan. Consequently, various gamma cameras are being developed to help identify radiation hotspots and ensure effective decontamination operations. The Compton camera utilizes the kinematics of Compton scattering to construct images without using a mechanical collimator, and features a wide field of view. For instance, we have developed a novel Compton camera that features a small size (13 × 14 × 15 cm3) and light weight (1.9 kg), but which also achieves high sensitivity thanks to Ce:GAGG scintillators optically coupled with MPPC arrays. By design, in such a Compton camera, gamma rays are expected to scatter in the ``scatterer'' and then be fully absorbed in the ``absorber'' (in what is called a forward-scattered event). However, high-energy gamma rays often interact with the detector in the opposite direction - initially scattered in the absorber and then absorbed in the scatterer - in what is called a ``back-scattered'' event. Any contamination by such back-scattered events is known to substantially degrade the quality of gamma-ray images, but determining the order of gamma-ray interaction based solely on energy deposits in the scatterer and absorber is quite difficult. For this reason, we propose a novel yet simple Compton camera design that includes a rear-panel shield (a few mm thick) consisting of W or Pb located just behind the scatterer. Since the energy of scattered gamma rays in back-scattered events is much lower than that in forward-scattered events, we can effectively discriminate and reduce back-scattered events to improve the signal-to-noise ratio in the images. This paper presents our detailed optimization of the rear-panel shield using Geant4 simulation, and describes a demonstration test using our Compton camera.
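
    The energy asymmetry that the design exploits follows directly from Compton kinematics: the scattered photon energy at theta = 180 deg is capped near 256 keV however energetic the incident photon. A worked check with the Cs-137 line (662 keV is our illustrative choice; the record does not name an isotope):

    ```python
    # Compton kinematics behind the discrimination: a photon scattered by angle
    # theta carries E' = E / (1 + (E / m_e c^2) * (1 - cos(theta))). For
    # back-scattering (theta ~ 180 deg), E' is capped near m_e c^2 / 2 ~ 256 keV,
    # so back-scattered events look much "softer" than forward-scattered ones.
    import numpy as np

    ME_C2 = 511.0   # electron rest energy, keV

    def scattered_energy(E_keV, theta_rad):
        return E_keV / (1 + (E_keV / ME_C2) * (1 - np.cos(theta_rad)))

    E0 = 662.0      # Cs-137 line, keV (illustrative choice)
    print(scattered_energy(E0, np.pi))        # ~184 keV, back-scattered
    print(scattered_energy(E0, np.pi / 6))    # ~565 keV, forward-scattered
    ```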

  9. Phoenix Checks out its Work Area

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site]

    This animation shows a mosaic of images of the workspace reachable by the scoop on the robotic arm of NASA's Phoenix Mars Lander, along with some measurements of rock sizes.

    Phoenix was able to determine the size of the rocks based on three-dimensional views from stereoscopic images taken by the lander's 7-foot mast camera, called the Surface Stereo Imager. The stereo pair of images enable depth perception, much the way a pair of human eyes enable people to gauge the distance to nearby objects.

    The rock measurements were made by a visualization tool known as Viz, developed at NASA's Ames Research Center. The shadow cast by the camera on the Martian surface appears somewhat disjointed because the camera took the images in the mosaic at different times of day.

    Scientists do not yet know the origin or composition of the flat, light-colored rocks on the surface in front of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  10. Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Queen, Steven M.; Sanner, Kurt Gregory

    2011-01-01

    One of the pilot-machine interfaces (the forward-viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.

  11. Verification of the test stand for microbolometer camera in accredited laboratory

    NASA Astrophysics Data System (ADS)

    Krupiński, Michal; Bareła, Jaroslaw; Chmielewski, Krzysztof; Kastek, Mariusz

    2017-10-01

    A microbolometer belongs to the group of thermal detectors and consists of a temperature-sensitive resistor exposed to the measured radiation flux. A bolometer array employs a pixel structure prepared in silicon technology, with the detecting area defined by the size of a thin membrane, usually made of amorphous silicon (a-Si) or vanadium oxide (VOx). FPAs are made of a multitude of detector elements (for example 384 × 288), where each individual detector has a different sensitivity and offset due to detector-to-detector spread in the FPA fabrication process; sensitivity and offset can additionally change with sensor operating temperature, biasing voltage variation, or the temperature of the observed scene. The difference in sensitivity and offset among detectors (called non-uniformity), combined with the detectors' high sensitivity, produces fixed-pattern noise (FPN) in the produced image. Fixed-pattern noise degrades parameters of infrared cameras such as sensitivity and NETD; it also degrades image quality, radiometric accuracy and temperature resolution. To objectively compare two infrared cameras, one must measure and compare their parameters on a laboratory test stand. One of the basic parameters for the evaluation of a designed camera is NETD. To determine the NETD, parameters such as sensitivity and pixel noise must be measured; to do so, one records the output signal from the camera in response to the radiation of blackbodies at two different temperatures. The article presents an application and measuring stand for determining the parameters of microbolometer cameras. The measurements were compared with results obtained at the Institute of Optoelectronics, MUT, on a METS test stand by CI SYSTEM. This test stand consists of an IR collimator, an IR standard source, a rotating wheel with test patterns, and a computer with a video grabber card and specialized software. The parameters of the thermal cameras were measured according to the standards and methods described in the literature.
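
    The two-blackbody recording that the stand performs is also the basis of the classic two-point non-uniformity correction. A synthetic sketch of how per-pixel gain and offset follow from the two levels (values and procedure illustrative, not the stand's exact pipeline):

    ```python
    # Classic two-point non-uniformity correction (NUC) computed from frames at
    # two blackbody temperatures -- a sketch of how per-pixel gain/offset can be
    # derived on such a test stand (all values synthetic).
    import numpy as np

    rng = np.random.default_rng(1)
    shape = (288, 384)
    gain_true = 1 + 0.05 * rng.standard_normal(shape)    # pixel-to-pixel spread
    offset_true = 20 * rng.standard_normal(shape)

    def acquire(flux):          # synthetic raw frame for a uniform blackbody flux
        return gain_true * flux + offset_true

    low, high = acquire(1000.0), acquire(2000.0)         # two blackbody levels

    # Per-pixel gain/offset that map raw values onto the mean response
    gain = (high.mean() - low.mean()) / (high - low)
    offset = low.mean() - gain * low

    corrected = gain * acquire(1500.0) + offset
    print(corrected.std())      # residual FPN ~0 after correction
    ```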

  12. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    NASA Astrophysics Data System (ADS)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role in non-destructive investigations of dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated into a new custom experiment control system called Concert, which provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
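
    The article's FPGA trigger logic is not detailed in the record. As a software illustration of the idea, a frame can be flagged when it differs sufficiently from its predecessor (threshold and names assumed):

    ```python
    # Minimal software sketch of an image-based trigger: flag a frame when the
    # mean absolute difference from the previous frame exceeds a threshold.
    # The FPGA implementation in the paper is not described here; names and
    # threshold are illustrative.
    import numpy as np

    THRESHOLD = 5.0   # mean gray-level change that counts as an "event" (assumed)

    def triggered(prev_frame, frame):
        diff = frame.astype(np.int32) - prev_frame.astype(np.int32)
        return np.mean(np.abs(diff)) > THRESHOLD

    rng = np.random.default_rng(2)
    quiet = rng.integers(100, 110, (256, 256), dtype=np.uint16)
    event = quiet + 50          # sudden brightness change
    print(triggered(quiet, quiet.copy()), triggered(quiet, event))  # False True
    ```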

  13. Infrared Thermography-based Biophotonics: Integrated Diagnostic Technique for Systemic Reaction Monitoring

    NASA Astrophysics Data System (ADS)

    Vainer, Boris G.; Morozov, Vitaly V.

    A distinctive branch of biophotonics is the measurement, visualisation and quantitative analysis of infrared (IR) radiation emitted from the surfaces of living objects. Focal plane array (FPA)-based IR cameras make it possible to realize in medicine the so-called interventional infrared thermal diagnostics. An integrated technique aimed at the advancement of this new approach in biomedical science and practice is described in the paper. The assembled system includes a high-performance short-wave (2.45-3.05 μm) or long-wave (8-14 μm) IR camera, two laser Doppler flowmeters (LDF) and additional equipment and complementary facilities implementing the monitoring of human cardiovascular status. All these means operate synchronously. The relationship between infrared thermography (IRT) and LDF data in humans, with regard to their systemic cardiovascular reactivity, is ascertained for the first time. The real-time dynamics of blood supply in a narcotized patient is visualized and quantitatively represented during surgery, for the first time, in order to observe how general hyperoxia influences thermoregulatory mechanisms; an abrupt increase in the temperature of the upper limb is observed using IRT. It is outlined that the IRT-based integrated technique may act as a take-off runway leading to the elaboration of informative new methods directly applicable to medicine and the biomedical sciences.

  14. Visual servoing of a laser ablation based cochleostomy

    NASA Astrophysics Data System (ADS)

    Kahrs, Lüder A.; Raczkowsky, Jörg; Werner, Martin; Knapp, Felix B.; Mehrwald, Markus; Hering, Peter; Schipper, Jörg; Klenzner, Thomas; Wörn, Heinz

    2008-03-01

    The aim of this study is defined, visually based, camera-controlled bone removal by a navigated CO2 laser on the promontory of the inner ear. A precise and minimally traumatic opening procedure of the cochlea for the implantation of a cochlear implant electrode (a so-called cochleostomy) is intended. Harming the membrane linings of the inner ear can result in damage to remaining organ functions (e.g. complete deafness or vertigo). Precise tissue removal by a laser-based bone ablation system is investigated. Inside the borehole, the pulsed laser beam is guided automatically over the bone using a two-mirror galvanometric scanner. The ablation process is controlled by visual servoing. For the detection of the boundary layers of the inner ear, the ablation area is monitored by a color camera. The acquired pictures are analyzed by image processing, and the results of this analysis are used to control the process of laser ablation. This publication describes the complete system, including the image processing algorithms and the concept for the resulting distribution of single laser pulses. The system has been tested on human cochleae in ex-vivo studies. Further developments could lead to safe intraoperative openings of the cochlea by a robot-based surgical laser instrument.

  15. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, as in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in other systems. The electronics are truly portable, fitting in a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP. Telemedicine is a perfect application for this camera system, exploiting both of its advantages: portability and digital imaging.

  16. Flexible Sigmoidoscopy

    MedlinePlus

    ... camera on one end, called a sigmoidoscope or scope, to look inside your rectum and lower colon, ... your rectum and into your sigmoid colon. The scope pumps air into your large intestine to give ...

  17. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and the analysis of biomedical signals and images for diagnosis and rehabilitation. This paper provides an extensive review of the camera systems used in HMA, including their taxonomy, camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for an HMA system for biomedical applications.

  18. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  19. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
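
    The record notes that custom Python scripts drive the camera; the USGS scripts themselves are not reproduced here, but a minimal time-lapse sketch using the common picamera library (and system time in place of the GPS module) gives the flavor:

    ```python
    # Minimal time-lapse sketch in the spirit of the system described. This is
    # an illustration, not the USGS scripts: it uses the common picamera
    # library and system time rather than the GPS module.
    import time
    from datetime import datetime, timezone

    from picamera import PiCamera   # standard Raspberry Pi camera library

    camera = PiCamera(resolution=(2592, 1944))   # 5-megapixel still mode
    camera.start_preview()
    time.sleep(2)                                # let exposure/gain settle

    INTERVAL_S = 300                             # one frame every 5 minutes (assumed)
    while True:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        camera.capture(f"/home/pi/images/{stamp}.jpg")   # directory assumed to exist
        time.sleep(INTERVAL_S)
    ```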

  20. Seabird acoustic communication at sea: a new perspective using bio-logging devices.

    PubMed

    Thiebault, Andréa; Pistorius, Pierre; Mullers, Ralf; Tremblay, Yann

    2016-08-05

    Most seabirds are very noisy at their breeding colonies, when aggregated in high densities. Calls are used for individual recognition and are also emitted during agonistic interactions. When at sea, many seabirds aggregate over patchily distributed resources and may benefit from foraging in groups. Because these aggregations are so common, the question arises of whether seabirds use acoustic communication when foraging at sea. We deployed video cameras with built-in microphones on 36 Cape gannets (Morus capensis) during the breeding season of 2010-2011 at Bird Island (Algoa Bay, South Africa) to study their foraging behaviour and vocal activity at sea. Group formation was derived from the camera footage. During ~42 h, calls were recorded on 72 occasions from 16 birds. Vocalization exclusively took place in the presence of conspecifics, and mostly in feeding aggregations (81% of the vocalizations). From observation of the behaviours of birds associated with the emission of calls, we suggest that the calls were emitted to avoid collisions between birds. Our observations show that at least some seabirds use acoustic communication when foraging at sea. These findings open up new perspectives for research on seabird foraging ecology and interactions at sea.

  1. Seabird acoustic communication at sea: a new perspective using bio-logging devices

    PubMed Central

    Thiebault, Andréa; Pistorius, Pierre; Mullers, Ralf; Tremblay, Yann

    2016-01-01

    Most seabirds are very noisy at their breeding colonies, when aggregated in high densities. Calls are used for individual recognition and are also emitted during agonistic interactions. When at sea, many seabirds aggregate over patchily distributed resources and may benefit from foraging in groups. Because these aggregations are so common, the question arises of whether seabirds use acoustic communication when foraging at sea. We deployed video cameras with built-in microphones on 36 Cape gannets (Morus capensis) during the breeding season of 2010–2011 at Bird Island (Algoa Bay, South Africa) to study their foraging behaviour and vocal activity at sea. Group formation was derived from the camera footage. During ~42 h, calls were recorded on 72 occasions from 16 birds. Vocalization exclusively took place in the presence of conspecifics, and mostly in feeding aggregations (81% of the vocalizations). From observation of the behaviours of birds associated with the emission of calls, we suggest that the calls were emitted to avoid collisions between birds. Our observations show that at least some seabirds use acoustic communication when foraging at sea. These findings open up new perspectives for research on seabird foraging ecology and interactions at sea. PMID:27492779

  2. HUBBLE PROVIDES 'ONE-TWO PUNCH' TO SEE BIRTH OF STARS IN GALACTIC WRECKAGE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Two powerful cameras aboard NASA's Hubble Space Telescope teamed up to capture the final stages in the grand assembly of galaxies. The photograph, taken by the Advanced Camera for Surveys (ACS) and the revived Near Infrared Camera and Multi-Object Spectrometer (NICMOS), shows a tumultuous collision between four galaxies located 1 billion light-years from Earth. The galactic car wreck is creating a torrent of new stars. The tangled up galaxies, called IRAS 19297-0406, are crammed together in the center of the picture. IRAS 19297-0406 is part of a class of galaxies known as ultraluminous infrared galaxies (ULIRGs). ULIRGs are considered the progenitors of massive elliptical galaxies. ULIRGs glow fiercely in infrared light, appearing 100 times brighter than our Milky Way Galaxy. The large amount of dust in these galaxies produces the brilliant infrared glow. The dust is generated by a firestorm of star birth triggered by the collisions. IRAS 19297-0406 is producing about 200 new Sun-like stars every year -- about 100 times more stars than our Milky Way creates. The hotbed of this star formation is the central region [the yellow objects]. This area is swamped in the dust created by the flurry of star formation. The bright blue material surrounding the central region corresponds to the ultraviolet glow of new stars. The ultraviolet light is not obscured by dust. Astronomers believe that this area is creating fewer new stars and therefore not as much dust. The colliding system [yellow and blue regions] has a diameter of about 30,000 light-years, or about half the size of the Milky Way. The tail [faint blue material at left] extends out for another 20,000 light-years. Astronomers used both cameras to witness the flocks of new stars that are forming from the galactic wreckage. NICMOS penetrated the dusty veil that masks the intense star birth in the central region. ACS captured the visible starlight of the colliding system's blue outer region. IRAS 19297-0406 may be similar to the so-called Hickson compact groups -- clusters of at least four galaxies in a tight configuration that are isolated from other galaxies. The galaxies are so close together that they lose energy from the relentless pull of gravity. Eventually, they fall into each other and form one massive galaxy. This color-composite image was made by combining photographs taken in near-infrared light with NICMOS and ultraviolet and visible light with ACS. The pictures were taken with these filters: the H-band and J-band on NICMOS; the V-band on the ACS wide-field camera; and the U-band on the ACS high-resolution camera. The images were taken on May 13 and 14. Credits: NASA, the NICMOS Group (STScI, ESA), and the NICMOS Science Team (University of Arizona)

  3. Black Hole With Jet (Artist's Concept)

    NASA Image and Video Library

    2017-11-02

    This artist's concept shows a black hole with an accretion disk -- a flat structure of material orbiting the black hole -- and a jet of hot gas, called plasma. Using NASA's NuSTAR space telescope and a fast camera called ULTRACAM on the William Herschel Telescope in La Palma, Spain, scientists have been able to measure the distance that particles in jets travel before they "turn on" and become bright sources of light. This distance is called the "acceleration zone." https://photojournal.jpl.nasa.gov/catalog/PIA22085

  4. Investigating Curiosity Drill Area

    NASA Image and Video Library

    2013-02-09

    NASA's Mars rover Curiosity used its Mast Camera (Mastcam) to take the images combined into this mosaic of the drill area, called "John Klein," where the rover ultimately performed its first sample drilling.

  5. IrLaW an OGC compliant infrared thermography measurement system developed on mini PC with real time computing capabilities for long term monitoring of transport infrastructures

    NASA Astrophysics Data System (ADS)

    Dumoulin, J.; Averty, R.

    2012-04-01

    One of the objectives of the ISTIMES project is to evaluate the potential offered by integrating different electromagnetic techniques able to perform non-invasive diagnostics for surveillance and monitoring of transport infrastructures. Among the EM methods investigated, the uncooled infrared camera is a promising technique due to its dissemination potential, given its relatively low cost on the market. Infrared thermography, when used in quantitative mode outside laboratory conditions (rather than in qualitative mode, i.e. vision applied to survey), requires real-time radiative corrections of the raw data to take into account the influence of the evolving natural environment. The camera sensor therefore has to be smart enough to apply the calibration law and radiometric corrections in real time in a varying atmosphere. A complete measurement system was thus studied and developed around low-cost infrared cameras available on the market. In the system developed, the infrared camera is coupled with other sensors to feed simplified radiative models running, in real time, on the GPU of a small PC. The system uses a fast Ethernet camera FLIR A320 [1] coupled with a VAISALA WXT520 [2] weather station and a light GPS unit [3] for positioning and dating. It can be used with other Ethernet cameras (including visible ones) but requires access to the measured data at raw level; in the present study, this was made possible thanks to a specific agreement signed with the FLIR Company. The prototype system is implemented on a low-cost small computer that integrates a GPU card to allow real-time parallel computing [4] of a simplified radiometric [5] heat balance using information measured with the weather station. An HMI was developed under Linux using open-source components and complementary pieces of software developed at IFSTTAR. This new HMI, called "IrLaW", has various functionalities that make it suitable for long-term monitoring on real sites. It can be remotely controlled in wired or wireless communication mode, depending on the measurement context and the degree of accessibility to the system when it is running on site. Finally, thanks to the development of a high-level library and the deployment of a daemon, the measurement system was made compatible with OGC standards. Complementary functionalities were also developed to allow the system to self-declare to 52North; for that, a specific plugin was developed to be inserted beforehand at the 52North level. Data are also accessible by tasking the system when required, for instance by using the web portal developed in the ISTIMES framework. ACKNOWLEDGEMENT - The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 225663.
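
    The simplified radiative models referred to usually rest on the standard single-band radiometric balance, in which the measured radiance mixes object emission, reflected ambient radiation and atmospheric emission. A sketch of that correction with illustrative coefficients (not necessarily IrLaW's exact model):

    ```python
    # Standard simplified radiometric balance for quantitative thermography:
    # measured radiance = emitted + reflected + atmospheric contributions.
    # Solving for the object radiance is the kind of correction applied in
    # real time; coefficients below are illustrative, not the system's model.
    def object_radiance(W_measured, W_ambient, W_atmosphere, emissivity, tau):
        reflected = (1 - emissivity) * tau * W_ambient
        atmospheric = (1 - tau) * W_atmosphere
        return (W_measured - reflected - atmospheric) / (emissivity * tau)

    # Example with made-up radiance values (arbitrary units)
    print(object_radiance(W_measured=450.0, W_ambient=400.0,
                          W_atmosphere=420.0, emissivity=0.92, tau=0.85))
    ```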

  6. Improving depth maps of plants by using a set of five cameras

    NASA Astrophysics Data System (ADS)

    Kaczmarek, Adam L.

    2015-03-01

    Obtaining high-quality depth maps and disparity maps with a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps called multiple similar areas (MSA) is introduced; the algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and a stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross-correlation (NCC), and zero-mean NCC (ZNCC). The algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
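
    As a baseline for the matching measures listed, a minimal single-pixel SSD block matcher is sketched below (the MSA algorithm itself is not reproduced). For the SSSD variant, the same cost would be summed over several camera pairs before taking the argmin:

    ```python
    # Baseline SSD block matching for a single pixel (illustrative; the MSA
    # algorithm from the paper is not reproduced here).
    import numpy as np

    def ssd_disparity(left, right, y, x, half=3, max_disp=32):
        """Disparity at (y, x) by minimizing SSD between square patches."""
        patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        best, best_d = np.inf, 0
        for d in range(max_disp):
            if x - half - d < 0:
                break
            cand = right[y - half:y + half + 1,
                         x - half - d:x + half + 1 - d].astype(float)
            cost = np.sum((patch - cand) ** 2)
            if cost < best:
                best, best_d = cost, d
        return best_d

    rng = np.random.default_rng(3)
    right = rng.integers(0, 255, (100, 100), dtype=np.uint8)
    left = np.roll(right, 8, axis=1)      # synthetic 8-pixel horizontal shift
    print(ssd_disparity(left, right, 50, 60))   # -> 8
    ```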

  7. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera to the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach yields a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.

  8. Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors

    USGS Publications Warehouse

    Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.

    2017-01-01

    Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.

  9. The electromagnetic interference of mobile phones on the function of a γ-camera.

    PubMed

    Javadi, Hamid; Azizmohammadi, Zahra; Mahmoud Pashazadeh, Ali; Neshandar Asli, Isa; Moazzeni, Taleb; Baharfar, Nastaran; Shafiei, Babak; Nabipour, Iraj; Assadi, Majid

    2014-03-01

    The aim of the present study is to evaluate whether or not the electromagnetic field generated by mobile phones interferes with the function of a SPECT γ-camera during data acquisition. We tested the effects of 7 models of mobile phones on 1 SPECT γ-camera. The mobile phones were tested when making a call, in ringing mode, and in standby mode. The γ-camera function was assessed during data acquisition from a planar source and a point source of 99mTc with activities of 10 mCi and 3 mCi, respectively. A significant visual decrease in count number was considered to be electromagnetic interference (EMI). The percentage of induced EMI with the γ-camera per mobile phone was in the range of 0% to 100%. EMI was mainly observed in the first seconds of ringing and then diminished in the following frames. Mobile phones are portable sources of electromagnetic radiation, and there is potential for interference with the function of SPECT γ-cameras, leading to adverse effects on the quality of the acquired images.

  10. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and it provides a reference solution for full RTS2 introduction to the LAMOST observatory control system.

  11. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for geometric calibration and radiometric correction are presented in the paper.

  12. Martian Terrain Near Curiosity Precipice Target

    NASA Image and Video Library

    2016-12-06

    This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140

  13. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine the first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
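    A minimal sketch of the idea follows, assuming OpenCV and per-pixel range images aligned with the camera frames. It illustrates range-aided visual odometry in general, not the patented method; note that estimateAffine3D fits a general affine transform, used here as a common stand-in for a strictly rigid fit.

        import cv2
        import numpy as np

        def vo_step(img0, img1, depth0, depth1, K):
            """One visual-odometry step: match ORB features between two frames,
            lift the matches to 3D with the range data, estimate the motion."""
            orb = cv2.ORB_create(2000)
            k0, d0 = orb.detectAndCompute(img0, None)
            k1, d1 = orb.detectAndCompute(img1, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)

            fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
            p0, p1 = [], []
            for m in matches:
                (u0, v0), (u1, v1) = k0[m.queryIdx].pt, k1[m.trainIdx].pt
                z0, z1 = depth0[int(v0), int(u0)], depth1[int(v1), int(u1)]
                if z0 > 0 and z1 > 0:  # keep features with valid range returns
                    p0.append([(u0 - cx) * z0 / fx, (v0 - cy) * z0 / fy, z0])
                    p1.append([(u1 - cx) * z1 / fx, (v1 - cy) * z1 / fy, z1])

            # Robust (RANSAC) transform between the two 3D point sets.
            retval, T, inliers = cv2.estimateAffine3D(
                np.float32(p0).reshape(-1, 1, 3), np.float32(p1).reshape(-1, 1, 3))
            return T  # 3x4 [R|t]; translation magnitude over dt gives a velocity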

  14. Face pose tracking using the four-point algorithm

    NASA Astrophysics Data System (ADS)

    Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen

    2017-06-01

    In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
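    The four-point algorithm itself is not reproduced here; a comparable pipeline using dlib landmarks and OpenCV's PnP solver with four 2D-3D correspondences is sketched below. The 3D model coordinates and the landmark indices are illustrative assumptions.

        import cv2
        import dlib
        import numpy as np

        # Hypothetical 3D model points (in mm) for four facial landmarks:
        # nose tip, chin, left and right outer eye corners.
        MODEL = np.float32([[0, 0, 0], [0, -63, -12], [-45, 32, -26], [45, 32, -26]])
        LANDMARK_IDS = [30, 8, 36, 45]  # indices in dlib's 68-point scheme

        detector = dlib.get_frontal_face_detector()
        predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

        def face_pose(gray, K):
            faces = detector(gray)
            if not faces:
                return None
            shape = predictor(gray, faces[0])
            pts = np.float32([[shape.part(i).x, shape.part(i).y]
                              for i in LANDMARK_IDS])
            # Four 2D-3D correspondences are the minimum for PnP pose recovery.
            ok, rvec, tvec = cv2.solvePnP(MODEL, pts, K, None,
                                          flags=cv2.SOLVEPNP_EPNP)
            return rvec, tvec  # head rotation (Rodrigues vector) and translation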

  15. Indoor integrated navigation and synchronous data acquisition method for Android smartphone

    NASA Astrophysics Data System (ADS)

    Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng

    2015-08-01

    Smartphones are widely used at present. Most smartphones have cameras and various sensors, such as a gyroscope, an accelerometer and a magnetometer. Indoor navigation based on a smartphone is very important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer of the smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity updates and integrated navigation. Synchronous acquisition of the sensor data (gyroscope, accelerometer and magnetometer) and the camera data is the basis of indoor navigation on the smartphone. A camera data-acquisition method is introduced, which uses the Android camera class to record images and timestamps from the smartphone camera. Two sensor data-acquisition methods are introduced and compared. The first method records sensor data and timestamps with the Android SensorManager. The second method implements the open, close, data-receiving and saving functions in C, and calls the sensor functions from Java through the JNI interface. Data-acquisition software was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools) and the NDK (Native Development Kit). The software can record camera data, sensor data and timestamps simultaneously. Data-acquisition experiments were carried out with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first sensor data-acquisition method is convenient but sometimes loses sensor data, whereas the second method has much better real-time performance and loses far less data. A checkerboard image was recorded, and the corner points of the checkerboard were detected with the Harris method. The sensor data of the gyroscope, accelerometer and magnetometer were recorded for about 30 minutes, and the bias stability and noise characteristics of the sensors were analyzed. Besides indoor integrated navigation, the integrated navigation and synchronous data-acquisition method can be applied to outdoor navigation.
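    A minimal sketch of the checkerboard corner-detection step using OpenCV's Harris detector follows; the file name and thresholds are assumptions, as the paper's implementation details are not given.

        import cv2
        import numpy as np

        gray = cv2.imread("checkerboard.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
        # Harris response: large values at corner-like points.
        response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
        corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs
        print(f"{len(corners)} corner pixels detected")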

  16. Into the blue: AO science with MagAO in the visible

    NASA Astrophysics Data System (ADS)

    Close, Laird M.; Males, Jared R.; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Wu, Ya-Lin; Kopon, Derek; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando

    2014-08-01

    We review astronomical results in the visible (λ<1μm) with adaptive optics. Other than a brief period in the early 1990s, there has been little astronomical science done in the visible with AO until recently. The most productive visible AO system to date is our 6.5m Magellan telescope AO system (MagAO). MagAO is an advanced Adaptive Secondary system at the Magellan 6.5m in Chile. This secondary has 585 actuators with < 1 msec response times (0.7 ms typically). We use a pyramid wavefront sensor. The relatively small actuator pitch (~23 cm/subap) allows moderate Strehls to be obtained in the visible (0.63-1.05 microns). We use a CCD AO science camera called "VisAO". On-sky long exposures (60s) achieve <30mas resolutions, 30% Strehls at 0.62 microns (r') with the VisAO camera in 0.5" seeing with bright R < 8 mag stars. These relatively high visible wavelength Strehls are made possible by our powerful combination of a next generation ASM and a Pyramid WFS with 378 controlled modes and 1000 Hz loop frequency. We review the key steps to having good performance in the visible and review the exciting new AO visible science opportunities and refereed publications in both broad-band (r, i, z, Y) and at Hα for exoplanets, protoplanetary disks, young stars, and emission line jets. These examples highlight the power of visible AO to probe circumstellar regions/spatial resolutions that would otherwise require much larger diameter telescopes with classical infrared AO cameras.

  17. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Phoenix's 'Dodo' Trench

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image was taken by NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). The center of the image shows a trench informally called 'Dodo' after the second dig. 'Dodo' is located within the previously determined digging area, informally called 'Knave of Hearts.' The light square to the right of the trench is the Robotic Arm's Thermal and Electrical Conductivity Probe (TECP). The Robotic Arm has scraped down to a bright surface, indicating that the arm has reached a solid structure beneath the surface, which has been seen in other images as well.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Using queuing models to aid design and guide research effort for multimodality buried target detection systems

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Collins, Leslie M.

    2016-05-01

    Many remote sensing modalities have been developed for buried target detection (BTD), each one offering relative advantages over the others. There has been interest in combining several modalities into a single BTD system that benefits from the advantages of each constituent sensor. Recently an approach was developed, called multi-state management (MSM), that aims to achieve this goal by separating BTD system operation into discrete states, each with different sensor activity and system velocity. Additionally, a modeling approach, called Q-MSM, was developed to quickly analyze multi-modality BTD systems operating with MSM. This work extends previous work by demonstrating how Q-MSM modeling can be used to design BTD systems operating with MSM and to guide research toward the greatest performance benefits. In this work an MSM system is considered that combines a forward-looking infrared (FLIR) camera and a ground penetrating radar (GPR). Experiments conducted using a dataset of real, field-collected data demonstrate how the Q-MSM model can be used to evaluate the performance benefits of altering, or improving via research investment, various characteristics of the GPR and FLIR systems. Q-MSM permits fast analysis that can determine where system improvements will have the greatest impact, and can therefore help guide BTD research.

  20. A generic readout system for astrophysical detectors

    NASA Astrophysics Data System (ADS)

    Doumayrou, E.; Lortholary, M.

    2012-09-01

    We have developed a generic digital platform to fulfill the needs of new detector development in astrophysics; it is used in the lab, for ground-based telescope instruments, and in prototype versions for space instrument development. The system is based on an FPGA electronics board (called MISE) together with software on a PC (called BEAR). The MISE board generates the fast clocking which reads the detectors thanks to a programmable digital sequencer, and performs data acquisition, buffering of the digitized pixel outputs, and interfacing with other boards. The data are then sent to the PC via a SpaceWire or USB link. The BEAR software sets up the MISE board, performs data acquisition and enables online visualization, processing and storage of the data. These software tools are written in C++ and LabVIEW (NI) on a Linux OS. MISE and BEAR form a generic acquisition architecture onto which dedicated analog boards are plugged to accommodate each detector's specifics: number of pixels, readout channels and frequency, analog bias and clock interfaces. We have used this concept to build a camera for the P-ARTEMIS project, including a 256-pixel sub-millimeter bolometer detector read at 10 kpixel/s (SPIE 7741-12 (2010)). For the EUCLID project, a lab camera is now working for the testing of 4-Mpixel CCDs at 4×200 kpixel/s. Another is working for the testing of new near-infrared detectors (NIR LFSA for the ESA TRP program), 110 kpixels at 2×100 kpixel/s. Other projects are in progress for the space missions PLATO and SPICA.

  1. Charge Diffusion Variations in Pan-STARRS1 CCDs

    NASA Astrophysics Data System (ADS)

    Magnier, Eugene A.; Tonry, J. L.; Finkbeiner, D.; Schlafly, E.; Burgett, W. S.; Chambers, K. C.; Flewelling, H. A.; Hodapp, K. W.; Kaiser, N.; Kudritzki, R.-P.; Metcalfe, N.; Wainscoat, R. J.; Waters, C. Z.

    2018-06-01

    Thick back-illuminated deep-depletion CCDs have superior quantum efficiency over previous generations of thinned and traditional thick CCDs. As a result, they are being used for wide-field imaging cameras in several major projects. We use observations from the Pan-STARRS 3π survey to characterize the behavior of the deep-depletion devices used in the Pan-STARRS 1 Gigapixel Camera. We have identified systematic spatial variations in the photometric measurements and stellar profiles that are similar in pattern to the so-called “tree rings” identified in devices used by other wide-field cameras (e.g., DECam and Hyper Suprime-Cam). The tree-ring features identified in these other cameras result from lateral electric fields that displace the electrons as they are transported in the silicon to the pixel location. In contrast, we show that the photometric and morphological modifications observed in the GPC1 detectors are caused by variations in the vertical charge transportation rate and resulting charge diffusion variations.

  2. Utilizing HDTV as Data for Space Flight

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt

    2006-01-01

    In the aftermath of the Space Shuttle Columbia accident February 1, 2003, the Columbia Accident Investigation Board recognized the need for better video data from launch, on-orbit, and landing to assess the status and safety of the shuttle orbiter fleet. The board called on NASA to improve its imagery assets and update the Agency's methods for analyzing video. This paper will feature details of several projects implemented prior to the return to flight of the Space Shuttle, including an airborne HDTV imaging system called the WB-57 Ascent Video Experiment, use of true 60 Hz progressive scan HDTV for ground and airborne HDTV camera systems, and the decision to utilize a wavelet compression system for recording. This paper will include results of compression testing, imagery from the launch of STS-114, and details of how commercial components were utilized to image the shuttle launch from an aircraft flying at 400 knots at 60,000 feet altitude. The paper will conclude with a review of future plans to expand on the upgrades made prior to return to flight.

  3. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ in VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
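    The paper's retina-like pixel layout is specific to its sensor; the log-polar transform below (OpenCV's warpPolar, which interpolates bilinearly) is a common approximation of such a coordinate transformation and is shown only as an illustration.

        import cv2
        import numpy as np

        # Map between rectangular and retina-like (log-polar) coordinates.
        img = cv2.imread("frame.png")  # hypothetical input frame
        h, w = img.shape[:2]
        center = (w / 2, h / 2)

        # Forward transform: rectangular -> log-polar sampling, roughly what a
        # retina-like layout records; the inverse flag reverses the mapping for
        # display alongside the rectangular camera's image.
        logpolar = cv2.warpPolar(img, (w, h), center, maxRadius=min(h, w) / 2,
                                 flags=cv2.WARP_POLAR_LOG)
        restored = cv2.warpPolar(logpolar, (w, h), center, maxRadius=min(h, w) / 2,
                                 flags=cv2.WARP_POLAR_LOG | cv2.WARP_INVERSE_MAP)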

  4. STARS: a software application for the EBEX autonomous daytime star cameras

    NASA Astrophysics Data System (ADS)

    Chapman, Daniel; Didier, Joy; Hanany, Shaul; Hillbrand, Seth; Limon, Michele; Miller, Amber; Reichborn-Kjennerud, Britt; Tucker, Greg; Vinokurov, Yury

    2014-07-01

    The E and B Experiment (EBEX) is a balloon-borne telescope designed to probe polarization signals in the CMB resulting from primordial gravitational waves, gravitational lensing, and Galactic dust emission. EBEX completed an 11 day flight over Antarctica in January 2013 and data analysis is underway. EBEX employs two star cameras to achieve its real-time and post-flight pointing requirements. We wrote a software application called STARS to operate, command, and collect data from each of the star cameras, and to interface them with the main flight computer. We paid special attention to making the software robust against potential in-flight failures. We report on the implementation, testing, and successful in-flight performance of STARS.

  5. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

    The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration, and adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specifications set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full-scale implementation.
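    The abstract refers to ELM (empirical line method) correction. A one-band sketch of ELM, fitting a linear gain and offset from calibration targets of known reflectance, looks like this (all numbers are made up):

        import numpy as np

        # Empirical Line Method (ELM): fit DN -> reflectance per band from
        # calibration targets of known reflectance.
        dn = np.array([[23.0, 41.0, 160.0, 201.0]]).T        # sensor digital numbers
        reflectance = np.array([0.05, 0.10, 0.45, 0.58])     # known target reflectance

        A = np.hstack([dn, np.ones_like(dn)])                # model: R = gain*DN + offset
        (gain, offset), *_ = np.linalg.lstsq(A, reflectance, rcond=None)

        scene_dn = np.array([30.0, 120.0, 180.0])
        print(gain * scene_dn + offset)                      # corrected reflectance

    With more calibration targets the fit becomes over-determined, which is exactly the trade-off the dissertation's closing simulation explores.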

  6. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to those for the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  7. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been tested in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best-views generated by the method with respect to the camera views manually generated by a human operator.
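    As an illustration of bottom-up view selection (only one ingredient of the paper's attention model, which also integrates top-down cues), the sketch below scores each camera frame with OpenCV's spectral-residual saliency and picks the highest-scoring view; cv2.saliency requires the opencv-contrib package, and the selection rule is an assumption.

        import cv2
        import numpy as np

        saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

        def best_view(frames):
            """Return the index of the camera whose frame is most salient."""
            scores = []
            for frame in frames:
                ok, sal_map = saliency.computeSaliency(frame)
                scores.append(sal_map.mean() if ok else 0.0)
            return int(np.argmax(scores))  # camera to display in the control center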

  8. Almost Like Being at Bonneville

    NASA Image and Video Library

    2004-03-17

    NASA Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called Bonneville. The rover solar panels can be seen in the foreground. 3D glasses are necessary to view this image.

  9. Microscopic Colitis: Collagenous Colitis and Lymphocytic Colitis

    MedlinePlus

    ... camera on one end, called a colonoscope or scope, to look inside the rectum and entire colon. ... through the rectum and into the colon. The scope inflates the large intestine with air to give ...

  10. Designing a wearable navigation system for image-guided cancer resection surgery

    PubMed Central

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2015-01-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure. PMID:24980159

  11. Designing a wearable navigation system for image-guided cancer resection surgery.

    PubMed

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2014-11-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure.
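    A minimal sketch of the fusion-and-display step in the same spirit (Python with OpenCV, as the papers use) follows: warp a co-registered fluorescence frame into the scene camera's coordinates and alpha-blend it. The homography file and image names are hypothetical, not the authors' artifacts.

        import cv2
        import numpy as np

        scene = cv2.imread("visible_scene.png")
        fluor = cv2.imread("fluorescence.png", cv2.IMREAD_GRAYSCALE)
        H = np.load("homography.npy")  # assumed 3x3 matrix from calibration

        # Warp the fluorescence frame into the scene camera's pixel coordinates.
        warped = cv2.warpPerspective(fluor, H, (scene.shape[1], scene.shape[0]))
        overlay = cv2.applyColorMap(warped, cv2.COLORMAP_JET)

        # Alpha-blend for an augmented-reality style view of the surgical margin.
        fused = cv2.addWeighted(scene, 0.7, overlay, 0.3, 0)
        cv2.imwrite("fused.png", fused)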

  12. A Lion of a Stone

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This approximate true-color image of the rock called 'Lion Stone' was acquired by the Mars Exploration Rover Opportunity's panoramic camera on sol 104 (May 9, 2004). The rock stands about 10 centimeters tall (about 4 inches) and is about 30 centimeters long (12 inches). Plans for the coming sols include investigating the rock with the spectrometers on the rover's instrument arm.

    This image was generated using the camera's L2 (750-nanometer), L5 (530-nanometer) and L6 (480-nanometer) filters.

  13. Assessing the Reliability and the Accuracy of Attitude Extracted from Visual Odometry for LIDAR Data Georeferencing

    NASA Astrophysics Data System (ADS)

    Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.

    2017-08-01

    Airborne LiDAR systems require the use of Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception, but its payload has to be lighter than that installed on board conventional aircraft, so an alternative to heavy sensors and navigation systems is needed. For georeferencing these data, a possible solution is to replace the Inertial Measurement Unit (IMU) by a camera and record the optical flow. The frames are then processed photogrammetrically so as to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drifts, and the option of using Ground Control Points (GCPs), as in airborne photogrammetry surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which embeds a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points that are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and those already known.

  14. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First of all, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular viewing, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various observation manners of the eyes. A stereo imaging system designed by the method proposed in this paper can faithfully restore the 3-D shape of the photographed object.

  15. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbital Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera optical axis, such pointing information is made available.

  16. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant of a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the view field of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the accuracy of the traffic sign in the image from the wide angle camera is low. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle camera and the telephoto camera. Besides, in order to detect traffic signs against complex backgrounds in different lighting conditions, we propose a type of color transformation which is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution in different lighting conditions.
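    The paper's exact color transformation is not reproduced in the abstract; chromaticity normalization, sketched below, is a standard example of a transform that is largely invariant to brightness changes and illustrates the idea.

        import numpy as np

        def lighting_invariant(rgb):
            """Chromaticity normalization: divide each channel by the channel sum.
            Illustrative stand-in, not the paper's transform."""
            rgb = rgb.astype(np.float64)
            s = rgb.sum(axis=-1, keepdims=True) + 1e-9
            # A uniform brightness change scales all channels equally, so it
            # cancels in the ratio and the sign's color pattern is preserved.
            return rgb / s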

  17. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    NASA Astrophysics Data System (ADS)

    Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui

    2016-11-01

    A multilayer stacked X-ray camera concept is described. This type of technology is called '4H' X-ray cameras, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free-electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  18. Eye pupil detection system using an ensemble of regression forest and fast radial symmetry transform with a near infrared camera

    NASA Astrophysics Data System (ADS)

    Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul

    2017-09-01

    In this paper, we focus on pupil center detection in video sequences that include varying head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye by using cascaded local regression based on a regression forest. Based on the rough location of the pupil, a fast radial symmetry transform is applied at the previously found pupil location to refine the pupil center. As the final step, the pupil displacement between the previous frame and the current frame is estimated to maintain accuracy against false localization results occurring in particular frames. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil center detection capability is better than that of other methods, with a shorter processing time.

  19. An artificial reality environment for remote factory control and monitoring

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.

  20. Red Spot Spotted by Juno

    NASA Image and Video Library

    2016-06-30

    NASA's Juno spacecraft obtained this color view on June 28, 2016, at a distance of 3.9 million miles (6.2 million kilometers) from Jupiter. As Juno nears its destination, features on the giant planet are increasingly visible, including the Great Red Spot. The spacecraft is approaching over Jupiter's north pole, providing a unique perspective on the Jupiter system, including its four large moons. The scene was captured by the mission's imaging camera, called JunoCam, which is designed to acquire high resolution views of features in Jupiter's atmosphere from very close to the planet. http://photojournal.jpl.nasa.gov/catalog/PIA20705

  1. Commander Mattingly prepares meal on middeck

    NASA Image and Video Library

    1982-07-04

    STS004-28-312 (27 June-4 July 1982) --- Astronaut Thomas K. Mattingly II, STS-4 crew commander, prepares a meal in the middeck area of space shuttle Columbia. He uses scissors to open a drink container. Various packages of food and meal accessories are attached to locker doors. At the far left edge of the frame is the tall payload called the continuous flow electrophoresis experiment (CFES) system, designed to separate biological materials according to their surface electrical charges as they pass through an electrical field. Astronaut Henry W. Hartsfield Jr. exposed this frame with a 35mm camera. Photo credit: NASA

  2. OVMS-plus at the LBT: disturbance compensation simplified

    NASA Astrophysics Data System (ADS)

    Böhm, Michael; Pott, Jörg-Uwe; Borelli, José; Hinz, Phil; Defrère, Denis; Downey, Elwood; Hill, John; Summers, Kellee; Conrad, Al; Kürster, Martin; Herbst, Tom; Sawodny, Oliver

    2016-07-01

    In this paper we briefly revisit the optical vibration measurement system (OVMS) at the Large Binocular Telescope (LBT) and describe how its measurements are used for disturbance compensation, particularly for the LBT Interferometer (LBTI) and the LBT Interferometric Camera for Near-Infrared and Visible Adaptive Interferometry for Astronomy (LINC-NIRVANA). We present the now centralized software architecture, called OVMS+, on which our approach is based, and illustrate several challenges faced during the implementation phase. Finally, we present measurement results from LBTI proving the effectiveness of the approach and the ability to compensate for a large fraction of the telescope-induced vibrations.

  3. Homestake Vein, False Color

    NASA Image and Video Library

    2011-12-07

    This false-color view of a mineral vein called Homestake comes from the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity. The vein is about the width of a thumb and about 18 inches (45 centimeters) long.

  4. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  5. 3D medical collaboration technology to enhance emergency healthcare.

    PubMed

    Welch, Gregory F; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj K; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E

    2009-04-19

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15-20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.

  6. Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method

    PubMed Central

    Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter

    2015-01-01

    An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherent ambiguity issues of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
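    A simplified sketch of the RANSAC stage follows, assuming the drogue rim points are roughly coplanar: fit a plane by RANSAC, keep the inliers, and take their centroid as an approximate drogue center. This is an illustration of the outlier-removal idea, not the authors' full pipeline, which also uses curve fitting.

        import numpy as np

        def drogue_center(points, iters=500, tol=0.02, rng=None):
            """RANSAC plane fit over Nx3 LIDAR points; the best plane's inliers
            isolate the rim from clutter, and their centroid approximates the
            drogue center. Illustrative only."""
            rng = rng or np.random.default_rng(0)
            best_inliers = np.zeros(len(points), bool)
            for _ in range(iters):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-9:
                    continue  # degenerate (collinear) sample, try again
                n /= norm
                dist = np.abs((points - p0) @ n)  # point-to-plane distances
                inliers = dist < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return points[best_inliers].mean(axis=0)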

  7. Performance Assessment of Integrated Sensor Orientation with a Low-Cost Gnss Receiver

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2017-08-01

    Mapping with Micro Aerial Vehicles (MAVs, whose weight does not exceed 5 kg) is gaining importance in applications such as corridor mapping, road and pipeline inspections, or mapping of large areas with homogeneous surface structure, e.g. forest or agricultural fields. In these challenging scenarios, integrated sensor orientation (ISO) improves effectiveness and accuracy. Furthermore, in block geometry configurations, this mode of operation allows mapping without ground control points (GCPs). Accurate camera positions are traditionally determined by carrier-phase GNSS (Global Navigation Satellite System) positioning. However, this mode of positioning places strong requirements on the receiver's and antenna's performance. In this article, we present a mapping project in which we employ a single-frequency, low-cost (< 100) GNSS receiver on a MAV. The performance of the low-cost receiver is assessed by comparing its trajectory with a reference trajectory obtained by a survey-grade, multi-frequency GNSS receiver. In addition, the camera positions derived from these two trajectories are used as observations in bundle adjustment (BA) projects and the mapping accuracy is evaluated at check points (ChP). Several BA scenarios are considered with absolute and relative aerial position control. Additionally, the presented experiments show the possibility of BA to determine a camera-antenna spatial offset, the so-called lever-arm.

  8. Leveraging Service Oriented Architecture to Enhance Information Sharing for Surface Transportation Security

    DTIC Science & Technology

    2008-09-01

    telephone, conference calls, emails, alert notifications, and blackberry . The RDTSF holds conference calls with its stakeholders to provide routine... tunnels ) is monitored by CCTV cameras with live feeds to WMATA’s Operations Control Center (OCC) to detect unauthorized entry into areas not intended for...message by email, blackberry and phone to the Security Coordinators. Dissemination of classified information however, is generally handled through the

  9. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field of view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
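    A minimal sketch of the per-frame pose estimation step follows, using ground control points taken from the structure-from-motion point cloud and OpenCV's RANSAC PnP solver. The file names and intrinsics are assumptions; the paper's bundle adjustment is not reproduced.

        import cv2
        import numpy as np

        # Ground control points: 3D coordinates in the vehicle frame (from the
        # SfM point cloud) and their measured image locations in one video frame.
        object_pts = np.load("gcp_xyz.npy").astype(np.float32)   # N x 3
        image_pts = np.load("gcp_uv.npy").astype(np.float32)     # N x 2
        K = np.load("intrinsics.npy")                            # 3 x 3 matrix

        ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)
        R, _ = cv2.Rodrigues(rvec)
        cam_pos = (-R.T @ tvec).ravel()  # camera position in the vehicle frame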

  10. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.

  11. Radiometric stability of the Multi-angle Imaging SpectroRadiometer (MISR) following 15 years on-orbit

    NASA Astrophysics Data System (ADS)

    Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu

    2014-09-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has successfully operated on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure camera response changes. Once every two months the two Spectralon panels are deployed to direct solar light into the cameras. Six photodiode sets measure the illumination level, which is compared to MISR raw digital numbers, thus determining the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows that there has been a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.
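    The detector-based calibration update can be illustrated with a toy regression of camera digital numbers against the photodiode-measured radiances from a panel deploy; all numbers below are made up, and the real processing works per pixel and per band.

        import numpy as np

        radiance = np.array([50.0, 120.0, 230.0, 310.0])   # photodiode radiances
        dn = np.array([410.0, 985.0, 1890.0, 2550.0])      # camera digital numbers

        # Fit DN = gain * L + offset; Level 1 inverts this to recover radiance.
        A = np.vstack([radiance, np.ones_like(radiance)]).T
        (gain, offset), *_ = np.linalg.lstsq(A, dn, rcond=None)
        print("gain:", gain, "offset:", offset)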

  12. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
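    The least-squares fitting of measured directions can be illustrated with a simplified 2D resection: solve for the station position and the panorama's zero-azimuth from bearings to features with known orthoimage coordinates. All values below are synthetic and the actual adjustment is more elaborate; SciPy is assumed to be available.

        import numpy as np
        from scipy.optimize import least_squares

        # Feature coordinates (meters) identified in the LROC orthoimage.
        features = np.array([[12.0, 40.0], [55.0, 18.0], [-30.0, 25.0], [8.0, -45.0]])

        # Synthesize bearings from a known ground-truth station for the demo.
        truth = np.array([5.0, -3.0, 0.2])  # station x, y and zero-azimuth offset
        measured_az = (np.arctan2(features[:, 0] - truth[0],
                                  features[:, 1] - truth[1]) + truth[2])

        def residuals(p):
            x, y, az0 = p
            pred = np.arctan2(features[:, 0] - x, features[:, 1] - y) + az0
            d = measured_az - pred
            return np.arctan2(np.sin(d), np.cos(d))  # wrap residuals to [-pi, pi]

        sol = least_squares(residuals, x0=[0.0, 0.0, 0.0])
        print("recovered station position (m):", sol.x[:2])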

  13. The research of adaptive-exposure on spot-detecting camera in ATP system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu

    2013-08-01

    A high-precision acquisition, tracking and pointing (ATP) system is one of the key techniques of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that it can provide the position information of the communication terminal to the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system needs high target-detection precision: the positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the results of the centroid algorithm are precise. But the intensity of the beacon changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector will be insufficient when the camera underexposes the beacon because of low light intensity; on the other hand, the output signal will saturate when the camera overexposes the beacon because of high light intensity. The accuracy of the centroid algorithm degrades if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then noticeably reduced. In order to maintain accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. The adaptive-exposure algorithm for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera for a space-based laser communication system is described, which utilizes the adaptive-exposure algorithm to adjust exposure time. Test results from the imaging experiment system verify the design. Experimental results prove that this design can restrain the reduction of positioning accuracy caused by changes in light intensity, so the camera maintains stable and high positioning accuracy during communication.
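    A minimal sketch of the centroid computation, together with a crude adaptive-exposure rule of the kind described, follows. The thresholds and the control rule are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def spot_centroid(frame, threshold=0.0):
            """Intensity-weighted centroid of a beacon spot on the detector."""
            img = np.asarray(frame, dtype=np.float64)
            img = np.where(img > threshold, img, 0.0)   # suppress background
            total = img.sum()
            if total == 0:
                return None  # no spot detected
            ys, xs = np.indices(img.shape)
            return (xs * img).sum() / total, (ys * img).sum() / total

        def adjust_exposure(exposure, peak, full_scale=4095, lo=0.3, hi=0.8):
            """Keep the peak signal in a band that avoids both underexposure
            and saturation (illustrative control rule)."""
            if peak > hi * full_scale:
                return exposure * 0.5
            if peak < lo * full_scale:
                return exposure * 2.0
            return exposure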

  14. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and extracting relevant pictures from the video stream via a software implementation, was the content of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, their positions in the book, and different design strategies compared to regular books.

  15. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  16. Video Diagnostic for W7-X Stellarator

    NASA Astrophysics Data System (ADS)

    Sárközi, J.; Grosser, K.; Kocsis, G.; König, R.; Neuner, U.; Molnár, Á.; Petravich, G.; Por, G.; Porempovics, G.; Récsei, S.; Szabó, V.; Szappanos, A.; Zoletnik, S.

    2008-03-01

    The video diagnostic for W7-X, which is under development, is designed to observe the plasma and first-wall elements during operation, to warn of hot spots and dangerous heat loads, and to give information about the plasma size, position, and edge structure, the geometry and location of magnetic islands, and the distribution of impurities. The video diagnostic will be mounted on the tangential AEQ ports of the torus, which are not straight, are about 2 m long, and have a typical diameter of 0.1 m, all of which complicates its realization. The geometry of the 10 tangential AEQ-port views provides an almost complete overview of the vessel interior, making this diagnostic indispensable for machine operation. Different diagnostic concepts were investigated, and the following design was finally selected. As a large heat load is expected on the optical window located at the plasma-facing end of the AEQ port, the port window is protected by a cooled pinhole. An uncooled shutter located behind the pinhole can be closed to prevent window contamination during vessel conditioning discharges (glow discharge cleaning) and from inter-pulse deposition of soft a-C:H layers. The imaging optics and the detection sensor are located behind the port window in the port tube, which will be under atmospheric pressure. To detect the visible radiation distribution, a new camera system called the Event Detection Intelligent Camera (EDICAM) is under development. The system is divided into three major separate components. The Sensor Module contains only the selected CMOS sensor, the analog-to-digital converters, and the minimal electronics necessary for communication with the subsequent camera system module, called the Image Processing and Control Unit (IPCU). Its simple structure makes the Sensor Module suitable for operation despite exposure to ionizing (neutron, γ) radiation. The IPCU, which can be located far from the Sensor Module and therefore far from the plasma, is designed to perform real-time evaluation of the images, detecting predefined events, managing the sensor read-out and the input triggers, and producing output triggers generated by the detected events. The IPCU can also be used to reduce the amount of stored data. A standard 10 Gigabit Ethernet fiber-optic connection links the IPCU module to the PC using the GigE Vision communication protocol.
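    The abstract does not describe EDICAM's event-detection algorithms, but the IPCU's role, evaluating frames in real time against predefined events and raising output triggers, can be sketched as below. The region-of-interest threshold event type and all names and values are hypothetical, chosen only to illustrate the trigger flow.

```python
import numpy as np

def detect_events(frame, rois, thresholds):
    """Evaluate predefined events on one frame: an event fires when the
    mean intensity inside its region of interest exceeds a threshold."""
    triggers = []
    for name, (y0, y1, x0, x1) in rois.items():
        if frame[y0:y1, x0:x1].mean() > thresholds[name]:
            triggers.append(name)   # would be routed to an output trigger line
    return triggers

# Hypothetical "hot spot" watch region and level
rois = {"hot_spot_watch": (100, 200, 300, 400)}
thresholds = {"hot_spot_watch": 200.0}
frame = np.random.randint(0, 255, (480, 640)).astype(float)
print(detect_events(frame, rois, thresholds))
```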

  17. 'Rosy Red' Soil in Phoenix's Scoop

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image shows fine-grained material inside the Robotic Arm scoop as seen by the Robotic Arm Camera (RAC) aboard NASA's Phoenix Mars Lander on June 25, 2008, the 30th Martian day, or sol, of the mission.

    The image shows fine, fluffy, red soil particles collected in a sample called 'Rosy Red.' The sample was dug from the trench named 'Snow White' in the area called 'Wonderland.' Some of the Rosy Red sample was delivered to Phoenix's Optical Microscope and Wet Chemistry Laboratory for analysis.

    The RAC provides its own illumination, so the color seen in RAC images is color as seen on Earth, not color as it would appear on Mars.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. Amateurs to take a Crack at Juno Images

    NASA Image and Video Library

    2011-08-03

    Data from the camera onboard NASA's Juno mission, called JunoCam, will be made available to the public for processing into their own images. Illustrated here with an image of Jupiter taken by NASA's Voyager mission.

  19. Warm-Season Flows on Slope in Horowitz Crater Nine-Image Sequence

    NASA Image and Video Library

    2011-08-04

    This image comes from observations of Horowitz crater by the HiRISE camera onboard NASA's Mars Reconnaissance Orbiter. The features that extend down the slope during warm seasons are called recurring slope lineae.

  20. Martian Rock Harrison in Color, Showing Crystals

    NASA Image and Video Library

    2014-01-29

    This view of a Martian rock target called 'Harrison' merges images from two cameras onboard NASA's Curiosity Mars rover to provide both color and microscopic detail. The elongated crystals are likely feldspars, and the matrix is pyroxene-dominated.

  1. Harnessing the power of multimedia in offender-based law enforcement information systems

    NASA Astrophysics Data System (ADS)

    Zimmerman, Alan P.

    1997-02-01

    Criminal offenders are increasingly administratively processed by automated multimedia information systems. During this processing, case and offender biographical data, mugshot photos, fingerprints, and other valuable information and media are collected by law enforcement officers. As part of their criminal investigations, law enforcement officers are routinely called on to solve criminal cases based upon limited evidence: evidence increasingly comprised of human DNA, ballistic casings and projectiles, chemical residues, latent fingerprints, surveillance-camera facial images, and voices. As multimedia systems receive greater use in law enforcement, the traditional approaches used to index text data are not appropriate for the image and signal data that comprise a multimedia database. Multimedia systems with integrated advanced pattern-matching tools will give law enforcement the ability to effectively locate multimedia information based upon content, without reliance upon the accuracy or completeness of text-based indexing.

  2. Report on the eROSITA camera system

    NASA Astrophysics Data System (ADS)

    Meidinger, Norbert; Andritschke, Robert; Bornemann, Walter; Coutinho, Diogo; Emberger, Valentin; Hälker, Olaf; Kink, Walter; Mican, Benjamin; Müller, Siegfried; Pietschner, Daniel; Predehl, Peter; Reiffers, Jonas

    2014-07-01

    The eROSITA space telescope is currently being developed for the determination of cosmological parameters and the equation of state of dark energy via the evolution of clusters of galaxies. The instrument development was also strongly motivated by the goal of a first imaging X-ray all-sky survey with measurements above 2 keV. eROSITA is a scientific payload on the Russian research satellite SRG, whose destination after launch is the Lagrangian point L2. The observational program of the observatory is divided into an all-sky survey and pointed observations and will take about 7.5 years in total. The instrument comprises an array of 7 identical, parallel-aligned telescopes. Each of the seven focal-plane cameras is equipped with a PNCCD detector, an enhanced type of the XMM-Newton focal-plane detector. This instrumentation permits spectroscopy and imaging of X-rays in the energy band from 0.3 keV to 10 keV with a field of view of 1.0 degree. The camera development is done at the Max Planck Institute for Extraterrestrial Physics. The key component of each camera is the PNCCD chip. This silicon sensor is a back-illuminated, fully depleted, column-parallel type of charge-coupled device. The image area of the 450-micron-thick frame-transfer CCD comprises an array of 384 x 384 pixels, each with a size of 75 micron x 75 micron. Readout of the signal charge generated by an incident X-ray photon in the CCD is accomplished by an ASIC, the so-called eROSITA CAMEX. It provides 128 parallel analog signal-processing channels and finally multiplexes the signals to one output, which feeds the detector signals to a fast 14-bit ADC. The read noise of this system is equivalent to a noise charge of about 2.5 electrons rms. We achieve an energy resolution close to the theoretical limit given by Fano noise (except at very low energies); for example, the FWHM at an energy of 5.9 keV is approximately 140 eV. The complete camera assembly comprises the camera head with the detector as its key component, the electronics for detector operation and data acquisition, and the filter wheel unit. In addition to the on-chip light-blocking filter deposited directly on the photon entrance window of the PNCCD, an external filter can be moved in front of the sensor, which also serves as contamination protection. Furthermore, an on-board calibration source emitting several fluorescence lines is accommodated on the filter wheel mechanism for in-orbit calibration. Since the spectroscopic silicon sensors need cooling down to -95°C to best mitigate radiation-damage effects, an elaborate cooling system is necessary. It consists of two different types of heat pipes linking the seven detectors to two radiators. Based on tests with an engineering model, a flight design was developed for the camera and a qualification model has been built. The tests and the performance of this camera are presented in the following. In conclusion, an outlook on the flight cameras is given.
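    As a sanity check on the quoted figures, the Fano-limited resolution of a silicon detector with the stated 2.5-electron rms read noise can be computed directly. The pair-creation energy and Fano factor below are standard textbook values for silicon, assumptions on our part, not numbers from the paper.

```python
import math

w_si    = 3.65     # eV per electron-hole pair in silicon (assumed)
fano    = 0.115    # Fano factor for silicon (assumed)
sigma_r = 2.5      # read noise in electrons rms (from the text)
E       = 5900.0   # photon energy in eV (Mn K-alpha)

n_e   = E / w_si                            # mean signal charge, electrons
sigma = math.sqrt(fano * n_e + sigma_r**2)  # total charge fluctuation
fwhm  = 2.355 * sigma * w_si                # convert rms to FWHM in eV
print(f"Fano-limited FWHM at 5.9 keV: {fwhm:.0f} eV")
# ~119 eV, consistent with the reported ~140 eV being described as
# "close to the theoretical limit"
```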

  3. Killer whale caller localization using a hydrophone array in an oceanarium pool

    NASA Astrophysics Data System (ADS)

    Bowles, Ann E.; Greenlaw, Charles F.; McGehee, Duncan E.; van Holliday, D.

    2004-05-01

    A system to localize calling killer whales was designed around a ten-hydrophone array in a pool at SeaWorld San Diego. The array consisted of nine ITC 8212 hydrophones and one ITC 6050H hydrophone mounted in recessed 30 × 30 cm niches. Eight of the hydrophones were connected to a Compaq Armada E500 laptop computer through a National Instruments DAQ 6024E PCMCIA A/D data acquisition card and a BNC-2120 signal conditioner. The system was calibrated with a 139-dB, 4.5-kHz pinger. Acoustic data were collected during four 48-72 h recording sessions, simultaneously with video recorded from a four-camera array. Calling whales were localized by one of two methods: (1) at the hydrophone reporting the highest sound exposure level, and (2) using custom-designed 3-D localization software based on time of arrival (ORCA). Complex reverberations in the niches and pool made locations based on time of arrival difficult to collect. Based on preliminary analysis of data from four sessions (400+ calls/session), the hydrophone reporting the highest level reliably attributed callers 51%-100% of the time. This represents a substantial improvement over the attribution rates of 5%-15% obtained with single-hydrophone recordings. [Funding provided by Hubbs-SeaWorld Research Institute and the Hubbs Society.]
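    The ORCA software itself is custom, but the underlying time-of-arrival localization can be sketched as a nonlinear least-squares fit of a source position to measured time differences of arrival. The geometry, sound speed, and starting guess below are hypothetical, chosen only to show the method.

```python
import numpy as np
from scipy.optimize import least_squares

C = 1500.0  # nominal speed of sound in water, m/s (assumed)

def tdoa_residuals(pos, hydrophones, tdoas):
    """Residuals between predicted and measured time differences of
    arrival, all referenced to hydrophone 0."""
    d = np.linalg.norm(hydrophones - pos, axis=1)
    return (d[1:] - d[0]) / C - tdoas

# Hypothetical geometry (metres) with a source at (4, 3, 1)
hydros = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
                   [10., 10., 0.], [5., 5., 2.]])
true_src = np.array([4., 3., 1.])
d = np.linalg.norm(hydros - true_src, axis=1)
tdoas = (d[1:] - d[0]) / C                  # noise-free "measurements"

sol = least_squares(tdoa_residuals, x0=np.array([5., 5., 1.]),
                    args=(hydros, tdoas))
print(sol.x)   # recovers approximately (4, 3, 1)
```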

  4. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black-and-white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and they act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black-and-white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black-and-white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  5. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  6. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    PubMed

    Hontanilla, B; Aubá, C

    2008-01-01

    The aim of this study is to present a new 3D capture system for facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflective dots on the subject's face and video-recording the subject with three infrared cameras while several facial movements are performed, such as smiling, mouth puckering, eye closure, and forehead elevation. Images from the cameras are automatically processed by a software program that generates customised information such as 3D data on velocities and areas. The study was performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities were evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that the system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for evaluation of facial movements is demonstrated, as well as its high intrarater and interrater reliability. It has advantages over other systems that have been developed for evaluation of facial movements, such as short calibration time, short measuring time, and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system can thus be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.

  7. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  8. The Next Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Mather, John C.; Seery, Bernard (Technical Monitor)

    2001-01-01

    The Next Generation Space Telescope (NGST) is a 6-7 m class radiatively cooled telescope, planned for launch to the Lagrange point L2 in 2009, to be built by a partnership of NASA, ESA, and CSA. The NGST science program calls for three core instruments: 1) a near-IR camera, 0.6-5 micrometers; 2) a near-IR multi-object spectrometer, 1-5 micrometers; and 3) a mid-IR camera and spectrometer, 5-28 micrometers. I will report on the scientific goals, project status, and the recent reduction in aperture from the target of 8 m.

  9. First Report of Using Portable Unmanned Aircraft Systems (Drones) for Search and Rescue.

    PubMed

    Van Tilburg, Christopher

    2017-06-01

    Unmanned aircraft systems (UAS), colloquially called drones, are used commonly for military, government, and civilian purposes, including both commercial and consumer applications. During a search and rescue mission in Oregon, a UAS was used to confirm a fatality in a slot canyon; this eliminated the need for a dangerous rappel at night by rescue personnel. A second search mission in Oregon used several UAS to clear terrain. This allowed search of areas that were not accessible or were difficult to clear by ground personnel. UAS with cameras may be useful for searching, observing, and documenting missions. It is possible that UAS might be useful for delivering equipment in difficult areas and in communication. Copyright © 2017. Published by Elsevier Inc.

  10. Laser designator protection filter for see-spot thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Donval, Ariela; Fisher, Tali; Lipman, Ofir; Oron, Moshe

    2012-06-01

    In some cases the FLIR has an open window in the 1.06-micrometer wavelength range; this capability is called 'see spot' and allows the FLIR to view a laser designator spot. A problem arises when the returned laser energy is too high for the camera's sensitivity and can therefore damage the sensor. We propose a non-linear, solid-state dynamic filter that protects against damage passively: it blocks transmission only if the power exceeds a certain threshold, as opposed to spectral filters that block a certain wavelength permanently. In this paper we introduce the Wideband Laser Protection Filter (WPF) solution for thermal imaging systems, which preserves the ability to see the laser spot.

  11. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
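    For reference, the core of an OpenCV chessboard calibration of this kind is shown below. The API calls are standard OpenCV; the pattern size and file names are assumptions, since the paper does not specify them.

```python
import cv2
import numpy as np
import glob

# Chessboard with 9x6 inner corners; square size in arbitrary units (assumed)
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):          # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

assert img_points, "no usable calibration images found"
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```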

  12. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, an overhead squat, and a single-leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal-plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, but these values were improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high-injury-risk populations.
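    The paper does not specify its corrective factor; one simple possibility consistent with the description is a per-variable linear regression mapping Kinect peak angles onto the marker-based values, sketched below with made-up numbers.

```python
import numpy as np

# Hypothetical paired peak-angle measurements (degrees): Kinect vs. BTS
kinect = np.array([78.2, 85.1, 91.0, 69.4, 88.3])
bts    = np.array([83.0, 90.5, 97.2, 74.1, 93.8])

# Least-squares linear correction mapping Kinect onto the marker-based
# system, i.e. one way to produce "corrected" Kinect variables
slope, intercept = np.polyfit(kinect, bts, 1)
corrected = slope * kinect + intercept
print(slope, intercept, corrected)
```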

  13. Retrieval System for Calcined Waste for the Idaho Cleanup Project - 12104

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastman, Randy L.; Johnston, Beau A.; Lower, Danielle E.

    This paper describes the conceptual approach to retrieving radioactive calcine waste, hereafter called calcine, from stainless steel storage bins contained within concrete vaults. The retrieval system will allow evacuation of the granular solids (calcine) from the storage bins through the use of stationary vacuum nozzles. The nozzles will use air jets for calcine fluidization and will be able to rotate and direct the fluidization or displacement of the calcine within the bin. Each bin will have a single retrieval system installed prior to operation to prevent worker exposure to the high radiation fields. The addition of an articulated camera arm will allow for operations monitoring, and the arm will be equipped with contingency tools to aid in calcine removal. Possible challenges (calcine bridging and rat-holing) associated with calcine retrieval and transport, including potential solutions for bin pressurization, calcine fluidization, and waste confinement, are also addressed. The Calcine Disposition Project has the responsibility to retrieve, treat, and package high-level waste (HLW) calcine. The calcine retrieval system has been designed to incorporate the functions and technical characteristics established by the retrieval system functional analysis. By adequately implementing the highest-ranking technical characteristics into the design of the retrieval system, the system will be able to satisfy the functional requirements. The retrieval system conceptual design provides the means for removing bulk calcine from the bins of the CSSF vaults. Top-down vacuum retrieval coupled with an articulating camera arm will allow for a robust, contained process capable of evacuating bulk calcine from bins and transporting it to the processing facility. The system is designed to fluidize, vacuum, transport, and direct the calcine from its current location to the CSSF roof-top transport lines. An articulating camera arm, deployed through an adjacent access riser, will work in conjunction with the retrieval nozzle to aid in calcine fluidization, remote viewing, breaking of clumped calcine, and recovery from off-normal conditions. As the design of the retrieval system progresses from conceptual to preliminary, increasing attention will be directed toward detailed design and proof-of-concept testing. (authors)

  14. Evaluation of camera-based systems to reduce transit bus side collisions : phase II.

    DOT National Transportation Integrated Search

    2012-12-01

    The sideview camera system has been shown to eliminate blind zones by providing a view to the driver in real time. In order to provide the best integration of these systems, an integrated camera-mirror system (hybrid system) was developed and tes...

  15. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  16. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
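    As a minimal example of the explicit refraction modelling the review discusses, Snell's law at a flat port shows how in-water ray angles differ from in-air ones, which is why an in-air calibration is invalid underwater unless refraction is compensated. The refractive indices are nominal assumed values.

```python
import math

def in_air_angle(theta_water_deg, n_water=1.33, n_air=1.0):
    """Snell's law at a flat port: a ray travelling at theta in water
    refracts to a steeper angle on the air side of the housing."""
    s = n_water / n_air * math.sin(math.radians(theta_water_deg))
    if abs(s) > 1.0:
        return None            # total internal reflection: ray never exits
    return math.degrees(math.asin(s))

print(in_air_angle(20.0))      # ~27.1 degrees, versus 20 degrees in water
```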

  17. Curiosity Self-Portrait at Windjana Drilling Site

    NASA Image and Video Library

    2014-06-23

    NASA's Curiosity Mars rover used the MAHLI camera at the end of its arm in April and May 2014 to take dozens of component images that were combined into this self-portrait at the site where the rover drilled into a sandstone target called 'Windjana'.

  18. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the differences between the two techniques while varying key parameters such as pixel-to-microlens ratio (PMR), light-field-camera to Tomo-camera pixel ratio (LTPR), particle seeding density, and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  19. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  20. Advanced imaging system

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document describes the Advanced Imaging System CCD based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as a part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed and complete schematics and source code listings are provided.

  1. A versatile photogrammetric camera automatic calibration suite for multispectral fusion and optical helmet tracking

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason; Jermy, Robert; Nicolls, Fred

    2014-06-01

    This paper presents a system to determine the photogrammetric parameters of a camera: the lens distortion, focal length, and camera six-degree-of-freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb-line method, allows many radial and tangential distortion coefficients and finds the optimal principal point; typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low-order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted-to-undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined so as to minimize the error in absolute photogrammetric position measurement for both multi-camera and monocular (e.g. helmet-tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount, which allows a faulty camera to be replaced without requiring recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
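    A sketch of a Brown-style distortion model with higher-order coefficients, of the kind the paper advocates, is shown below. This is one common parameterization (radial k1..kn plus decentering p1, p2 with a p3 scaling term), and the fixed-point inversion is our illustrative choice, not necessarily the paper's method.

```python
def brown_distort(x, y, k, p):
    """Map undistorted to distorted normalized coordinates using Brown's
    model: radial coefficients k[0..n-1], decentering terms p = (p1, p2, p3)."""
    r2 = x * x + y * y
    radial = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    tang_scale = 1.0 + p[2] * r2
    dx = (p[0] * (r2 + 2 * x * x) + 2 * p[1] * x * y) * tang_scale
    dy = (2 * p[0] * x * y + p[1] * (r2 + 2 * y * y)) * tang_scale
    return x * radial + dx, y * radial + dy

def brown_undistort(xd, yd, k, p, iters=10):
    """Invert the model numerically by fixed-point iteration
    (converges for mild distortion)."""
    x, y = xd, yd
    for _ in range(iters):
        xe, ye = brown_distort(x, y, k, p)
        x, y = x - (xe - xd), y - (ye - yd)
    return x, y

# Example with 5 radial and 3 tangential coefficients (made-up values)
k = (-0.2, 0.05, -0.01, 0.002, -0.0001)
p = (1e-4, -5e-5, 1e-3)
print(brown_undistort(*brown_distort(0.3, 0.2, k, p), k, p))  # ~(0.3, 0.2)
```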

  2. Video systems for real-time oil-spill detection

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.

    1973-01-01

    Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint now limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offers potential for color 'flagging' oil on water.

  3. Localization and Mapping Using a Non-Central Catadioptric Camera System

    NASA Astrophysics Data System (ADS)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The mathematical model for localizing the system was derived from conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achievable localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.

  4. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The usual AEC and AGC algorithms are not suitable for an aerial camera, since the camera takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output, so that the image is better suited for viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
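    The paper's control law is not given; a minimal sketch of the shutter-first, gain-second adjustment it describes might look like this. The 8-bit frames, the target level, and all limits are assumptions for illustration; in practice the shutter ceiling would be derived from ground speed to bound motion blur.

```python
import numpy as np

def aec_agc_step(frame, shutter, gain, target=0.45,
                 shutter_range=(1e-5, 2e-3), gain_range=(1.0, 16.0)):
    """One control step: drive mean image brightness toward a target
    fraction of full scale, preferring shutter changes (to limit noise)
    and raising gain only once the shutter hits its motion-blur limit."""
    mean = frame.mean() / 255.0                 # assumes 8-bit frames
    if mean <= 0:
        return shutter_range[1], gain_range[1]
    correction = target / mean                  # desired total scale factor
    new_shutter = np.clip(shutter * correction, *shutter_range)
    residual = correction * shutter / new_shutter  # what the shutter couldn't do
    new_gain = np.clip(gain * residual, *gain_range)
    return float(new_shutter), float(new_gain)
```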

  5. Applications of Action Cam Sensors in the Archaeological Yard

    NASA Astrophysics Data System (ADS)

    Pepe, M.; Ackermann, S.; Fregonese, L.; Fassi, F.; Adami, A.

    2018-05-01

    In recent years, special digital cameras called "action cameras" or "action cams" have become popular due to their low price, small size, light weight, robustness, and capacity to record videos and photos even in extreme environmental conditions. Indeed, these cameras have been designed mainly to capture sports action and to work even in dirt, under impacts, underwater, and at different external temperatures. High-resolution digital single-lens reflex (DSLR) cameras are usually preferred in the photogrammetric field: beyond sensor resolution, the combination of such cameras with fixed, low-distortion lenses is preferred for accurate 3D measurements. Action cameras, by contrast, have small wide-angle lenses, with lower performance in terms of sensor resolution, lens quality, and distortion. However, considering that action cameras can acquire images under conditions that may be difficult for standard DSLR cameras, and given their lower price, they could be an interesting option for documenting the state of the places during archaeological excavation activities. In this paper, the influence of lens radial distortion and chromatic aberration on this type of camera in self-calibration mode is investigated, and an evaluation of their application in the field of Cultural Heritage is discussed. Using a suitable technique, it has been possible to improve the accuracy of the 3D model obtained from action-cam images. Case studies show the quality and the utility of this type of sensor in the survey of archaeological artefacts.

  6. Overview of a Hybrid Underwater Camera System

    DTIC Science & Technology

    2014-07-01

    meters), in increments of 200 ps. The camera is also equipped with a 6:1 motorized zoom lens and a precision miniature attitude and heading reference system (AHRS). [Figure: LUCIE sub-systems, showing the control and power distribution system, AHRS, pulsed laser, gated camera, and sonar transducer.]

  7. [Microeconomics of introduction of a PET system based on the revised Japanese National Insurance reimbursement system].

    PubMed

    Abe, Katsumi; Kosuda, Shigeru; Kusano, Shoichi; Nagata, Masayoshi

    2003-11-01

    It is crucial to evaluate the annual balance beforehand when an institution installs a PET system, because the revised Japanese national insurance reimbursement system set the cost of an FDG PET study at 75,000 yen. Break-even points were calculated for an 8-hour and a 24-hour operation of a PET system, based on reported total costs. The break-even points were 13.4, 17.7, and 22.1 studies per day for the 1 cyclotron-1 PET camera, 1 cyclotron-2 PET cameras, and 1 cyclotron-3 PET cameras systems, respectively, in an ordinary 8-hour PET operation, and 19.9, 25.5, and 31.2 studies per day, respectively, in a full 24-hour operation. The results indicate that no profit would accrue in an ordinary 8-hour operation. The annual profit and the break-even point for the total cost, including the initial investment, would be 530 million yen and 2.8 years, respectively, in a 24-hour operation with a 1 cyclotron-3 PET cameras system.
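    The underlying arithmetic is simple: with the 75,000-yen reimbursement fixed, the break-even study count and the implied annual cost are linked directly. The number of operating days per year below is an assumption, not a figure from the paper.

```python
REIMBURSEMENT = 75_000    # yen per FDG PET study (from the text)
OPERATING_DAYS = 250      # working days per year (assumed)

def breakeven_studies_per_day(annual_cost_yen):
    """Studies per day at which revenue equals annual cost."""
    return annual_cost_yen / (REIMBURSEMENT * OPERATING_DAYS)

# Conversely, a break-even point of 13.4 studies/day (1 cyclotron-1 camera)
# would imply an annual cost of roughly:
print(13.4 * REIMBURSEMENT * OPERATING_DAYS)   # ~251 million yen
```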

  8. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be implemented into printed photo books and greeting cards. We show that, surprisingly or not, pictures from videos are used much like classical images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The implementation of videos, generating QR codes and extracting relevant pictures from the video stream via a software implementation, was the content of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used.

  9. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many kinds of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed. The large-aperture test system for infrared and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, and the image quality of the large-field-of-view collimator and the test accuracy are thereby improved. Its performance matches that of comparable foreign systems at a much lower cost. It should find a good market.

  10. Assessment of the DoD Embedded Media Program

    DTIC Science & Technology

    2004-09-01

    Weapons Systems Video, Gun Camera Video, and Lipstick Cameras: A SECDEF and CJCS message to commanders stated, "Put in place mechanisms and processes ... of public communication activities." The 10 February 2003 PAG stated, "Use of lipstick and helmet-mounted cameras on combat sorties is approved"

  11. Post-coma persons emerging from a minimally conscious state with multiple disabilities make technology-aided phone contacts with relevant partners.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; D'Amico, Fiora; Buonocunto, Francesca; Sacco, Valentina; Didden, Robert

    2013-10-01

    Post-coma individuals emerging from a minimally conscious state with multiple disabilities may enjoy contact with relevant partners (e.g., family members and friends), but may not have easy access to them. These two single-case studies assessed whether those individuals could make contact with partners through computer-aided telephone technology and enjoy such contact. The technology involved a computer system with special software, a global system for mobile communication modem (GSM), and microswitch devices. In Study I, the computer system presented a 23-year-old man the names of the partners that he could contact, one at a time, automatically. Together with each partner's name, the system also presented the voice of the partner asking the man whether he wanted to call him or her. The man could (a) place a call to that partner by activating a camera-based microswitch through mouth movements or (b) bypass that partner and wait for the next one to be presented. In Study II, the system presented a 36-year-old man the partners' names only after he had activated his wobble microswitch with a hand movement. The man could place a call or bypass a partner as in Study I. The results showed that both men (a) were able to contact relevant partners through the technology, (b) seemed to enjoy their telephone-mediated communication contacts with the partners, and (c) showed preferences among the partners. Implications of the findings are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. View of 'Cape St. Mary' from 'Cape Verde'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    As part of its investigation of 'Victoria Crater,' NASA's Mars Exploration Rover Opportunity examined a promontory called 'Cape St. Mary' from the vantage point of 'Cape Verde,' the next promontory counterclockwise around the crater's deeply scalloped rim. This view of Cape St. Mary combines several exposures taken by the rover's panoramic camera into an approximately true-color mosaic.

    The upper portion of the crater wall contains a jumble of material tossed outward by the impact that excavated the crater. This vertical cross-section through the blanket of ejected material surrounding the crater was exposed by erosion that expanded the crater outward from its original diameter, according to scientists' interpretation of the observations. Below the jumbled material in the upper part of the wall are layers that survive relatively intact from before the crater-causing impact. Near the base of the Cape St. Mary cliff are layers with a pattern called 'crossbedding,' intersecting with each other at angles, rather than parallel to each other. Large-scale crossbedding can result from material being deposited as wind-blown dunes.

    The images combined into this mosaic were taken during the 970th Martian day, or sol, of Opportunity's Mars-surface mission (Oct. 16, 2006). The panoramic camera took them through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  13. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and using our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.

  14. Antenna Measurements: Test & Analysis of the Radiated Emissions from the NASA/Orion Spacecraft - Parachute System Simulator

    NASA Technical Reports Server (NTRS)

    Norgard, John D.

    2012-01-01

    For future NASA manned space exploration of the Moon and Mars, a blunt-body capsule, called the Orion Crew Exploration Vehicle (CEV), composed of a Crew Module (CM) and a Service Module (SM) with a parachute descent assembly, is planned for reentry back to Earth. A Capsule Parachute Assembly System (CPAS) is being developed for preliminary parachute drop tests at the Yuma Proving Ground (YPG) to simulate high-speed reentry to Earth from beyond Low Earth Orbit (LEO) and to provide measurements of landing parameters and parachute loads. The avionics systems on CPAS also provide the mission-critical firing events to deploy, reef, and release the parachutes in three stages (extraction, drogues, mains) using mortars and pressure cartridge assemblies. In addition, a Mid-Air Delivery System (MDS) is used to separate the capsule from the sled that ejects the capsule from the back of the drop plane. High-speed and high-definition cameras in a Video Camera System (VCS) film the drop-plane extraction and parachute landing events. To verify the electromagnetic compatibility (EMC) of the CPAS system against unintentional radiation, electromagnetic interference (EMI) measurements are being made inside a semi-anechoic chamber at NASA/JSC, at 1 m from the electronic components of the CPAS system. In addition, EMI measurements of the integrated CPAS system are being made inside a hangar at YPG. These near-field B-dot probe measurements on the surface of a parachute simulator (DART) are extrapolated outward to the standard 1 m distance for comparison with the MIL-STD radiated emissions limit.

  15. Namibia and Central Angola

    Atmospheric Science Data Center

    2013-04-15

    ... The images on the left are natural color (red, green, blue) images from MISR's vertical-viewing (nadir) camera. The images on the ... one of MISR's derived surface products. The radiance (light intensity) in each pixel of the so-called "top-of-atmosphere" images on ...

  16. What Juno will see at Jupiter South Pole Simulation

    NASA Image and Video Library

    2011-08-03

    This simulated view of the south pole of Jupiter illustrates the unique perspective of NASA's Juno mission. Juno's polar orbit will allow its camera, called JunoCam, to image Jupiter's clouds from a vantage point never accessed by other spacecraft.

  17. Microgravity combustion experiment using high altitude balloon.

    NASA Astrophysics Data System (ADS)

    Kan, Yuji

    At JAXA, a microgravity experiment system using a high-altitude balloon was developed to provide a good microgravity environment with a short turn-around time. This paper describes the microgravity experiment system and a combustion experiment that uses it. The balloon-operated vehicle (BOV) microgravity experiment system was developed from 2004 to 2009. Features of the BOV are: (1) a double-capsule structure, in which the outside and inside capsules are kept out of contact by three-axis drag-free control; (2) a spherical payload about 300 mm in diameter; and (3) a microgravity environment at the 10^-4 g level for about 30 seconds. However, the BOV payload was small and could not carry a large experiment module. In this study, building on those results, we established a new experimental system called the iBOV in order to accommodate a larger payload. Features of the iBOV are: (1) drag-free control in the vertical direction only; (2) a cylindrical payload about 300 mm in diameter and 700 mm in height; and (3) a microgravity environment at the 10^-3 to 10^-4 g level for about 30 seconds. The first experiment using the iBOV, selected for its technical demonstration, is the "Observation experiment of flame propagation behavior of the droplet column." We are studying the flame-propagation mechanism of fuel droplet arrays placed at regular intervals, and we have conducted microgravity experiments using an ESA TEXUS rocket and a drop tower. For this microgravity combustion experiment using a high-altitude balloon, we use the engineering model (EM) built for the TEXUS rocket experiment. The payload consists of a combustion vessel, droplet supporter, droplet generator, fuel syringe, igniter, digital camera, and high-speed camera. The payload was improved from the EM as follows: (1) a control unit was added; (2) internal batteries for the control unit and the combustion-vessel heater were added; and (3) the observation cameras were updated. In this experiment, the air in the combustion vessel is heated to 500 K before microgravity. During microgravity, we (1) generate five droplets on the droplet supporter, (2) move the droplets into the combustion vessel, and (3) ignite an edge droplet of the array using the igniter. During the combustion experiment, the cameras record movies of the combustion phenomena. We plan to conduct this experiment in May 2014.

  18. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light-ray information, thus allowing applications such as digital refocusing, depth estimation, and multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color-measurement precision, we perform an end-to-end optimization of the system model that includes light-source information, object information, optical-system information, plenoptic image processing, and color-estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color-measurement precision. We describe an application of our colorimetric camera to the color-shading evaluation of displays and show that it achieves a color accuracy of ΔE < 0.01.
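    The abstract does not state which ΔE formula is used; as a reference point, the simplest (CIE76) color difference between two measured XYZ values is the Euclidean distance in CIELAB, sketched below with an assumed D65 white point.

```python
import math

def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):  # D65 (assumed)
    """Convert CIE XYZ to CIELAB using the standard piecewise function."""
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(xyz1, xyz2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return math.dist(xyz_to_lab(*xyz1), xyz_to_lab(*xyz2))

print(delta_e76((41.0, 43.0, 46.0), (41.02, 43.01, 46.03)))  # small ΔE
```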

  19. Energy-efficient lighting system for television

    DOEpatents

    Cawthorne, Duane C.

    1987-07-21

    A light control system for a television camera comprises an artificial light control system which is cooperative with an iris control system. This artificial light control system adjusts the power to lamps illuminating the camera viewing area to provide only sufficient artificial illumination necessary to provide a sufficient video signal when the camera iris is substantially open.

  20. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  2. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projector setups, calibrated manually or with a camera, to reduce the cost of VR systems without a significant decrease in visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, increasing costs. We propose a low-cost, scalable, flexible and mobile solution that allows building complex VR systems that project images onto a variety of arbitrary surfaces such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image that is presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a few pixels of overlap among them. We present results with eight pico-projectors, with 7 lumens (LED) and a DLP 0.17 HVGA chipset.

  3. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, one or more television cameras, and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for the robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  4. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, one or more television cameras, and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for the robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  5. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a '4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use an integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  6. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    DOE PAGES

    Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya; ...

    2016-11-29

    Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a '4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use an integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  7. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics: read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU) are presented for the commercial digital camera Canon 5D MarkIII. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D MarkIII and the CCD ALTA E47 shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.

  8. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the location technique of beacon photogrammetry, the Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established respectively. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target coordinate system and the camera coordinate system could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were changed in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
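
    The pose-recovery step described here (a rotation matrix and translation vector from an ideal pin-hole model of four known beacons) is the classic Perspective-n-Point problem. The paper works in ZEMAX and MATLAB; a minimal Python sketch of the same computation, with hypothetical beacon coordinates and intrinsics, would look like:

        import numpy as np
        import cv2

        # Four beacon positions in the helicopter (target) frame, metres (hypothetical).
        object_pts = np.array([[-1.5, 0.8, 0.0], [1.5, 0.8, 0.0],
                               [1.5, -0.8, 0.0], [-1.5, -0.8, 0.0]])
        # Their measured pixel coordinates in one CCD image (hypothetical).
        image_pts = np.array([[310.2, 240.1], [410.7, 238.9],
                              [408.3, 301.5], [312.8, 303.0]])
        # Ideal pin-hole intrinsics: focal length in pixels and principal point.
        K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        R, _ = cv2.Rodrigues(rvec)  # rotation matrix: target frame -> camera frame
        print("R =\n", R, "\nt =", tvec.ravel())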

  9. Martian Surface Beneath Phoenix

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is an image of the Martian surface beneath NASA's Phoenix Mars Lander. The image was taken by Phoenix's Robotic Arm Camera (RAC) on the eighth Martian day of the mission, or Sol 8 (June 2, 2008). The light feature in the middle of the image below the leg is informally called 'Holy Cow.' The dust, shown in the dark foreground, has been blown off of 'Holy Cow' by Phoenix's thruster engines.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  10. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

    High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are currently replacing conventional cameras to a certain extent for static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in solid as well as fluid impact problems are presented.

  11. A New Tool for Quality Control

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Diffracto, Ltd. is now offering a new product inspection system that allows detection of minute flaws previously difficult or impossible to observe. Called D-Sight, it represents a revolutionary technique for inspection of flat or curved surfaces to find such imperfections as dings, dents and waviness. The system amplifies defects, making them highly visible to simplify decision making as to corrective measures or to identify areas that need further study. The CVA 3000 employs a camera, high-intensity lamps and a special reflective screen to produce a D-Sight image of light reflected from a surface. The image is captured and stored in a computerized vision system, then analyzed by a computer program. A live image of the surface is projected onto a video display and compared with a stored master image to identify imperfections. Localized defects measuring less than 1/1000 of an inch are readily detected.

  12. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  13. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  14. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as a camera sensor and an objective, and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  15. Deep Space Positioning System

    NASA Technical Reports Server (NTRS)

    Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)

    2016-01-01

    A single, compact, low-power deep space positioning system (DPS) configured to determine a location of a spacecraft anywhere in the solar system, and provide state information relative to the Earth, Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine a state of a spacecraft in a solar system. The second camera is located behind, or adjacent to, a secondary reflector of a first camera in a body of a telescope.

  16. Intensity-based readout of resonant-waveguide grating biosensors: Systems and nanostructures

    NASA Astrophysics Data System (ADS)

    Paulsen, Moritz; Jahns, Sabrina; Gerken, Martina

    2017-09-01

    Resonant waveguide gratings (RWG) - also called photonic crystal slabs (PCS) - have been established as reliable optical transducers for label-free biochemical assays as well as for cell-based assays. Current readout systems are based on mechanical scanning and spectrometric measurements with system sizes suitable for laboratory equipment. Here, we review recent progress in compact intensity-based readout systems for point-of-care (POC) applications. We briefly introduce PCSs as sensitive optical transducers and introduce different approaches for intensity-based readout systems. Photometric measurements have been realized with a simple combination of a light source and a photodetector. Recently a 96-channel, intensity-based readout system for both biochemical interaction analyses as well as cellular assays was presented employing the intensity change of a near cut-off mode. As an alternative for multiparametric detection, a camera system for imaging detection has been implemented. A portable, camera-based system of size 13 cm × 4.9 cm × 3.5 cm with six detection areas on an RWG surface area of 11 mm × 7 mm has been demonstrated for the parallel detection of six protein binding kinetics. The signal-to-noise ratio of this system corresponds to a limit of detection of 168 pM (24 ng/ml). To further improve the signal-to-noise ratio, advanced nanostructure designs are investigated for RWGs. Here, results on multiperiodic and deterministic aperiodic nanostructures are presented. These advanced nanostructures allow for the design of the number and wavelengths of the RWG resonances. In the context of intensity-based readout systems they are particularly interesting for the realization of multi-LED systems. These recent trends suggest that compact point-of-care systems employing disposable test chips with RWG functional areas may reach market in the near future.

  17. OPSO - The OpenGL based Field Acquisition and Telescope Guiding System

    NASA Astrophysics Data System (ADS)

    Škoda, P.; Fuchs, J.; Honsa, J.

    2006-07-01

    We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field- and slit-viewing CCD cameras with image intensifiers give only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include sliding averaging of hundreds of frames with bad pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross cuts of the seeing disk within a given intensity range, and many more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI, however, is dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using the shared memory segments.
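
    The core of the enhancement step, sliding averaging with bad-pixel masking and outlier removal, is straightforward to express in NumPy. A minimal sketch follows; the buffer handling, clipping threshold and the SciPy patching step are our own illustrative choices, not OPSO's actual code:

        import numpy as np
        from scipy.ndimage import median_filter

        def enhance(frames, bad_pixel_mask, k=3.0):
            """Average a stack of grabbed TV frames, rejecting outlier samples.

            frames: (N, H, W) stack of frames from the capture card
            bad_pixel_mask: (H, W) bool, True where the intensifier pixel is dead
            """
            stack = frames.astype(np.float32)
            med = np.median(stack, axis=0)
            mad = np.median(np.abs(stack - med), axis=0) + 1e-6
            outliers = np.abs(stack - med) > k * 1.4826 * mad   # robust sigma clipping
            avg = np.ma.masked_array(stack, mask=outliers).mean(axis=0).filled(0.0)
            # patch dead pixels from their local neighbourhood
            avg[bad_pixel_mask] = median_filter(avg, size=3)[bad_pixel_mask]
            return avg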

  18. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics of motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.

  19. ACL reconstruction

    MedlinePlus

    ... Your hamstrings are the muscles behind your knee. Tissue taken from a donor is called an allograft. The procedure is usually performed with the help of knee arthroscopy. With arthroscopy, a tiny camera is inserted into ... ligaments and other tissues of your knee. Your surgeon will make other ...

  20. Close-Up After Preparatory Test of Drilling on Mars

    NASA Image and Video Library

    2013-02-07

    After an activity called the "mini drill test" by NASA's Mars rover Curiosity, the rover's MAHLI camera recorded this view of the results. The test generated a ring of powdered rock for inspection in advance of the rover's first full drilling.

  1. Sample-Collection Drill Hole on Martian Sandstone Target Windjana

    NASA Image and Video Library

    2014-05-06

    This image from the Navigation Camera (Navcam) on NASA's Curiosity Mars rover shows two holes at top center drilled into a sandstone target called "Windjana." The farther hole, with the larger pile of tailings around it, is a full-depth sampling hole.

  2. Smartphone-based low light detection for bioluminescence application

    USDA-ARS's Scientific Manuscript database

    We report a smartphone-based device and an associated image-processing algorithm to maximize the sensitivity of standard smartphone cameras, which can detect the presence of single-digit pW of radiant flux. The proposed hardware and software, called bioluminescent-based analyte quantitation ...

  3. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  4. Wired and Wireless Camera Triggering with Arduino

    NASA Astrophysics Data System (ADS)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable for detecting the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. The presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.
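
    On the host side, firing such a wired trigger chain can be as simple as writing a byte to the Arduino's serial port. The sketch below is a hypothetical illustration only: it assumes firmware that pulses the cameras' trigger line on receiving b'T', and the port name and baud rate are placeholders:

        import time
        import serial  # pyserial

        port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)
        time.sleep(2.0)                 # let the board reset after the port opens

        for _ in range(10):             # fire ten synchronized exposures, one per second
            t0 = time.perf_counter()
            port.write(b"T")            # assumed firmware pulses the trigger output now
            port.flush()
            print(f"write+flush took {(time.perf_counter() - t0) * 1e6:.1f} us")
            time.sleep(1.0)

        port.close()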

  5. Dynamic photoelasticity by TDI imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    2001-06-01

    High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras to a certain extent in static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording, as well as system limitations, are explained in the paper.
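
    In TDI mode, charge packets are shifted down the columns in step with the moving image, so each scene line is integrated over many stages without motion blur. A toy NumPy model of that accumulate-and-shift readout is sketched below; the stage count and noise level are arbitrary, and perfect one-row-per-clock synchronization is assumed:

        import numpy as np

        def tdi_readout(scene, n_stages=96, noise_sigma=2.0, seed=0):
            """Simulate a TDI sensor: scene is (n_lines, width), one line per clock."""
            rng = np.random.default_rng(seed)
            n_lines, width = scene.shape
            stages = np.zeros((n_stages, width))       # charge packets; row 0 = entry stage
            out = []
            for t in range(n_lines):
                stages = np.roll(stages, 1, axis=0)    # charge shifts with the moving image
                if t >= n_stages:
                    out.append(stages[0].copy())       # fully integrated packet is read out
                stages[0] = 0.0                        # a fresh packet enters the sensor
                for k in range(min(t + 1, n_stages)):  # each packet re-exposes its own line
                    stages[k] += scene[t - k] + rng.normal(0.0, noise_sigma, width)
            return np.array(out)                       # SNR grows roughly as sqrt(n_stages)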

  6. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple camera system (up to 256 cameras) for NASA Kennedy installations in which camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that the benefits (such as improved video performance, immunity from electromagnetic interference and radio frequency interference, elimination of repeater stations, and more system configuration flexibility) can be realized by applying the proven fiber optic transmission concept. The control system will marry the lens, pan and tilt, and camera control functions into a modular, Local Area Network (LAN) based control network. Such a system does not exist commercially at present, since the Television Broadcast Industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.

  7. Levels of Autonomy and Autonomous System Performance Assessment for Intelligent Unmanned Systems

    DTIC Science & Technology

    2014-04-01

    LIDAR and camera sensors that is driven entirely by teleoperation would be AL 0. If that same robot used its LIDAR and camera data to generate a...obstacle detection, mapping, path planning 3 CMMAD semi-autonomous counter-mine system (Few 2010) Talon UGV, camera, LIDAR, metal detector...NCAP framework are performed on individual UMS components and do not require mission level evaluations. For example, bench testing of camera, LIDAR

  8. Performance of the Tachyon Time-of-Flight PET Camera

    NASA Astrophysics Data System (ADS)

    Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-02-01

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
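
    The practical meaning of a 314 ps coincidence resolution follows from the standard TOF relation dx = c*dt/2: each annihilation is localized to roughly 4.7 cm along its line of response, which is what drives the reported noise reduction. A one-line check:

        c = 299_792_458.0                     # speed of light, m/s
        dt = 314e-12                          # coincidence timing resolution (FWHM), s
        print(f"{100 * c * dt / 2:.1f} cm")   # -> 4.7 cm localization along the LOR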

  9. Performance of the Tachyon Time-of-Flight PET Camera.

    PubMed

    Peng, Q; Choong, W-S; Vu, C; Huber, J S; Janecek, M; Wilson, D; Huesman, R H; Qi, Jinyi; Zhou, Jian; Moses, W W

    2015-02-01

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.

  10. Performance of the Tachyon Time-of-Flight PET Camera

    PubMed Central

    Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-01

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon’s detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/− 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3. PMID:26594057

  11. Performance of the Tachyon Time-of-Flight PET Camera

    DOE PAGES

    Peng, Q.; Choong, W. -S.; Vu, C.; ...

    2015-01-23

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. We find that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.

  12. Similar on the Inside (pre-grinding)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This approximate true-color image taken by the panoramic camera on the Mars Exploration Rover Opportunity shows the rock called 'Pilbara,' located in the small crater dubbed 'Fram.' The rock appears to be dotted with the same 'blueberries,' or spherules, found at 'Eagle Crater.' Opportunity drilled into this rock with its rock abrasion tool. After analyzing the hole with the rover's scientific instruments, scientists concluded that Pilbara has a similar chemical make-up, and thus watery past, to rocks studied at Eagle Crater. This image was taken with the panoramic camera's 480-, 530- and 600-nanometer filters.

  13. Similar on the Inside (post-grinding)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This approximate true-color image taken by the panoramic camera on the Mars Exploration Rover Opportunity shows the hole drilled into the rock called 'Pilbara,' which is located in the small crater dubbed 'Fram.' Opportunity drilled into this rock with its rock abrasion tool. The rock appears to be dotted with the same 'blueberries,' or spherules, found at 'Eagle Crater.' After analyzing the hole with the rover's scientific instruments, scientists concluded that Pilbara has a similar chemical make-up, and thus watery past, to rocks studied at Eagle Crater. This image was taken with the panoramic camera's 480-, 530- and 600-nanometer filters.

  14. Observation of interaction of shock wave with gas bubble by image converter camera

    NASA Astrophysics Data System (ADS)

    Yoshii, M.; Tada, M.; Tsuji, T.; Isuzugawa, Kohji

    1995-05-01

    When a spark discharge occurs at the first focal point of a semiellipsoidal reflector located in water, a spherical shock wave is produced. Part of the wave spreads without reflecting off the reflector and is called the direct wave in this paper. Another part reflects off the semiellipsoid and converges near the second focal point; it is named the focusing wave and locally produces a high pressure. This phenomenon is applied in kidney stone disintegrators. However, there is concern that cavitation bubbles induced in the body by the expansion wave following the focusing wave will injure human tissue around the kidney stone. In this paper, in order to examine what happens when shock waves strike bubbles on human tissue, the behavior of an air bubble struck by the spherical shock wave is visualized by a schlieren system and photographed using an image converter camera. In addition, the variation of the pressure amplitude caused by the shock wave and the flow of water around the bubble is measured with a pressure probe.

  15. Performance of the Satellite Test Assistant Robot in JPL's Space Simulation Facility

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas; Long, Mark; Johnson, Ken; Siebes, Georg

    1995-01-01

    An innovative new telerobotic inspection system called STAR (the Satellite Test Assistant Robot) has been developed to assist engineers as they test new spacecraft designs in simulated space environments. STAR operates inside the ultra-cold, high-vacuum test chambers and provides engineers seated at a remote Operator Control Station (OCS) with high resolution video and infrared (IR) images of the flight articles under test. STAR was successfully proof tested in JPL's 25-ft (7.6-m) Space Simulation Chamber, where temperatures ranged from +85 C to -190 C and vacuum levels reached 5.1 x 10^-6 torr. STAR's IR camera was used to thermally map the entire interior of the chamber for the first time. STAR also made several unexpected and important discoveries about the thermal processes occurring within the chamber. Using a calibrated test fixture arrayed with ten sample spacecraft materials, the IR camera was shown to produce highly accurate surface temperature data. This paper outlines STAR's design and reports on significant results from the thermal vacuum chamber test.

  16. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    This paper first introduces the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Next, several pivotal designs of the imaging system are introduced, including the focal plane module, video signal processing, the imaging system controller, synchronous photography of the forward, nadir and backward cameras, and the line-matrix CCD of the nadir camera. Finally, test results for the LMCCD mapping camera imaging system are presented, as follows: the precision of synchronous photography between the forward, nadir and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30 degrees, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is controlled below 30 degrees C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane module temperature control and SNR, guaranteeing the precision needed for satellite photogrammetry.

  17. Image quality assessment for selfies with and without super resolution

    NASA Astrophysics Data System (ADS)

    Kubota, Aya; Gohshi, Seiichi

    2018-04-01

    With the advent of cellphone cameras, in particular on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: a front-facing and a rear camera. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is a recent technological advancement that increases image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are already available on the market. However, the effective use of the new SR technology has not yet been verified. Comparing the image quality with and without SR on a smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessments are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal to Noise Ratio (PSNR), do not always agree with how we perceive image/video quality. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of personnel expenses for observers, the results are highly reproducible when conducted under the right conditions and with statistical analysis. In this study, the subjective assessment results for selfie images are reported.
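
    The PSNR mentioned above as the typical objective metric is defined as 10*log10(MAX^2/MSE) between a reference and a test image; a minimal NumPy implementation:

        import numpy as np

        def psnr(reference, test, max_val=255.0):
            """Peak Signal-to-Noise Ratio in dB between two equally sized images."""
            mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")   # identical images
            return 10.0 * np.log10(max_val ** 2 / mse)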

  18. Optical Meteor Systems Used by the NASA Meteoroid Environment Office

    NASA Technical Reports Server (NTRS)

    Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.

    2015-01-01

    The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system, to study cm- and mm-size meteors respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and for informing the public on bright meteor events. The NASA Wide Field Meteor Network was established in December of 2012 with two cameras and expanded to eight cameras in December of 2014. The two-camera configuration saw 5470 meteors over two years of operation, and the network has detected 3423 meteors in the first five months of operation (Dec 12, 2014 - May 12, 2015) with eight cameras. We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20 degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis. Every morning the servers automatically generate an e-mail and web page containing an analysis of the previous night's events. The current status of the networks will be described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, will be discussed.

  19. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  20. Integrated inertial stellar attitude sensor

    NASA Technical Reports Server (NTRS)

    Brady, Tye M. (Inventor); Kourepenis, Anthony S. (Inventor); Wyman, Jr., William F. (Inventor)

    2007-01-01

    An integrated inertial stellar attitude sensor for an aerospace vehicle includes a star camera system, a gyroscope system, a controller system for synchronously integrating an output of said star camera system and an output of said gyroscope system into a stream of data, and a flight computer responsive to said stream of data for determining from the star camera system output and the gyroscope system output the attitude of the aerospace vehicle.

  1. The first satellite laser echoes recorded on the streak camera

    NASA Technical Reports Server (NTRS)

    Hamal, Karel; Prochazka, Ivan; Kirchner, Georg; Koidl, F.

    1993-01-01

    The application of a streak camera with a circular sweep to satellite laser ranging is described. The Modular Streak Camera system employing the circular sweep option was integrated into a conventional satellite laser system. Experimental satellite tracking and ranging was performed, and the first satellite laser echo streak camera records are presented.

  2. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.

  3. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.

  4. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
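
    The background-elimination step described in the patent (capture a frame before the laser fires, then keep only the pixels that changed) is plain frame differencing; a NumPy sketch, with an arbitrary threshold value of our own choosing:

        import numpy as np

        def find_laser_spot(frame_off, frame_on, threshold=40):
            """Locate the laser spot by differencing frames taken with the laser off/on."""
            diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
            diff[diff < threshold] = 0            # discard common (unchanged) pixels
            if not diff.any():
                return None                       # no spot found
            y, x = np.unravel_index(np.argmax(diff), diff.shape)
            return x, y

        def disparity(spot, reference):
            """Pixel offset between the spot and the fixed reference point used for ranging."""
            return spot[0] - reference[0], spot[1] - reference[1]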

  5. Juno on Jupiter Doorstep

    NASA Image and Video Library

    2016-06-24

    NASA's Juno spacecraft obtained this color view on June 21, 2016, at a distance of 6.8 million miles (10.9 million kilometers) from Jupiter. As Juno makes its initial approach, the giant planet's four largest moons -- Io, Europa, Ganymede and Callisto -- are visible, and the alternating light and dark bands of the planet's clouds are just beginning to come into view. Juno is approaching over Jupiter's north pole, affording the spacecraft a unique perspective on the Jupiter system. Previous missions that imaged Jupiter on approach saw the system from much lower latitudes, closer to the planet's equator. The scene was captured by the mission's imaging camera, called JunoCam, which is designed to acquire high resolution views of features in Jupiter's atmosphere from very close to the planet. http://photojournal.jpl.nasa.gov/catalog/PIA20701

  6. Advances in instrumentation at the W. M. Keck Observatory

    NASA Astrophysics Data System (ADS)

    Adkins, Sean M.; Armandroff, Taft E.; Johnson, James; Lewis, Hilton A.; Martin, Christopher; McLean, Ian S.; Wizinowich, Peter

    2012-09-01

    In this paper we describe both recently completed instrumentation projects and our current development efforts in terms of their role in the strategic plan, the key science areas they address, and their performance as measured or predicted. Projects reaching completion in 2012 include MOSFIRE, a near IR multi-object spectrograph, a laser guide star adaptive optics facility on the Keck I telescope, and an upgrade to the guide camera for the HIRES instrument on Keck I. Projects in development include a new seeing limited integral field spectrograph for the visible wavelength range called the Keck Cosmic Web Imager (KCWI), an upgrade to the telescope control systems on both Keck telescopes, a near-IR tip/tilt sensor for the Keck I adaptive optics system, and a new grating for the OSIRIS integral field spectrograph.

  7. The readout system for the ArTeMis camera

    NASA Astrophysics Data System (ADS)

    Doumayrou, E.; Lortholary, M.; Dumaye, L.; Hamon, G.

    2014-07-01

    During ArTeMiS observations at the APEX telescope (Chajnantor, Chile), 5760 bolometric pixels from 20 arrays at 300 mK, corresponding to 3 submillimeter focal planes at 450 μm, 350 μm and 200 μm, have to be read out simultaneously at 40 Hz. The readout system, made of electronics and software, is the full chain from the cryostat to the telescope. The readout electronics consist of cryogenic buffers at 4 K (NABU), based on CMOS technology, and of warm electronic acquisition systems called BOLERO. The bolometric signal given by each pixel has to be amplified, sampled, converted, time-stamped and formatted into data packets by the BOLERO electronics. The time stamping is obtained by decoding an IRIG-B signal provided by APEX and is key to ensuring the synchronization of the data with the telescope. Specifically developed for ArTeMiS, BOLERO is an assembly of analogue and digital FPGA boards connected directly on top of the cryostat. Two detector arrays (18x16 pixels), one NABU and one BOLERO interconnected by ribbon cables constitute the unit of the electronic architecture of ArTeMiS. In total, the 20 detectors of the three focal planes are read by 10 BOLEROs. The software runs on a Linux operating system, on 2 back-end computers (called BEAR) which are small and robust PCs with solid-state disks. They gather the 10 BOLERO data fluxes and reconstruct the focal plane images. When the telescope scans the sky, the acquisitions are triggered by a specific network protocol. This interface with APEX makes it possible to synchronize the acquisition with the observations on sky: the time-stamped data packets are sent during the scans to the APEX software that builds the observation FITS files. A graphical user interface enables the setting of the camera and the real-time display of the focal plane images, which is essential in laboratory and commissioning phases. The software is a combination of C++, LabVIEW and Python, whose qualities are used respectively for speed, powerful graphical interfacing and scripting. The commands to the camera can be sequenced in Python scripts. The paper describes the whole electronic and software readout chain designed to meet the specific requirements of ArTeMiS, and its performance. The specific options used are explained; for example, the limited room in the Cassegrain cabin of APEX led us to a quite compact design. This system was successfully used in summer 2013 for the commissioning and the first scientific observations with a preliminary set of 4 detectors at 350 μm.

  8. Natural user interface as a supplement of the holographic Raman tweezers

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel

    2014-09-01

    Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via a mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras or the Kinect game console instead. We propose a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes signals from the sensors and sends control commands to the HRT, which subsequently controls the positions of the trapping beams, the micropositioning stage and the acquisition system for Raman spectra. The system allows various modes of operation appropriate for specific tasks. Virtual tools (called "pin" and "tweezers") used for manipulating particles are displayed on a transparent overlay window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful when both hands are busy. The proposed methods make manual control of HRT more efficient and are also a good platform for its future semi-automated and fully automated operation.

  9. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
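
    As a flavour of what such a rule base can look like, here is a toy single-axis fuzzy controller: triangular memberships over the target's pixel offset, three rules mapping error to pan rate, and centroid defuzzification. All membership breakpoints and output rates are invented for illustration, not taken from the paper:

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

        def fuzzy_pan_rate(err_px):
            """Map the target's horizontal pixel error to a pan rate in deg/s."""
            mu = {"neg": tri(err_px, -320, -160, 0),       # target far left
                  "zero": tri(err_px, -80, 0, 80),         # target centred
                  "pos": tri(err_px, 0, 160, 320)}         # target far right
            rate = {"neg": -5.0, "zero": 0.0, "pos": 5.0}  # rule consequents
            den = sum(mu.values())
            return sum(mu[k] * rate[k] for k in mu) / den if den else 0.0

        print(fuzzy_pan_rate(60.0))   # small positive error -> gentle pan right (3.0 deg/s)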

  10. Variation in detection among passive infrared triggered-cameras used in wildlife research

    USGS Publications Warehouse

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.

  11. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Position System (GPS)-denied environments.
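
    With the array calibrated, the 3D tracking reduces to multi-view triangulation of the detected laser marker. A minimal two-camera OpenCV sketch follows, with hypothetical intrinsics and a 0.5 m baseline (the real system uses its own large-scale outdoor calibration and longer baselines):

        import numpy as np
        import cv2

        K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera 1 at origin
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # 0.5 m baseline

        # Pixel coordinates of the same laser marker in both views, shape (2, N).
        pts1 = np.array([[402.3], [241.8]])
        pts2 = np.array([[371.9], [241.6]])

        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4xN result
        X = (X_h[:3] / X_h[3]).ravel()
        print("marker position (m):", X)                  # depth here comes out near 20 m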

  12. Mars Rock Rocknest 3 Imaged by Curiosity ChemCam

    NASA Image and Video Library

    2012-11-26

    This view of a rock called Rocknest 3 combines two images taken by the Chemistry and Camera ChemCam instrument on the NASA Mars rover Curiosity and indicates five spots where ChemCam had hit the rock with laser pulses to check its composition.

  13. In-situ Image Acquisition Strategy on Asteroid Surface by MINERVA Rover in HAYABUSA Mission

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, T.; Sasaki, S.; Yanagisawa, M.

    Institute of Space and Astronautical Science (ISAS) launched the engineering test spacecraft ``HAYABUSA'' (formerly called ``MUSES-C'') to the near-Earth asteroid ``ITOKAWA (1998SF36)'' on May 9, 2003. HAYABUSA will reach the target asteroid after two years' interplanetary cruise and will descend onto the asteroid surface in 2005 to acquire some fragments, which will be brought back to the Earth in 2007. A tiny rover called ``MINERVA'' is aboard the HAYABUSA spacecraft. MINERVA is the first asteroid rover in the world. It will be deployed onto the surface immediately before the spacecraft touches the asteroid to acquire fragments. It will then autonomously move over the surface by hopping for a couple of days, and the data obtained at multiple sites will be transmitted to the Earth via the mother spacecraft. Small cameras and thermometers are installed in the rover. This paper describes the image acquisition strategy of the cameras installed in the rover.

  14. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
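
    The optimization step, refining the camera-to-camera transform by minimizing reprojection error with Levenberg-Marquardt, can be sketched generically. The toy below is not the paper's pipeline: it assumes a distortion-free pinhole model and synthetic target points, and recovers a known 6-DOF transform with scipy's LM solver.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])

        def project(X_cam):
            x = (K @ X_cam.T).T
            return x[:, :2] / x[:, 2:3]

        def residuals(params, X_cam1, uv_cam2):
            # Reprojection error of camera-1 points observed by camera 2
            rvec, t = params[:3], params[3:]
            R = Rotation.from_rotvec(rvec).as_matrix()
            return (project(X_cam1 @ R.T + t) - uv_cam2).ravel()

        # Synthetic truth: camera 2 rotated 5 deg about y, offset 2 m in x
        R_true = Rotation.from_euler("y", 5, degrees=True)
        t_true = np.array([2.0, 0.0, 0.1])
        X   = np.random.default_rng(1).uniform([-1, -1, 4], [1, 1, 8], (50, 3))
        uv2 = project(X @ R_true.as_matrix().T + t_true)

        fit = least_squares(residuals, x0=np.zeros(6), args=(X, uv2), method="lm")
        print("rotation vector:", fit.x[:3], "translation:", fit.x[3:])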

  16. UAV Cameras: Overview and Geometric Calibration Benchmark

    NASA Astrophysics Data System (ADS)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark that checks selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the geometrical calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. In such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  16. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e., separation distance, camera angles, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' position in space for performing accurate 3D-DIC calibration and measurements.
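
    Given what this sensor head measures, each camera's orientation from its IMU and the intercamera distance from the radar, the stereo extrinsics follow by composition. The sketch below is a hypothetical illustration: it additionally assumes the baseline direction is available, which the abstract does not state.

        import numpy as np
        from scipy.spatial.transform import Rotation

        # Orientations of the two cameras in a shared world frame (from IMUs)
        R1 = Rotation.from_euler("xyz", [2.0, 10.0, 0.5], degrees=True)
        R2 = Rotation.from_euler("xyz", [1.5, -8.0, 0.0], degrees=True)

        distance = 3.25                          # intercamera distance (m), from radar
        direction = np.array([1.0, 0.0, 0.05])   # assumed baseline direction (world)
        direction /= np.linalg.norm(direction)

        # Relative rotation cam1 -> cam2 and baseline expressed in cam1 coordinates
        R_rel = (R1.inv() * R2).as_matrix()
        t_in_cam1 = R1.inv().apply(distance * direction)
        print(R_rel, t_in_cam1)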

  17. Depth Perception In Remote Stereoscopic Viewing Systems

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Further predicts dynamic stereoscopic depth distortion reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.
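
    For context, a standard first-order stereo relation (not taken from the report itself) connects depth resolution to the same geometric quantities the study varies. With object distance Z, intercamera baseline b, focal length f, and disparity measurement error \delta d, the depth error is approximately

        \[ \delta Z \;\approx\; \frac{Z^{2}}{b\,f}\,\delta d , \]

    which makes explicit why increasing the intercamera distance or the focal length improves depth performance at a given viewing distance.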

  18. Multi-color pyrometry imaging system and method of operating the same

    DOEpatents

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
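
    Sequential imaging in two wavelength bands is what enables ratio (two-color) pyrometry. The sketch below shows the textbook Wien-approximation relation such a filtering scheme supports, not the patent's processing; the wavelengths are illustrative, and equal emissivity in both bands (a graybody) is assumed so that emissivity cancels in the ratio.

        import numpy as np

        C2 = 1.4388e-2                      # second radiation constant, m*K

        def ratio_temperature(i1, i2, lam1, lam2):
            """Two-color pyrometry temperature under the Wien approximation.

            i1, i2 : band signals at wavelengths lam1, lam2 (in metres),
            assuming equal emissivity in both bands.
            """
            num = C2 * (1.0 / lam1 - 1.0 / lam2)
            den = 5.0 * np.log(lam2 / lam1) - np.log(i1 / i2)
            return num / den

        # Round trip: synthesize Wien-law signals at 2000 K and recover T
        lam1, lam2, T = 0.6e-6, 0.9e-6, 2000.0
        wien = lambda lam: lam**-5 * np.exp(-C2 / (lam * T))
        print(ratio_temperature(wien(lam1), wien(lam2), lam1, lam2))   # ~2000.0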

  19. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  20. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

    focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, pixel pitch 38 µm, focal length of 1.8 m, FOV of 5.4x5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  1. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
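
    The paper's noniterative algorithm goes further, recovering the screen-camera homography without knowing the projector poses, but its basic currency is the plane-induced homography. The sketch below shows only that standard building block: direct linear transform (DLT) estimation of a homography from point correspondences, round-tripped on synthetic data.

        import numpy as np

        def homography_dlt(src, dst):
            """Estimate the 3x3 homography mapping src -> dst (N >= 4 points)."""
            rows = []
            for (x, y), (u, v) in zip(src, dst):
                rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
                rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
            _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
            H = Vt[-1].reshape(3, 3)
            return H / H[2, 2]

        # Map the unit square through a known H, then re-estimate it
        H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
        src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.3, 0.7]], dtype=float)
        p   = np.hstack([src, np.ones((5, 1))]) @ H_true.T
        dst = p[:, :2] / p[:, 2:3]
        print(homography_dlt(src, dst))      # ~H_true (up to scale)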

  2. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
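
    Assigning a minimal set of PTZ cameras to targets under criteria such as distance and occlusion is, at its core, a cost-matrix assignment problem. The toy below solves one with the Hungarian algorithm and invented costs; it illustrates the kind of decision involved, not ACT-Vision's actual scheduler.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Hypothetical cost of pointing camera i at target j: distance plus a
        # large penalty for occluded views (both taken as given here)
        distance = np.array([[ 40.0,  90.0, 120.0],
                             [ 70.0,  35.0, 150.0],
                             [110.0,  60.0,  30.0]])
        occluded = np.array([[0, 1, 0],
                             [0, 0, 1],
                             [0, 0, 0]])
        cost = distance + 1e3 * occluded

        cams, targets = linear_sum_assignment(cost)
        for c, t in zip(cams, targets):
            print(f"camera {c} -> target {t} (cost {cost[c, t]:.0f})")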

  3. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has big potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software; one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world co-ordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
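
    OpenCV ships a stock HOG descriptor with a pretrained pedestrian SVM, which makes the HOG detection principle mentioned above easy to demonstrate. The snippet below is generic OpenCV usage with a hypothetical file name, not the authors' module.

        import cv2

        # Stock HOG descriptor with OpenCV's pretrained people detector
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        frame = cv2.imread("crossing.jpg")            # hypothetical video frame
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("crossing_detections.jpg", frame)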

  4. Automatic lightning detection and photographic system

    NASA Technical Reports Server (NTRS)

    Wojtasinski, R. J.; Holley, L. D.; Gray, J. L.; Hoover, R. B. (Inventor)

    1972-01-01

    A system is presented for monitoring and recording lightning strokes within a predetermined area with a camera having an electrically operated shutter with means for advancing the film in the camera after activating the shutter. The system includes an antenna for sensing lightning strikes which, in turn, generates a signal that is fed to an electronic circuit which generates signals for operating the shutter of the camera. Circuitry is provided for preventing activation of the shutter as the film in the camera is being advanced.

  5. Airborne ballistic camera tracking systems

    NASA Technical Reports Server (NTRS)

    Redish, W. L.

    1976-01-01

    An operational airborne ballistic camera tracking system was tested for operational and data reduction feasibility. The acquisition and data processing requirements of the system are discussed. Suggestions for future improvements are also noted. A description of the data reduction mathematics is outlined. Results from a successful reentry test mission are tabulated. The test mission indicated that airborne ballistic camera tracking systems are feasible.

  6. Development of an Extra-vehicular (EVA) Infrared (IR) Camera Inspection System

    NASA Technical Reports Server (NTRS)

    Gazarik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Pandolf, John; Jenkins, Rusty; Yates, Rusty

    2006-01-01

    Designed to fulfill a critical inspection need for the Space Shuttle Program, the EVA IR Camera System can detect cracks and subsurface defects in the Reinforced Carbon-Carbon (RCC) sections of the Space Shuttle's Thermal Protection System (TPS). The EVA IR Camera performs this detection by taking advantage of the natural thermal gradients induced in the RCC by solar flux and thermal emission from the Earth. This instrument is a compact, low-mass, low-power solution (1.2 cm3, 1.5 kg, 5.0 W) for TPS inspection that exceeds existing requirements for feature detection. Taking advantage of ground-based IR thermography techniques, the EVA IR Camera System provides the Space Shuttle program with a solution that can be accommodated by the existing inspection system. The EVA IR Camera System augments the visible and laser inspection systems and finds cracks and subsurface damage that are not measurable by the other sensors, and thus fills a critical gap in the Space Shuttle's inspection needs. This paper discusses the on-orbit RCC inspection measurement concept and requirements, and then presents a detailed description of the EVA IR Camera System design.

  7. A design for living technology: experiments with the mind time machine.

    PubMed

    Ikegami, Takashi

    2013-01-01

    Living technology aims to help people expand their experiences in everyday life. The environment offers people ways to interact with it, which we call affordances. Living technology is a design for new affordances. When we experience something new, we remember it by the way we perceive and interact with it. Recent studies in neuroscience have led to the idea of a default mode network, which is a baseline activity of a brain system. The autonomy of artificial life must be understood as a sort of default mode that self-organizes its baseline activity, preparing for its external inputs and its interaction with humans. I thus propose a method for creating a suitable default mode as a design principle for living technology. I built a machine called the mind time machine (MTM), which runs continuously for 10 h per day and receives visual data from its environment using 15 video cameras. The MTM receives and edits the video inputs while it self-organizes the momentary now. Its base program is a neural network that includes chaotic dynamics inside the system and a meta-network that consists of video feedback systems. Using this system as the hardware and a default mode network as a conceptual framework, I describe the system's autonomous behavior. Using the MTM as a testing ground, I propose a design principle for living technology.

  8. A photoelastic modulator-based birefringence imaging microscope for measuring biological specimens

    NASA Astrophysics Data System (ADS)

    Freudenthal, John; Leadbetter, Andy; Wolf, Jacob; Wang, Baoliang; Segal, Solomon

    2014-11-01

    The photoelastic modulator (PEM) has been applied to a variety of polarimetric measurements. However, nearly all such applications use point measurements where each point (spot) on the sample is measured one at a time. The main challenge in employing the PEM in a camera-based imaging instrument is that the PEM modulates too fast for typical cameras. The PEM modulates at tens of kHz. To capture the specific polarization information that is carried on the modulation frequency of the PEM, the camera needs to be at least ten times faster. However, the typical frame rates of common cameras are only in the tens or hundreds of frames per second. In this paper, we report a PEM-camera birefringence imaging microscope. We use the so-called stroboscopic illumination method to overcome the incompatibility of the high frequency of the PEM with the relatively slow frame rate of a camera. We trigger the LED light source using a field-programmable gate array (FPGA) in synchrony with the modulation of the PEM. We show the measurement results of several standard birefringent samples as a part of the instrument calibration. Furthermore, we show results observed in two birefringent biological specimens, a human skin tissue that contains collagen and a slice of mouse brain that contains bundles of myelinated axonal fibers. Novel applications of this PEM-based birefringence imaging microscope to both research communities and industrial applications are being tested.

  9. Observations of the Perseids 2013 using SPOSH cameras

    NASA Astrophysics Data System (ADS)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle, having a mass of approximately 1 g or larger, with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer observing the activity of the Perseids meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip with a pixel resolution of 1024x1024. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal), making the monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitudes on the Greek Peloponnese peninsula. The baseline of ∼50 km between the two observing stations ensures a large overlapping area of the cameras' fields of view, allowing the triangulation of approximately every meteor captured by the two observing systems. The acquired data will be reduced using dedicated software developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories and photometric properties of the processed double-station meteors will be presented at the conference. Furthermore, a first-order statistical analysis of the meteors processed during the 2012 and the new 2013 campaigns will be presented [1].

  10. In vivo imaging of cerebral hemodynamics and tissue scattering in rat brain using a surgical microscope camera system

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Kanie, Takuya; Mustari, Afrina; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu; Kokubo, Yasuaki

    2018-02-01

    We investigated a rapid imaging method to monitor the spatial distribution of the total hemoglobin concentration (CHbT), the tissue oxygen saturation (StO2), and the scattering power b in the expression mus' = a*lambda^(-b) for the scattering parameters in the cerebral cortex using a digital red-green-blue camera. In the method, a Monte Carlo simulation (MCS) of light transport in brain tissue is used to specify a relation among the RGB values and the concentration of oxygenated hemoglobin (CHbO), that of deoxygenated hemoglobin (CHbR), and the scattering power b. In the present study, we performed sequential recordings of RGB images of the in vivo exposed brain of rats while changing the fraction of inspired oxygen (FiO2), using a surgical microscope camera system. The time courses of CHbO, CHbR, CHbT, and StO2 indicated the well-known physiological responses in the cerebral cortex. On the other hand, a fast decrease in the scattering power b was observed immediately after respiratory arrest, which is similar to the negative deflection of the extracellular DC potential, the so-called anoxic depolarization. The DC shift coincides with a rise in extracellular potassium and can evoke cell deformation generated by water movement between intracellular and extracellular compartments, and hence changes in light scattering by tissue. Therefore, the decrease in the scattering power b after respiratory arrest is indicative of changes in light scattering by tissue. The results of this study indicate the potential of the method to evaluate pathophysiological conditions and the loss of tissue viability in brain tissue.
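
    Extracting the scattering power b from the stated power law mus'(lambda) = a*lambda^(-b) reduces to a linear fit in log-log space. The round trip below uses synthetic values; the paper's actual inversion estimates b jointly with the hemoglobin concentrations through the Monte Carlo model.

        import numpy as np

        # Wavelengths (nm) and synthetic reduced scattering values a*lam^-b
        lam = np.array([450.0, 500.0, 550.0, 600.0, 650.0])
        a_true, b_true = 5.0e6, 1.3
        musp = a_true * lam ** (-b_true)

        # log(mus') = log(a) - b*log(lambda): ordinary least squares on the logs
        slope, intercept = np.polyfit(np.log(lam), np.log(musp), 1)
        print("b =", -slope, " a =", np.exp(intercept))    # recovers 1.3 and 5e6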

  11. Optical analysis of electro-optical systems by MTF calculus

    NASA Astrophysics Data System (ADS)

    Barbarini, Elisa Signoreto; Dos Santos, Daniel, Jr.; Stefani, Mário Antonio; Yasuoka, Fátima Maria Mitsue; Castro Neto, Jarbas C.; Rodrigues, Evandro Luís Linhari

    2011-08-01

    One of the widely used methods for performance analysis of an optical system is the determination of the Modulation Transfer Function (MTF). The MTF represents a quantitative and direct measure of image quality and, besides being an objective test, it can be used on concatenated optical systems. This paper presents the application of software called SMTF (software modulation transfer function), built on C++ and OpenCV, for MTF calculation on electro-optical systems. Through this technique, it is possible to develop a specific method to measure the real-time performance of a digital fundus camera, an infrared sensor and an ophthalmological surgery microscope. Each optical instrument mentioned has a particular device to measure the MTF response, which is being developed. The MTF information then assists the analysis of the optical system alignment and also defines its resolution limit from the MTF graph. The result obtained from the implemented software is compared with the theoretical MTF curve of the analyzed systems.
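
    A common way such software computes the MTF is from an edge image: differentiate the edge-spread function (ESF) into the line-spread function (LSF), then take the Fourier magnitude. The 1-D sketch below uses a synthetic Gaussian-blurred edge so the result can be checked against the closed-form Gaussian MTF; it illustrates the principle, not the SMTF implementation.

        import numpy as np
        from scipy.special import erf

        x = np.arange(-64, 64)
        sigma = 2.0                                       # blur width in pixels
        esf = 0.5 * (1 + erf(x / (sigma * np.sqrt(2))))   # blurred step edge

        lsf = np.gradient(esf)                            # line-spread function
        lsf /= lsf.sum()                                  # unit area
        mtf = np.abs(np.fft.rfft(lsf))                    # MTF = |FT of LSF|
        freq = np.fft.rfftfreq(lsf.size)                  # cycles per pixel

        # Gaussian-blur MTF is exp(-2*(pi*f*sigma)^2); compare near f = 0.1 cy/px
        i = np.argmin(np.abs(freq - 0.1))
        print(mtf[i], np.exp(-2 * (np.pi * freq[i] * sigma) ** 2))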

  12. BRIC - Brown works with middeck experiment

    NASA Image and Video Library

    1997-08-12

    S85-E-5058 (12 August 1997) --- Astronaut Curtis L. Brown, Jr., commander, performs operations with an experiment called Biological Research in Canisters (BRIC) operations on the mid-deck of the Space Shuttle Discovery during flight day six. The photograph was taken with the Electronic Still Camera (ESC).

  13. First Imaging of Laser-Induced Spark on Mars

    NASA Image and Video Library

    2014-07-16

    NASA's Curiosity Mars rover used the Mars Hand Lens Imager (MAHLI) camera on its arm to catch the first images of sparks produced by the rover's laser being shot at a rock on Mars. The left image is from before the laser zapped this rock, called Nova.

  14. View of 'Cape St. Mary' from 'Cape Verde' (Altered Contrast)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    As part of its investigation of 'Victoria Crater,' NASA's Mars Exploration Rover Opportunity examined a promontory called 'Cape St. Mary' from the vantage point of 'Cape Verde,' the next promontory counterclockwise around the crater's deeply scalloped rim. This view of Cape St. Mary combines several exposures taken by the rover's panoramic camera into an approximately true-color mosaic with contrast adjusted to improve the visibility of details in shaded areas.

    The upper portion of the crater wall contains a jumble of material tossed outward by the impact that excavated the crater. This vertical cross-section through the blanket of ejected material surrounding the crater was exposed by erosion that expanded the crater outward from its original diameter, according to scientists' interpretation of the observations. Below the jumbled material in the upper part of the wall are layers that survive relatively intact from before the crater-causing impact. Near the base of the Cape St. Mary cliff are layers with a pattern called 'crossbedding,' intersecting with each other at angles, rather than parallel to each other. Large-scale crossbedding can result from material being deposited as wind-blown dunes.

    The images combined into this mosaic were taken during the 970th Martian day, or sol, of Opportunity's Mars-surface mission (Oct. 16, 2006). The panoramic camera took them through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  15. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscopes use one camera to capture the surface image of the intestine. They can observe an abnormal point but cannot capture the exact information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so two cameras cannot capture the image information completely. To solve this problem, this research provides a new kind of capsule endoscope for capturing 3D images: 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is an increase of the viewing range by up to 2.99 times with respect to the two-camera system. The system, together with a 3D monitor, provides exact information about symptom points, helping doctors diagnose disease.

  16. A Versatile Time-Lapse Camera System Developed by the Hawaiian Volcano Observatory for Use at Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Orr, Tim R.; Hoblitt, Richard P.

    2008-01-01

    Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With the recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the use of sophisticated telemetry required by the networked camera systems is not practical. This report describes the latest generation (as of 2008) time-lapse camera system used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.

  17. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to our best knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system shows high performance in terms of accuracy, user convenience and interest.

  18. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to our best knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system shows high performance in terms of accuracy, user convenience and interest. PMID:27589768

  19. The Orbiter camera payload system's large-format camera and attitude reference system

    NASA Technical Reports Server (NTRS)

    Schardt, B. B.; Mollberg, B. H.

    1985-01-01

    The Orbiter camera payload system (OCPS) is an integrated photographic system carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a large-format camera (LFC), a precision wide-angle cartographic instrument capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. A secondary and supporting system to the LFC is the attitude reference system (ARS), a dual-lens stellar camera array (SCA) and camera support structure. The SCA is a 70 mm film system that is rigidly mounted to the LFC lens support structure and, through the simultaneous acquisition of two star fields with each earth viewing LFC frame, makes it possible to precisely determine the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high-precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment. The full OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on Oct. 5, 1984, as a major payload aboard the STS-41G mission.

  20. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George

    1986-01-07

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  1. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J. C.; Rowe, R. Wanda; Zubal, I. George

    1986-01-01

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  2. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    USGS Publications Warehouse

    Hobbs, Michael T.; Brehme, Cheryl S.

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  3. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    PubMed

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  4. ARNICA, the Arcetri Near-Infrared Camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Bilotti, V.; Bonaccini, D.; del Vecchio, C.; Gennari, S.; Hunt, L. K.; Marcucci, G.; Stanga, R.

    1996-04-01

    ARNICA (ARcetri Near-Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 microns that the Arcetri Observatory has designed and built for the Infrared Telescope TIRGO located at Gornergrat, Switzerland. We describe the mechanical and optical design of the camera, and report on the astronomical performance of ARNICA as measured during the commissioning runs at the TIRGO (December, 1992 to December 1993), and an observing run at the William Herschel Telescope, Canary Islands (December, 1993). System performance is defined in terms of efficiency of the camera+telescope system and camera sensitivity for extended and point-like sources.

  5. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.

  6. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
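
    The register-then-blend pipeline above (feature matching, homography estimation, warping) maps directly onto OpenCV primitives. The sketch below substitutes ORB for SURF, since SURF ships only in the nonfree opencv-contrib builds, uses hypothetical file names, and overwrites rather than boundary-resamples at the seam.

        import cv2
        import numpy as np

        img1 = cv2.imread("cam_a.jpg")                 # hypothetical overlapping views
        img2 = cv2.imread("cam_b.jpg")

        orb = cv2.ORB_create(2000)                     # free stand-in for SURF
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

        pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

        # Warp the second view into the first view's frame, then paste img1 on top
        h, w = img1.shape[:2]
        pano = cv2.warpPerspective(img2, H, (2 * w, h))
        pano[:h, :w] = img1
        cv2.imwrite("stitched.jpg", pano)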

  7. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.

  8. Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)

    NASA Astrophysics Data System (ADS)

    Mollberg, Bernard H.

    1981-11-01

    The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC) which is a precision wide-angle cartographic instrument that is capable of producing high resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.

  9. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows its movements to be automated and programmed. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous, eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved, based on data from the eye tracking interface, to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.

  10. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by feeding both the visible light and the FIR camera images into the CNN. This, however, takes longer to process and makes the system structure more complex, as the CNN needs to process both camera images. This research adaptively selects the more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.
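
    The selection step can be pictured as a small rule base over scene cues. The toy Mamdani-style sketch below is purely illustrative: the inputs (overall brightness, pedestrian/background thermal contrast), memberships and rule consequents are invented, not the paper's FIS.

        import numpy as np

        def tri(x, a, b, c):
            # Triangular membership function peaking at b
            return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

        def select_camera(brightness, thermal_contrast):
            """Toy fuzzy selection between visible and FIR candidates.

            brightness       : scene illumination, 0..1
            thermal_contrast : pedestrian/background temperature contrast, 0..1
            Returns a score in [0, 1]; > 0.5 favours the visible-light image.
            """
            dark, bright    = tri(brightness, -1, 0, 1), tri(brightness, 0, 1, 2)
            low_tc, high_tc = tri(thermal_contrast, -1, 0, 1), tri(thermal_contrast, 0, 1, 2)

            # Rules: bright scene -> visible; dark with thermal contrast -> FIR;
            # dark and low thermal contrast -> weakly favour visible.
            w   = np.array([bright, min(dark, high_tc), min(dark, low_tc)])
            out = np.array([1.0, 0.0, 0.6])          # rule consequents (visible-ness)
            return float(np.dot(w, out) / (w.sum() + 1e-9))

        print(select_camera(0.8, 0.2))   # daytime -> visible
        print(select_camera(0.1, 0.9))   # night, strong thermal contrast -> FIR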

  11. Video quality of 3G videophones for telephone cardiopulmonary resuscitation.

    PubMed

    Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander

    2008-01-01

    We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.

  12. MS Hadfield works on the SSRMS in the SLP during the first EVA for STS-100

    NASA Image and Video Library

    2001-04-22

    S100-E-5236 (22 April 2001) --- Astronaut Chris A. Hadfield, STS-100 mission specialist representing the Canadian Space Agency (CSA), stands on one Canadian-built robot arm to work with another one. Called Canadarm2, the newest addition to the International Space Station (ISS) was ferried up to the orbital outpost by the STS-100 crew. Hadfield's feet are secured on a special foot restraint attached to the end of the Remote Manipulator System (RMS) arm, which represents one of the standard shuttle components for the majority of the 100-plus STS missions thus far. The picture was recorded with a digital still camera.

  13. MS Hadfield works on the SSRMS in the SLP during the first EVA for STS-100

    NASA Image and Video Library

    2001-04-22

    S100-E-5239 (22 April 2001) --- Astronaut Chris A. Hadfield, STS-100 mission specialist representing the Canadian Space Agency (CSA), stands on one Canadian-built robot arm to work with another one. Called Canadarm2, the newest addition to the International Space Station (ISS) was ferried up to the orbital outpost by the STS-100 crew. Hadfield's feet are secured on a special foot restraint attached to the end of the Remote Manipulator System (RMS) arm, which represents one of the standard shuttle components for the majority of the 100-plus STS missions thus far. The picture was recorded with a digital still camera.

  14. MS Hadfield works on the SSRMS in the SLP during the first EVA for STS-100

    NASA Image and Video Library

    2001-04-22

    S100-E-5238 (22 April 2001) --- Astronaut Chris A. Hadfield, STS-100 mission specialist representing the Canadian Space Agency (CSA), stands on one Canadian-built robot arm to work with another one. Called Canadarm2, the newest addition to the International Space Station (ISS) was ferried up to the orbital outpost by the STS-100 crew. Hadfield's feet are secured on a special foot restraint attached to the end of the Remote Manipulator System (RMS) arm, which represents one of the standard shuttle components for the majority of the 100-plus STS missions thus far. The picture was recorded with a digital still camera.

  15. MS Hadfield works on the SSRMS in the SLP during the first EVA for STS-100

    NASA Image and Video Library

    2001-04-22

    S100-E-5243 (22 April 2001) --- Astronaut Chris A. Hadfield, STS-100 mission specialist representing the Canadian Space Agency (CSA), stands on one Canadian-built robot arm to work with another one. Called Canadarm2, the newest addition to the International Space Station (ISS) was ferried up to the orbital outpost by the STS-100 crew. Hadfield's feet are secured on a special foot restraint attached to the end of the Remote Manipulator System (RMS) arm, which represents one of the standard shuttle components for the majority of the 100-plus STS missions thus far. The picture was recorded with a digital still camera.

  16. First TEGA Oven is Ready to Accept a Sample

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Thermal and Evolved Gas Analyzer instrument has been checked out and has been approved to accept the sample from the location informally called 'Baby Bear'. Although the doors did not fully open, tests have shown that enough sample will get in to fill the tiny oven. This image was taken on the eighth day of the Mars mission, or Sol 8 (June 2, 2008) by the Robotic Arm Camera aboard NASA's Phoenix Mars Lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  17. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

    The recent necessity to protect military bases, convoys and patrols has given a serious impetus to the development of multisensor security systems for perimeter protection. One of the most important devices used in such systems is the IR camera. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, and simultaneously decreases the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. A comparison of commercially available IR cameras capable of achieving the desired ranges was performed. The required spatial resolution for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
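
    For a rough feel of such range estimates, the older Johnson criteria (the TTP metric's predecessor mentioned above) tie each task to a number of resolvable cycles across the target. The sketch below uses a sampling-limited approximation with illustrative sensor values and textbook cycle criteria; real TTP calculations also fold in optics blur, NETD and scene contrast.

        # Johnson-criteria range estimate for an uncooled thermal imager.
        # Resolvable cycles across a target of critical dimension h at range R:
        #   N = h * f / (2 * p * R)   =>   R = h * f / (2 * p * N)
        # with focal length f and detector pitch p (sampling-limited case).

        f = 0.10            # focal length, m (illustrative)
        p = 17e-6           # microbolometer pixel pitch, m (illustrative)
        h = 1.8             # critical dimension of a standing person, m

        criteria = {"detection": 1.0, "recognition": 4.0, "identification": 8.0}
        for task, n50 in criteria.items():
            R = h * f / (2 * p * n50)
            print(f"{task:>14}: ~{R:,.0f} m")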

  18. Video model deformation system for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.

  19. Evaluation of thermal cameras in quality systems according to ISO 9000 or EN 45000 standards

    NASA Astrophysics Data System (ADS)

    Chrzanowski, Krzysztof

    2001-03-01

    According to the international standards ISO 9001-9004 and EN 45001-45003, industrial plants and accreditation laboratories that have implemented quality systems according to these standards are required to evaluate the uncertainty of their measurements. Manufacturers of thermal cameras do not offer any data that would enable estimation of the measurement uncertainty of these imagers. The difficulty of determining measurement uncertainty is an important limitation on the use of thermal cameras in industrial plants and in the cooperating accreditation laboratories that have implemented these quality systems. This paper presents a set of parameters for characterizing commercial thermal cameras, a measuring setup, results of testing these cameras, a mathematical model of uncertainty, and software that enables quick calculation of the uncertainty of temperature measurements made with thermal cameras.
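
    As a concrete illustration of how such software typically combines error sources, here is a minimal sketch of a root-sum-square (GUM-style) uncertainty budget; the component names and magnitudes are illustrative assumptions, not values from the paper's model.

        import math

        # Minimal sketch: combine standard uncertainty components of a thermal
        # camera temperature reading by root-sum-square, then expand with a
        # coverage factor. Component names/values are illustrative assumptions.

        components_K = {
            "calibration":        0.5,   # lab calibration uncertainty
            "emissivity_setting": 0.8,   # effect of mis-set target emissivity
            "ambient_reflection": 0.4,   # reflected ambient radiation
            "noise_NETD":         0.05,  # temporal noise of the detector
        }

        u_combined = math.sqrt(sum(u ** 2 for u in components_K.values()))
        U_expanded = 2.0 * u_combined    # coverage factor k = 2 (~95 % level)

        print(f"combined standard uncertainty: {u_combined:.2f} K")
        print(f"expanded uncertainty (k=2):    {U_expanded:.2f} K")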

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Qi, Hairong

    This paper addresses communication and energy efficiency in collaborative visual sensor networks (VSNs) for people localization, a challenging computer vision problem in its own right. We focus on the design of a lightweight and energy-efficient solution in which people are localized by distributed camera nodes integrating the so-called certainty map generated at each node, which records target non-existence information within the camera's field of view. We first present a dynamic itinerary for certainty map integration in which not only does each sensor node transmit a very limited amount of data, but also only a limited number of camera nodes is involved. We then perform a comprehensive analytical study of the communication and energy efficiency of different integration schemes, i.e., centralized and distributed integration. Based on results from the analytical study and real experiments, the distributed method shows effectiveness in detection accuracy as well as energy and bandwidth efficiency.
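
    A toy sketch of the certainty-map idea may help: each node marks the cells of a shared grid where it is certain no target exists inside its field of view, and the map is integrated node by node along the itinerary. The grid size, FOV shapes and integration rule below are illustrative assumptions, not the authors' exact formulation.

        import numpy as np

        # Toy certainty-map integration: each camera node certifies the cells
        # inside its FOV that hold no detection, and only the running map
        # travels along the itinerary. All shapes/values are illustrative.

        GRID = (40, 40)

        def node_certainty(fov_mask, detections):
            """Cells inside the FOV and free of detections are certainly
            target-free from this node's point of view."""
            return fov_mask & ~detections

        integrated = np.zeros(GRID, dtype=bool)   # running non-existence map
        rng = np.random.default_rng(0)
        for _ in range(4):                        # four nodes on the itinerary
            fov = np.zeros(GRID, dtype=bool)
            r0, c0 = rng.integers(0, 20, size=2)
            fov[r0:r0 + 20, c0:c0 + 20] = True    # a square FOV per node
            det = np.zeros(GRID, dtype=bool)      # no detections in this run
            integrated |= node_certainty(fov, det)

        candidates = ~integrated                  # cells a target may occupy
        print("cells still possibly occupied:", int(candidates.sum()))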

  1. AFRC2016-0116-065

    NASA Image and Video Library

    2016-04-15

    The newest instrument, an infrared camera called the High-resolution Airborne Wideband Camera-Plus (HAWC+), was installed on the Stratospheric Observatory for Infrared Astronomy, SOFIA, in April of 2016. This is the only currently operating astronomical camera that makes images using far-infrared light, allowing studies of low-temperature early stages of star and planet formation. HAWC+ includes a polarimeter, a device that measures the alignment of incoming light waves. With the polarimeter, HAWC+ can map magnetic fields in star forming regions and in the environment around the supermassive black hole at the center of the Milky Way galaxy. These new maps can reveal how the strength and direction of magnetic fields affect the rate at which interstellar clouds condense to form new stars. A team led by C. Darren Dowell at NASA’s Jet Propulsion Laboratory and including participants from more than a dozen institutions developed the instrument.

  2. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally resolved photoluminescence imaging setup based on the so-called single-pixel camera, a compressive sensing technique which enables imaging using a single-pixel photodetector. The method relies on encoding an image with a series of random patterns. In our approach, the image encoding was carried out via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we realized a spectrally resolved photoluminescence camera of unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the degree of correlation of the speckle patterns, pattern fineness, and the number of data points. Finally, we compare the presented technique to hyperspectral imaging based on sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral ranges.
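
    To make the measurement model concrete, here is a minimal single-pixel-imaging sketch: random speckle-like patterns encode the scene, the detector records one value per pattern, and the image is recovered from the pattern set. For simplicity this sketch uses more patterns than pixels and plain least squares; real compressive sensing recovers from fewer patterns by exploiting sparsity. All dimensions and values are illustrative assumptions.

        import numpy as np

        # Single-pixel imaging toy: scene x is encoded by random intensity
        # patterns A, the detector measures y = A @ x, and x is recovered.

        rng = np.random.default_rng(1)
        n = 16 * 16                       # 16x16 scene, flattened
        m = 300                           # number of speckle patterns

        x_true = np.zeros(n)
        x_true[40:60] = 1.0               # a simple bright feature

        A = rng.random((m, n))            # random speckle-like patterns
        y = A @ x_true                    # one detector reading per pattern

        x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("max reconstruction error:", np.abs(x_hat - x_true).max())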

  3. Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.

    2004-01-01

    In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent by tracking features on the ground with a down-looking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.

  4. Adapting the eButton to the abilities of children for diet assessment

    USDA-ARS?s Scientific Manuscript database

    Dietary assessment is fraught with error among adults and especially among children. Innovative technology may provide more accurate assessments of dietary intake. One recently available innovative method is a camera worn on the chest (called an eButton) that takes images of whatever is in front of ...

  5. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized, high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  6. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and its Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
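
    The "slight changes" workflow might look like the following hedged sketch: the user writes a per-frame analysis function and loops it over chosen cameras. The fetch_snapshot helper is a hypothetical stand-in for the platform's frame-fetching call, not the system's actual API; here it returns a synthetic frame so the sketch is self-contained.

        import numpy as np
        import cv2  # OpenCV; the user's per-frame analysis can use anything

        def analyze_frame(frame):
            """User-written analysis of a single frame: count edge pixels."""
            edges = cv2.Canny(frame, 100, 200)
            return int(np.count_nonzero(edges))

        def fetch_snapshot(camera_id):
            """Hypothetical stand-in for the platform's frame fetch;
            returns a synthetic grayscale frame for demonstration."""
            rng = np.random.default_rng(camera_id)
            return (rng.random((240, 320)) * 255).astype(np.uint8)

        for cam_id in [101, 102, 103]:       # user-chosen camera IDs
            print(cam_id, "->", analyze_frame(fetch_snapshot(cam_id)))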

  7. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software), each responsible for managing one camera subsystem. Generally, a control module will be a long-lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer, or a module may be implemented in "firmware" on a subsystem. In any case, control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination is achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.
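
    A toy sketch of the master/slave message pattern may clarify the architecture: each subsystem control module runs as a long-lived server and reports status to the MCM over a message queue. The subsystem names and message fields are illustrative assumptions, not the CCS protocol.

        import queue
        import threading

        # Toy master/slave exchange: subsystem modules report status to a
        # master control module (MCM) over a shared queue.

        status_bus = queue.Queue()

        def control_module(subsystem):
            """Long-lived 'server' for one camera subsystem: report status."""
            status_bus.put({"from": subsystem, "state": "READY"})

        def master_control_module(expected):
            """MCM collects status from every subsystem before proceeding."""
            ready = set()
            while ready != expected:
                msg = status_bus.get()
                ready.add(msg["from"])
                print("MCM received:", msg)
            print("all subsystems ready; coordination with OCS can proceed")

        subsystems = {"shutter", "cryostat", "raft-electronics"}
        for name in subsystems:
            threading.Thread(target=control_module, args=(name,)).start()
        master_control_module(subsystems)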

  8. A system for extracting 3-dimensional measurements from a stereo pair of TV cameras

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Cunningham, R.

    1976-01-01

    Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point in the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operational and provides three-dimensional measurement resolution of ± mm at distances of about 2 m.
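
    After matching, the depth computation for such a calibrated stereo pair reduces to triangulation from disparity. A minimal sketch follows, with illustrative parameter values chosen so the result lands at the 2 m working distance quoted above.

        # Pinhole stereo triangulation: Z = f * B / d.
        # All parameter values are illustrative assumptions.

        def depth_from_disparity(focal_px, baseline_m, disparity_px):
            """Depth of a matched point from a rectified stereo pair."""
            return focal_px * baseline_m / disparity_px

        # Example: 800-pixel focal length, 12 cm baseline, 48-pixel disparity.
        z = depth_from_disparity(800.0, 0.12, 48.0)
        print(f"depth: {z:.2f} m")   # -> 2.00 m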

  9. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  10. Applications of a shadow camera system for energy meteorology

    NASA Astrophysics Data System (ADS)

    Kuhn, Pascal; Wilbert, Stefan; Prahl, Christoph; Garsche, Dominik; Schüler, David; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Blanc, Philippe; Pitz-Paal, Robert

    2018-02-01

    Downward-facing shadow cameras might play a major role in future energy meteorology. Shadow cameras directly image shadows on the ground from an elevated position. They are used to validate other systems (e.g. all-sky imager based nowcasting systems, cloud speed sensors or satellite forecasts) and can potentially provide short term forecasts for solar power plants. Such forecasts are needed for electricity grids with high penetrations of renewable energy and can help to optimize plant operations. In this publication, two key applications of shadow cameras are briefly presented.

  11. Backing collisions: a study of drivers' eye and backing behaviour using combined rear-view camera and sensor systems.

    PubMed

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2010-04-01

    Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor-camera system; controls were not. Three crash scenarios were introduced. The setting was a parking facility at UMass Amherst, USA. The 46 drivers (33 men, 13 women), average age 29 years, were Massachusetts residents licensed in the USA for an average of 9.3 years. Vehicles were equipped with a rear-view camera and a sensor-system-based parking aid. Measures were the subjects' eye fixations while driving and the researchers' observations of collisions with objects during backing. Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.

  12. Backing collisions: a study of drivers’ eye and backing behaviour using combined rear-view camera and sensor systems

    PubMed Central

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2012-01-01

    Context Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. Setting Parking facility at UMass Amherst, USA. Subjects 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions Vehicles equipped with a rear-view camera and sensor system-based parking aid. Main Outcome Measures Subjects' eye fixations while driving and researchers' observations of collisions with objects during backing. Results Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812

  13. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
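
    The real-time centroiding step can be sketched in a few lines: threshold a frame, label connected ion spots, and take intensity-weighted centroids. The threshold and synthetic frame below are illustrative assumptions, not the authors' algorithm.

        import numpy as np
        from scipy import ndimage

        # Toy spot centroiding for one camera frame.

        def centroid_spots(frame, threshold):
            mask = frame > threshold
            labels, n = ndimage.label(mask)
            # Intensity-weighted centroid (row, col) of every labelled spot.
            return ndimage.center_of_mass(frame, labels, list(range(1, n + 1)))

        rng = np.random.default_rng(2)
        frame = rng.poisson(2.0, size=(64, 64)).astype(float)  # dark noise
        frame[20:23, 30:33] += 50.0                            # one ion spot
        frame[50:53, 10:13] += 80.0                            # a brighter spot

        for i, (r, c) in enumerate(centroid_spots(frame, threshold=20.0), 1):
            print(f"spot {i}: row={r:.1f}, col={c:.1f}")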

  14. Development of the radial neutron camera system for the HL-2A tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y. P., E-mail: zhangyp@swip.ac.cn; Yang, J. W.; Liu, Yi

    2016-06-15

    A new radial neutron camera system has been developed and operated recently on the HL-2A tokamak to measure spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera are consistent with the results of both a standard 235U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.

  15. Coordinating High-Resolution Traffic Cameras : Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  16. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    PubMed Central

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533

  17. Portable, stand-off spectral imaging camera for detection of effluents and residues

    NASA Astrophysics Data System (ADS)

    Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason

    2015-06-01

    A new, compact and portable spectral imaging camera, employing a MEMS-based encoded-imaging approach, has been built and demonstrated for the detection of hazardous contaminants, including gaseous effluents and solid or liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long-wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb camera is tripod mounted and battery powered. A touch-screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments Digital Micromirror Device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes, including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on the fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
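
    A minimal sketch of the Hadamard multiplexing idea used in the hyperspectral mode: the spatial light modulator displays rows of a Hadamard matrix as masks, the detector records one multiplexed reading per mask, and the scene is recovered by applying the matrix again. The 1D scene is an illustrative assumption, and real DMDs display 0/1 masks rather than the ±1 entries used here (handled in practice by an offset measurement).

        import numpy as np
        from scipy.linalg import hadamard

        # Hadamard-multiplexed measurement and recovery of a toy 1D scene.

        n = 64                               # number of spatial elements
        H = hadamard(n).astype(float)        # +1/-1 Hadamard patterns

        scene = np.zeros(n)
        scene[10:20] = 3.0                   # a bright region

        measurements = H @ scene             # one reading per displayed mask
        recovered = (H.T @ measurements) / n # H is orthogonal: H.T @ H = n*I

        print("max reconstruction error:", np.abs(recovered - scene).max())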

  18. Engineering study for pallet adapting the Apollo laser altimeter and photographic camera system for the Lidar Test Experiment on orbital flight tests 2 and 4

    NASA Technical Reports Server (NTRS)

    Kuebert, E. J.

    1977-01-01

    A Laser Altimeter and Mapping Camera System was included in the Apollo lunar orbital experiment missions. The backup system, never used in the Apollo Program, is available for use in the Lidar Test Experiments on STS Orbital Flight Tests 2 and 4. Studies were performed to assess the problems associated with installation and operation of the Mapping Camera System in the STS. They addressed the photographic capabilities of the Mapping Camera System, its mechanical and electrical interfaces with the STS, documentation, operation and survivability in the expected environments, ground support equipment, test, and field support.

  19. A novel design measuring method based on linearly polarized laser interference

    NASA Astrophysics Data System (ADS)

    Cao, Yanbo; Ai, Hua; Zhao, Nan

    2013-09-01

    Interferometric methods are widely used in precision measurement, including measurement of the surface quality of large-aperture mirrors, and laser interference technology has developed rapidly as laser sources have become more mature and reliable. We adopted a laser diode as the source because of its short coherence length, the optical path difference of the system being as short as several wavelengths, and because the power of a laser diode is sufficient for measurement and safe for the human eye. A 673 nm linearly polarized laser was selected, and we constructed a novel interferometric configuration we call a 'Closed Loop', composed of polarizing optical components such as a polarizing prism and quartz wave plates. Light from the source is split into a measuring beam and a reference beam, both of which are reflected by the mirror under test. After the two beams are transformed into circular polarizations spinning in opposite directions, we apply polarized-light synchronous phase-shift interference to obtain the detection fringes; this transfers the phase shifting from the time domain to space, so there is no need for a precisely controlled shift of the optical path difference, which would introduce disturbances from air currents and vibration. We obtain interference fringes from four well-aligned CCD cameras, with the fringes shifted to four different phases of 0, π/2, π, and 3π/2. After obtaining the images from the CCD cameras, we align the interference fringes pixel to pixel across the cameras and synthesize the rough morphology; after removing systematic error, we can calculate the surface accuracy of the mirror under test. This detection method can be applied to measuring optical system aberrations, and it could develop into a portable structural interferometer usable in a variety of measurement circumstances.
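
    The four-camera, four-phase arrangement lends itself to the standard four-step phase-shifting recovery, phi = atan2(I4 - I2, I1 - I3). A minimal sketch on synthetic fringes follows; the test phase map is an illustrative assumption.

        import numpy as np

        # Four-step phase-shifting: frames at offsets 0, pi/2, pi, 3*pi/2.
        # With I_k = a + b*cos(phi + k*pi/2): I4 - I2 = 2b*sin(phi) and
        # I1 - I3 = 2b*cos(phi), so atan2 recovers the wrapped phase.

        phi_true = np.linspace(0, 2 * np.pi, 256) % (2 * np.pi)  # test phase
        frames = [1.0 + 0.5 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
        I1, I2, I3, I4 = frames

        phi = np.arctan2(I4 - I2, I1 - I3)            # wrapped phase (-pi, pi]

        err = np.angle(np.exp(1j * (phi - phi_true)))  # compare modulo 2*pi
        print("max phase error (rad):", np.abs(err).max())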

  20. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    NASA Astrophysics Data System (ADS)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues procedurally when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate visual system, which allows the animal to perform real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weights of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, the allocation of computational time is weighted based upon the history of each camera: a camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems for real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty, and the activity of the tracked object in relation to sensitive features of the environment.
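
    The following minimal sketch illustrates the center-surround idea at the core of the cited Itti-Koch saliency algorithm: conspicuity is approximated by the difference between a fine and a coarse Gaussian blur of the intensity channel. The full model adds color, orientation and motion channels plus normalization, so this single-channel version is only an illustrative assumption, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Single-channel center-surround saliency on a synthetic image.

        def intensity_saliency(image, center_sigma=1.0, surround_sigma=8.0):
            center = gaussian_filter(image, center_sigma)
            surround = gaussian_filter(image, surround_sigma)
            s = np.abs(center - surround)
            return s / (s.max() + 1e-12)          # normalize to [0, 1]

        rng = np.random.default_rng(3)
        img = rng.random((96, 96)) * 0.1
        img[40:48, 60:68] = 1.0                   # one conspicuous patch
        sal = intensity_saliency(img)
        print("most salient location:", np.unravel_index(sal.argmax(), sal.shape))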

  1. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera takes images of important objects from a long distance and shows them to the driver, the driver can understand traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long; but with a long focal length, an on-vehicle camera does not have a wide enough field of view to check traffic conditions. Therefore, to obtain the necessary images from long distance, the camera must combine a long focal length with a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image which was displayed to the driver. However, that study used a touch panel to indicate the shooting direction, which distracts from driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, so as not to disturb driving. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt a non-wearable detection method to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system takes images of the point straight ahead of the subject's gaze.

  2. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first between sets of features associated with the images of the stereo pairs, and the second between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching stages are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutively acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
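
    The final displacement estimate from matched 3D features can be sketched with the classic Kabsch/Procrustes solution, which minimizes the geometric error via SVD; the paper's combined geometric-algebraic minimization is richer, so this is only an illustrative sketch with synthetic points.

        import numpy as np

        # Rigid motion from matched 3D feature sets P (frame k) and
        # Q (frame k+1): find R, t minimizing ||Q - (R P + t)||.

        def rigid_transform(P, Q):
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflection
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t

        rng = np.random.default_rng(4)
        P = rng.random((30, 3)) * 5.0                    # 3D features, frame k
        a = np.deg2rad(5.0)                              # small camera rotation
        R_true = np.array([[np.cos(a), -np.sin(a), 0],
                           [np.sin(a),  np.cos(a), 0],
                           [0,          0,         1]])
        t_true = np.array([0.10, 0.02, 0.30])            # camera translation
        Q = P @ R_true.T + t_true                        # features, frame k+1

        R, t = rigid_transform(P, Q)
        print("translation error:", np.linalg.norm(t - t_true))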

  3. Opportunity's Travels

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This overview map, made from Mars Orbiter Camera images, illustrates the path that the Mars Exploration Rover Opportunity has taken from its first sol on the red planet through its 87th sol. After thoroughly examining its 'Eagle Crater' landing site, the rover moved onto the plains of Meridiani Planum, stopping to examine a curious trough and a target within it called 'Anatolia.' Following that, Opportunity approached and remotely studied the rocky dish called 'Fram Crater.' As of its 91st sol (April 26, 2004), the rover sits 160 meters (about 525 feet) from the rim of 'Endurance Crater.'

  4. Leveraging CubeSat Technology to Address Nighttime Imagery Requirements over the Arctic

    NASA Astrophysics Data System (ADS)

    Pereira, J. J.; Mamula, D.; Caulfield, M.; Gallagher, F. W., III; Spencer, D.; Petrescu, E. M.; Ostroy, J.; Pack, D. W.; LaRosa, A.

    2017-12-01

    The National Oceanic and Atmospheric Administration (NOAA) has begun planning for the future operational environmental satellite system by conducting the NOAA Satellite Observing System Architecture (NSOSA) study. In support of the NSOSA study, NOAA is exploring how CubeSat technology funded by NASA can be used to demonstrate the ability to measure three-dimensional profiles of global temperature and water vapor. These measurements are critical for the National Weather Service's (NWS) weather prediction mission. NOAA is conducting design studies on Earth Observing Nanosatellites (EON) for microwave (EON-MW) and infrared (EON-IR) soundings, with MIT Lincoln Laboratory and NASA JPL, respectively. The next step is to explore the technology required for a CubeSat mission to address NWS nighttime imagery requirements over the Arctic. The concept is called EON-Day/Night Band (DNB). The DNB is a 0.5-0.9 micron channel currently on the operational Visible Infrared Imaging Radiometer Suite (VIIRS) instrument, which is part of the Suomi-National Polar-orbiting Partnership and Joint Polar Satellite System satellites. NWS has found DNB very useful during the long periods of darkness that occur during the Alaskan cold season. The DNB enables nighttime imagery products of fog, clouds, and sea ice. EON-DNB will leverage experiments carried out by The Aerospace Corporation's CUbesat MULtispectral Observation System (CUMULOS) sensor and other related work. CUMULOS is a DoD-funded demonstration of COTS camera technology integrated as a secondary mission on the JPL Integrated Solar Array and Reflectarray Antenna mission. CUMULOS is demonstrating a staring visible Si CMOS camera. The EON-DNB project will leverage proven, advanced compact visible lens and focal plane camera technologies to meet NWS user needs for nighttime visible imagery. Expanding this technology to an operational demonstration carries several areas of risk that need to be addressed prior to an operational mission. These include, but are not limited to: calibration, swath coverage, resolution, scene gain control, compact fast optical systems, downlink choices, and mission life. NOAA plans to conduct risk reduction efforts similar to those on EON-MW and EON-IR. This paper will explore EON-DNB risks and mitigation options.

  5. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    SPARTAN Near-IR Camera System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager. Spartan has a focal plane consisting of four ...

  6. Control system for several rotating mirror camera synchronization operation

    NASA Astrophysics Data System (ADS)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part and the time delay part), the shutter control unit, the motor driving unit and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating-mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different speeds.

  7. Evaluation of the MSFC facsimile camera system as a tool for extraterrestrial geologic exploration

    NASA Technical Reports Server (NTRS)

    Wolfe, E. W.; Alderman, J. D.

    1971-01-01

    The utility of the Marshall Space Flight Center (MSFC) facsimile camera system for extraterrestrial geologic exploration was investigated during the spring of 1971 near Merriam Crater in northern Arizona. Although the system with its present hard-wired recorder operates erratically, the imagery showed that the camera could be developed into a prime imaging tool for automated missions. Its utility would be enhanced by the development of computer techniques that use digital camera output to construct topographic maps, and it needs increased resolution for examining near-field details. A supplementary imaging system may be necessary for hand-specimen examination at low magnification.

  8. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system for imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable, with low typical error (0.192 L), a high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting.

  9. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera provides a live view inside cryogenic set-ups and allows video to be recorded.
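
    A toy sketch of the PID temperature regulation mentioned above: a PI controller drives heater power to hold the casing at a setpoint against cold surroundings. The first-order thermal model and all gains and values are illustrative assumptions, not the authors' controller.

        # Toy PI temperature loop against a cryogenic environment.
        # Plant model, gains and values are illustrative assumptions.

        def simulate(setpoint=20.0, ambient=-180.0, kp=2.0, ki=0.1, dt=1.0):
            temp, integral = ambient, 0.0
            for _ in range(400):
                err = setpoint - temp
                integral += err * dt
                power = kp * err + ki * integral   # PI control output
                # Crude plant: heating by power, heat loss to surroundings.
                temp += dt * (0.1 * power - 0.1 * (temp - ambient))
            return temp

        print(f"casing temperature after 400 s: {simulate():.1f} C")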

  10. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charged coupling device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  11. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charged coupling device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  12. Sprint: The first flight demonstration of the external work system robots

    NASA Technical Reports Server (NTRS)

    Price, Charles R.; Grimm, Keith

    1995-01-01

    The External Work System (EWS) 'X Program' is a new NASA initiative that will, over the next ten years, develop a new generation of space robots for active and participative support of zero-g external operations. The robotic development will center on three areas: the assistant robot, the associate robot, and the surrogate robot, which will support extravehicular activities (EVA) before and after, during, and instead of space-suited human external activities, respectively. The EWS robotics program will be a combination of technology developments and flight demonstrations for operational proof of concept. The first EWS flight will be a flying camera called 'Sprint' that will seek to demonstrate an operationally flexible, remote viewing capability for EVA operations, inspections, and contingencies for the space shuttle and space station. This paper describes the need for Sprint and its characteristics.

  13. An Unusual View: MISR sees the Moon

    NASA Image and Video Library

    2017-08-17

    The job of the Multiangle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite is to view Earth. For more than 17 years, its nine cameras have stared downward 24 hours a day, faithfully collecting images used to study Earth's surface and atmosphere. On August 5, however, MISR captured some very unusual data as the Terra satellite performed a backflip in space. This maneuver was performed to allow MISR and the other instruments on Terra to catch a glimpse of the Moon, something that has been done only once before, in 2003. Why task an elderly satellite with such a radical maneuver? Since we can be confident that the Moon's brightness has remained very constant over the mission, MISR's images of the Moon can be used as a check of the instrument's calibration, allowing an independent verification of the procedures used to correct the images for any changes the cameras have experienced over their many years in space. If changes in the cameras' responses to light aren't properly accounted for, the images captured by MISR would make it appear as if Earth were growing darker or lighter, which would throw off scientists' efforts to characterize air pollution, cloud cover and Earth's climate. Because of this, the MISR team uses several methods to calibrate the data, all of which involve imaging something with a known (or independently measured) brightness and correcting the images to match that brightness. Every month, MISR views two panels of a special material called Spectralon, which reflects sunlight in a very particular way, onboard the instrument. Periodically, this calibration is checked by a field team who measures the brightness of a flat, uniformly colored surface on Earth, usually a dry desert lakebed, as MISR flies overhead. The lunar maneuver offers a third opportunity to check the brightness calibration of MISR's images. While viewing Earth, MISR's cameras are fixed at nine different angles, with one (called An) pointed straight down, four canted forwards (Af, Bf, Cf, and Df) and four angled backwards (Aa, Ba, Ca, and Da). The A, B, C, and D cameras have different focal lengths, with the most oblique (D) cameras having the longest focal lengths in order to preserve spatial resolution on the ground. During the lunar maneuver, however, the spacecraft rotated so that each camera saw the almost-full Moon straight on. This means that the different focal lengths produce images with different resolutions. The D cameras produce the sharpest images. These grayscale images were made with raw data from the red spectral band of each camera. Because the spacecraft is constantly rotating while these images were taken, the images are "smeared" in the vertical direction, producing an oval-shaped Moon. These have been corrected to restore the Moon to its true circular shape. https://photojournal.jpl.nasa.gov/catalog/PIA21876

  14. Optical Extinction Measurements of Dust Density in the GMRO Regolith Test Bin

    NASA Technical Reports Server (NTRS)

    Lane, J.; Mantovani, J.; Mueller, R.; Nugent, M.; Nick, A.; Schuler, J.; Townsend, I.

    2016-01-01

    A regolith simulant test bin was constructed and completed in the Granular Mechanics and Regolith Operations (GMRO) Lab in 2013. This Planetary Regolith Test Bed (PRTB), a 64 sq m by 1 m deep test bin housed in a climate-controlled facility, contains 120 MT of lunar-regolith simulant, called Black Point-1 or BP-1, from Black Point, AZ. One of the current uses of the test bin is to study the effects of difficult lighting and dust conditions on telerobotic perception systems, to better assess and refine regolith operations for asteroid, Mars, and polar lunar missions. Low illumination and low angle-of-incidence lighting pose significant problems for computer vision and human perception. Levitated dust on asteroids interferes with imaging and degrades depth perception, and dust storms on Mars pose a significant problem. Due to these factors, the likely performance of telerobotics is poorly understood for future missions; current space telerobotic systems are operated only in bright lighting and dust-free conditions. This technology development testing will identify: (1) the impact of degraded lighting and environmental dust on computer vision and operator perception, (2) potential methods and procedures for mitigating these impacts, and (3) requirements for telerobotic perception systems for asteroid capture, Mars dust storm, and lunar regolith ISRU missions. To address some of these problems, a plume erosion sensor (PES) was developed in the Lunar Regolith Simulant Bin (LRSB), containing 2 MT of JSC-1a lunar simulant. PES is simply a laser and a digital camera with a white target. Two modes of operation have been investigated: (1) single laser spot, in which the brightness of the spot depends on the optical extinction due to dust and is thus an indirect measure of particle number density, and (2) side-scatter, in which the camera images the laser from the side, showing the beam's entrance into the dust cloud and the boundary between dust and void. Both methods must assume a mean particle size in order to extract a number density. The optical extinction measurement yields the product of the 2nd moment of the particle size distribution and the extinction efficiency Qe. For particle sizes in the range of interest (greater than 1 micrometer), Qe is approximately 2. Scaling up of the PES single laser and camera system is underway in the PRTB, where an array of lasers penetrates a controlled dust cloud, illuminating multiple targets. Using high-speed HD GoPro video cameras, the evolution of the dust cloud and particle size density can be studied in detail.
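
    Turning the single-spot extinction reading into a number density follows Beer-Lambert with the quoted Qe of about 2. A minimal sketch follows, in which the transmission, path length and mean particle radius are illustrative assumptions.

        import math

        # Particle number density from optical extinction (Beer-Lambert):
        #   T = exp(-N * Qe * pi * r^2 * L)
        # using the Qe ~ 2 large-particle limit quoted above.

        def number_density(transmission, path_m, radius_m, Qe=2.0):
            sigma = Qe * math.pi * radius_m ** 2   # extinction cross-section
            return -math.log(transmission) / (sigma * path_m)

        # Example: spot dimmed to 70 % over a 0.5 m path, 10 um mean radius.
        N = number_density(0.70, 0.5, 10e-6)
        print(f"number density: {N:.3g} particles per m^3")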

  15. A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.

    1989-01-01

    Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X Ray Burst Spectrometer, with X-ray instruments which will be available on the Gamma Ray Observatory, and eventually with the Gamma Ray Imaging Device (GRID) and the High Resolution Gamma-Ray and Hard X Ray Spectrometer (HIREGS), which are being developed for the Max '91 program. The digital camera has recently proven successful as a one-camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway, as are analyses of the campaign data.

  16. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    NASA Astrophysics Data System (ADS)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera system such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base, property valuation assessment, and the buying and selling of residential/commercial property, supporting better decisions in a more timely manner. Oblique imagery is also used for infrastructure monitoring, ensuring the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degree) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated using a 3D cage and multiple convergent images, utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for the vertical can be achieved. Remaining systematic errors were modeled by analyzing residuals using a correction grid. The results of the final bundle adjustments are sufficient to enable Sanborn to produce DEMs/DTMs and orthophotos from the nadir imagery and create 3D models using georeferenced oblique imagery.
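
    The accuracy figures can be tied back to the ground sample distance with a one-line formula, GSD = pixel pitch * flying height / focal length. A minimal sketch using the MIDAS numbers quoted above (6.4 micron pixels, 50 mm nadir lens at 750 m):

        # Ground sample distance and derived accuracy estimates.
        # Flight parameters follow the figures quoted in the abstract.

        def gsd(pixel_pitch_m, height_m, focal_length_m):
            return pixel_pitch_m * height_m / focal_length_m

        g = gsd(6.4e-6, 750.0, 0.050)
        print(f"GSD: {g * 100:.1f} cm")                            # -> 9.6 cm
        print(f"~1.5 GSD planimetric accuracy: {1.5 * g * 100:.1f} cm")
        print(f"~2.5 GSD vertical accuracy:    {2.5 * g * 100:.1f} cm")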

  17. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measurement is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. The conclusion is that the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is adequate for application.

  18. PubMed Central

    Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.

    2017-01-01

    Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved general operative visibility, and in particular the representation of spatial depth, compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles the actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888

  19. Analysis of edge density fluctuation measured by trial KSTAR beam emission spectroscopy system

    NASA Astrophysics Data System (ADS)

    Nam, Y. U.; Zoletnik, S.; Lampert, M.; Kovácsik, Á.

    2012-10-01

    A beam emission spectroscopy (BES) system based on a direct-imaging avalanche photodiode (APD) camera has been designed for the Korea Superconducting Tokamak Advanced Research (KSTAR) device, and a trial system has been constructed and installed to evaluate the feasibility of the design. The system contains two cameras: an APD camera for the BES measurement and a fast visible camera for position calibration. Two pneumatically actuated mirrors were positioned in front of and behind the lens optics. The front mirror can switch the measurement between the edge and core regions of the plasma, and the rear mirror can switch between the APD and visible cameras. All systems worked properly, and the measured photon flux was reasonable, as expected from the simulation. While the measurement data from the trial system were limited, they revealed some interesting characteristics of KSTAR plasma, suggesting future research with the fully installed BES system. The analysis results and the development plan are presented in this paper.

  20. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. The system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields of view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently: misses are reduced by 37%, a significant improvement.

  1. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend on its capability to handle object surfaces of large reflectance variation, traded off against the required number of projected patterns. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera one or more times to capture one or more projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so that the system can project different types of patterns for different scan-speed applications. The system thus captures a high-quality 3D point cloud even for surfaces of large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is generated adaptively such that the position and number of triggers are determined automatically according to camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it requires no external memory for storage because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506
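
    The adaptive trigger placement can be illustrated with a small timing sketch; the FPGA logic itself is not given in the abstract, so the parameter names and the packing rule below are assumptions:

    ```python
    def trigger_schedule(exposure_us, pattern_us, dead_time_us=0):
        """Fit as many pattern projections as possible inside one camera
        exposure window and return trigger offsets in microseconds."""
        period = pattern_us + dead_time_us
        n = max(1, int(exposure_us // period))   # at least one projection
        return [i * period for i in range(n)]

    # e.g. a 10 ms exposure with 2.5 ms patterns -> 4 triggers
    print(trigger_schedule(10_000, 2_500))       # [0, 2500, 5000, 7500]
    ```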

  2. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    NASA Astrophysics Data System (ADS)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate [1]. In fringe projection systems, it is common to calibrate the intrinsic and extrinsic parameters of the camera(s) using methods originally developed for photogrammetry; to calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time-consuming and involve measuring calibrated planar patterns before the actual object can be measured again after a camera or projector has been moved, and hence do not facilitate fast 3D measurement when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
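
    The render-and-compare idea can be sketched as a photometric cost minimised over a 6-DoF pose; `render` below is a stand-in for the GPU renderer, and the optimiser choice is illustrative rather than the authors' pipeline:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def photometric_cost(pose, camera_image, render):
        """Sum of squared differences between the camera view and a
        rendering of the scene model at the candidate pose.
        pose: 6-vector (3 rotation parameters, 3 translation)."""
        synthetic = render(pose).astype(float)
        return np.sum((synthetic - camera_image.astype(float)) ** 2)

    def refine_pose(initial_pose, camera_image, render):
        # A priori data (on-board sensors, deep learning) supplies initial_pose.
        res = minimize(photometric_cost, initial_pose,
                       args=(camera_image, render), method="Nelder-Mead")
        return res.x
    ```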

  3. Quality management for space systems in ISRO

    NASA Astrophysics Data System (ADS)

    Satish, S.; Selva Raju, S.; Nanjunda Swamy, T. S.; Kulkarni, P. L.

    2009-11-01

    In a little over four decades, the Indian Space Program has carved a niche for itself with a unique, application-driven program oriented towards national development. The end-to-end capability approach of the country's space projects calls for innovative practices and procedures in assuring the quality and reliability of space systems. The System Reliability (SR) efforts initiated at the start of a project continue during its entire life cycle, encompassing design, development, realisation, assembly, testing and integration, and launch. Even after launch, SR groups participate in the on-orbit evaluation of transponders in communication satellites and camera systems in remote sensing satellites. SR groups play a major role in identifying, evaluating and inculcating quality practices in work centres involved in the fabrication of the mechanical, electronic and propulsion systems required for Indian Space Research Organization (ISRO) launch vehicle and spacecraft projects. Reliability analysis activities such as prediction, assessment and demonstration, as well as de-rating analysis, Failure Mode Effects and Criticality Analysis (FMECA) and worst-case analysis, are also carried out by SR groups during various stages of project realisation. These activities provide the basis for project management to take appropriate techno-managerial decisions to ensure that the required reliability goals are met. Extensive test facilities catering to the needs of the space program have been set up. A system has also been established for consolidating the experience and expertise gained and issuing standards, called product assurance specifications, for use in all ISRO centres.

  4. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...

  5. A low-cost dual-camera imaging system for aerial applicators

    USDA-ARS?s Scientific Manuscript database

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  6. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
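
    Recovering the camera-relative pose of an object with known solid geometry from a single image is the classic Perspective-n-Point problem. A minimal sketch with OpenCV, where all point data and intrinsics are placeholders rather than values from the paper:

    ```python
    import numpy as np
    import cv2

    # 3D model points of the target (object frame, metres) -- placeholders.
    object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0],
                              [0, 0.1, 0], [0.05, 0.05, 0.1]], dtype=np.float64)
    # Their detected projections in the image (pixels) -- placeholders.
    image_points = np.array([[320, 240], [420, 238], [424, 340],
                             [318, 344], [372, 290]], dtype=np.float64)
    # Intrinsic matrix from a prior camera calibration.
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)   # object pose w.r.t. the camera
        print("rotation:\n", R, "\ntranslation:", tvec.ravel())
    ```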

  7. Image simulation for HardWare In the Loop simulation in EO domain

    NASA Astrophysics Data System (ADS)

    Cathala, Thierry; Latger, Jean

    2015-10-01

    An infrared camera serving as a weapon subsystem for automatic guidance is a key component of a military carrier such as a missile. The associated image processing, which controls the navigation, needs to be assessed intensively. Experimentation in the real world is very expensive; this is the main reason why hybrid simulation, also called HardWare In the Loop (HWIL), is increasingly required nowadays. In that field, IR projectors cast fluxes of IR photons directly onto the IR camera of a given weapon system, typically a missile seeker head. In the laboratory, the missile is thus stimulated exactly as in the real world, provided a realistic simulation tool can generate the synthetic images displayed by the IR projectors. The key technical challenge is to render the synthetic images at the required frequency. This paper focuses on OKTAL-SE's experience in this domain through its SE-FAST-HWIL product, presenting the methodology and lessons learned at OKTAL-SE. Examples are given within the frame of the SE-Workbench, with emphasis on trials of real, operationally complex 3D cases. In particular, three topics that strongly affect image-generator performance are detailed: 3D sea-surface representation, particle-system rendering (especially for simulating flares) and sensor-effects modelling. Beyond "projection mode", some information is given on the new SE-FAST-HWIL capabilities dedicated to "injection mode".

  8. Issues in implementing services for a wireless web-enabled digital camera

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

    The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-Appliance based Communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services as well as the need to download software upgrades to the camera is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to the enterprise systems.

  9. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 x 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and to quantify the aerodynamic trajectories of the debris.
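
    The figures quoted above imply a demanding data rate, which explains the DRAM-buffered design; a back-of-the-envelope check (assuming roughly one byte per pixel of raw data, which the article does not state):

    ```python
    # Raw data rate at maximum resolution and 1,000 pps (assumed 1 byte/pixel).
    width, height, pps = 1280, 1024, 1000
    bytes_per_s = width * height * pps            # ~1.31 GB/s
    print(f"{bytes_per_s / 1e9:.2f} GB/s")

    # Recording seconds that fit in a hypothetical 8 GB DRAM buffer.
    dram_bytes = 8e9
    print(f"{dram_bytes / bytes_per_s:.1f} s")    # ~6.1 s
    ```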

  10. Bandit: Technologies for Proximity Operations of Teams of Sub-10Kg Spacecraft

    DTIC Science & Technology

    2007-10-16

    and adding a dedicated overhead camera system. As will be explained below, the forced-air system did not work and the existing system has proven too...erratic to justify the expense of the camera system. 6DOF Software Simulator. The existing Java-based graphical 6DOF simulator was to be improved for...proposed camera system for a nonfunctional table. The C-9 final report is enclosed. [Figure 1: Forced-air table schematic]

  11. Keyboard before Head Tracking Depresses User Success in Remote Camera Control

    NASA Astrophysics Data System (ADS)

    Zhu, Dingyun; Gedeon, Tom; Taylor, Ken

    In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving camera control either to automation or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a Pan-Tilt-Zoom (PTZ) camera. The camera was controlled either via a keyboard or via head tracking, using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that head motion control provided performance comparable to the keyboard, while head flicking was significantly worse. In addition, the sequence in which the three control methods were used is highly significant. It appears that using the keyboard first depresses subsequent success with the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports that the worst-performing method was disliked by participants. Surprisingly, using that worst method first significantly enhanced performance with the other two control methods.

  12. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
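
    The bandwidth split between the high-speed ROI and the low-speed full frame reduces to simple arithmetic; the numbers below are invented for illustration and are not the camera's actual specifications:

    ```python
    def max_roi_rate(link_bw_bps, full_w, full_h, full_fps,
                     roi_w, roi_h, bits_per_px=8):
        """ROI frame rate available after reserving link bandwidth for
        low-speed full-frame readout (illustrative arithmetic only)."""
        full_bps = full_w * full_h * full_fps * bits_per_px
        remaining = link_bw_bps - full_bps
        return remaining / (roi_w * roi_h * bits_per_px)

    # e.g. 400 Mbit/s link, 640x480 full frames at 10 fps, 64x64 ROI
    print(f"{max_roi_rate(400e6, 640, 480, 10, 64, 64):.0f} fps")  # ~11457 fps
    ```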

  13. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing the overall imaging consistency. Conventional testing procedures for optical systems lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for carrying out imaging-consistency testing for binocular cameras. When the 3σ distribution of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method can be used effectively and paves the way for imaging-consistency testing of binocular cameras.
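
    The final consistency statistic reduces to the standard deviation of the per-pixel grayscale difference; a minimal numpy sketch, with the contour and threshold-segmentation steps omitted:

    ```python
    import numpy as np

    def imaging_consistency(left, right):
        """3-sigma of the grayscale difference D(x, y) between registered
        left and right images, as a fraction of 8-bit full scale, so it can
        be checked against the paper's 5% criterion."""
        D = left.astype(float) - right.astype(float)
        return 3 * D.std() / 255.0

    # design requirement met if imaging_consistency(left, right) <= 0.05
    ```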

  14. Measurement of reach envelopes with a four-camera Selective Spot Recognition (SELSPOT) system

    NASA Technical Reports Server (NTRS)

    Stramler, J. H., Jr.; Woolford, B. J.

    1983-01-01

    The basic Selective Spot Recognition (SELSPOT) system is essentially a system which uses infrared LEDs and a 'camera' with an infrared-sensitive photodetector, a focusing lens, and some A/D electronics to produce a digital output representing an X and Y coordinate for each LED for each camera. When the data are synthesized across all cameras with appropriate calibrations, an XYZ set of coordinates is obtained for each LED at a given point in time. Attention is given to the operating modes, a system checkout, and reach envelopes and software. The Video Recording Adapter (VRA) represents the main addition to the basic SELSPOT system. The VRA contains a microprocessor and other electronics which permit user selection of several options and some interaction with the system.
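
    Synthesizing per-camera (X, Y) readings into XYZ coordinates is a triangulation problem; a minimal least-squares (DLT) sketch for one LED seen by N calibrated cameras, standing in for whatever algorithm the SELSPOT software actually used:

    ```python
    import numpy as np

    def triangulate(projections, points_2d):
        """Least-squares triangulation of one LED position.

        projections: list of 3x4 camera projection matrices (from calibration).
        points_2d: list of (x, y) image coordinates of the LED, one per camera.
        """
        rows = []
        for P, (x, y) in zip(projections, points_2d):
            rows.append(x * P[2] - P[0])   # DLT constraints
            rows.append(y * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.array(rows))
        X = Vt[-1]
        return X[:3] / X[3]                # homogeneous -> XYZ
    ```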

  15. A goggle navigation system for cancer resection surgery

    NASA Astrophysics Data System (ADS)

    Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald

    2014-02-01

    We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head mount display (HMD) device, a near infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and the background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real-time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.
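
    The co-registration and overlay step can be sketched with OpenCV; the identity homography and file names below are illustrative placeholders, not the system's calibration data:

    ```python
    import numpy as np
    import cv2

    visible = cv2.imread("scene_cmos.png")                     # placeholder inputs
    fluor = cv2.imread("scene_nir.png", cv2.IMREAD_GRAYSCALE)

    H = np.eye(3)  # stand-in for the calibrated co-registration homography
    warped = cv2.warpPerspective(fluor, H, (visible.shape[1], visible.shape[0]))

    overlay = visible.copy()
    overlay[..., 1] = np.maximum(overlay[..., 1], warped)      # fluorescence in green
    fused = cv2.addWeighted(visible, 0.4, overlay, 0.6, 0)     # HMD display image
    ```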

  16. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovéenbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    A digital elevation model (DEM) is a key tool for analyzing spatially dependent processes, including snow accumulation on slopes and glacier mass balance. Acquiring DEMs at short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure-from-Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified on the images and allow reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding Ground Control Point (GCP) constraints or by including the GPS positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called MicMac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climate of the Arctic region considered (Ny Ålesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows are a limitation because of the lack of color dynamics in the digital cameras we used. A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty of the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.

  17. OpenCV and TYZX : video surveillance for tracking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jim; Spencer, Andrew; Chu, Eric

    2008-08-01

    As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software, which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real time. Future work on the camera system may include three-dimensional volumetric tracking using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
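
    As a generic illustration of this kind of room tracking (not the in-house or TYZX implementation), a minimal OpenCV loop using background subtraction and blob centroids:

    ```python
    import cv2

    cap = cv2.VideoCapture("room.avi")        # placeholder video source
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)   # remove speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:      # ignore small blobs
                x, y, w, h = cv2.boundingRect(c)
                print("person near", (x + w // 2, y + h // 2))
    cap.release()
    ```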

  18. Software for minimalistic data management in large camera trap studies

    PubMed Central

    Krishnappa, Yathin S.; Turner, Wendy C.

    2014-01-01

    The use of camera traps is now widespread and their importance in wildlife studies is well understood. Camera trap studies can produce millions of photographs, and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study's three-year period. The software system has the ability to automatically extract metadata from images, and to add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies. PMID:25110471
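
    The metadata-extraction step can be approximated with standard libraries; a minimal sketch using Pillow, not the project's own code or schema:

    ```python
    from PIL import Image
    from PIL.ExifTags import TAGS

    def read_exif(path):
        """Return human-readable EXIF tags (capture time, camera model, ...)
        from one camera-trap photograph."""
        exif = Image.open(path).getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    # e.g. read_exif("trap042/IMG_0001.JPG").get("DateTime")
    ```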

  19. Calibration of a dual-PTZ camera system for stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

    In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be solved by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
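
    The corner-misalignment cost and its Nelder-Mead minimisation can be sketched with scipy; `project` stands in for the full PTZ camera model, which the abstract does not spell out:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def corner_cost(params, observed_corners, project):
        """Sum of squared misalignments between predicted and observed image
        corners for a candidate set of calibration parameters."""
        return np.sum((project(params) - observed_corners) ** 2)

    def calibrate(initial_params, observed_corners, project):
        res = minimize(corner_cost, initial_params,
                       args=(observed_corners, project), method="Nelder-Mead")
        return res.x   # refined intrinsic/extrinsic parameters
    ```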

  20. Geometrical calibration television measuring systems with solid state photodetectors

    NASA Astrophysics Data System (ADS)

    Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.

    2000-11-01

    Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, criminalistics. Measurement by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. There are many television measuring systems using expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason the inexpensive mountingless camera SK1004CP (format 1/3", cost up to $40) and an Aver2000 frame grabber were used in the experiments.

  1. Underwater Photo-Elicitation: A New Experiential Marine Education Technique

    ERIC Educational Resources Information Center

    Andrews, Steve; Stocker, Laura; Oechel, Walter

    2018-01-01

    Underwater photo-elicitation is a novel experiential marine education technique that combines direct experience in the marine environment with the use of digital underwater cameras. A program called Show Us Your Ocean! (SUYO!) was created, utilising a mixed methodology (qualitative and quantitative methods) to test the efficacy of this technique.…

  2. Lights! Camera! Learning!

    ERIC Educational Resources Information Center

    Donnelly, Laura

    2007-01-01

    When teaching science to kids, a visual approach is good. Humor is also good. And blowing things up is really, really good. At least that is what educators at the Exploratorium in San Francisco have found in the nine years since the museum began producing a live, off-the-cuff competition called Iron Science Teacher. Modeled after the Japanese cult…

  3. Methodological Issues in Documentary Ethnography: A Renewed Call for Putting Cameras in the Hands of the People.

    ERIC Educational Resources Information Center

    Huesca, Robert

    The participatory method of image production holds enormous potential for communication and journalism scholars operating out of a critical/cultural framework. The methodological potentials of mechanical reproduction were evident in the 1930s, when Walter Benjamin contributed three enduring concepts: questioning the art/document dichotomy; placing…

  4. Photovoice in the Diversity Classroom: Engagement, Voice, and the "Eye/I" of the Camera

    ERIC Educational Resources Information Center

    Chio, Vanessa C. M.; Fandt, Patricia M.

    2007-01-01

    A response to calls for more self-reflective and inclusive pedagogy, this article considers pedagogical and teaching possibilities offered by Photovoice--a community and participatory action research methodology developed by Wang and Burris. Extrapolating Photovoice to the context of the diversity classroom, the authors discuss how the methodology…

  5. If You Build It, They Will Scan: Oxford University's Exploration of Community Collections

    ERIC Educational Resources Information Center

    Lee, Stuart D.; Lindsay, Kate

    2009-01-01

    Traditional large digitization projects demand massive resources from the central unit (library, museum, or university) that has acquired funding for them. Another model, enabled by easy access to cameras, scanners, and web tools, calls for public contributions to community collections of artifacts. In 2009, the University of Oxford ran a…

  6. More About Hazard-Response Robot For Combustible Atmospheres

    NASA Technical Reports Server (NTRS)

    Stone, Henry W.; Ohm, Timothy R.

    1995-01-01

    Report presents additional information about the design and capabilities of a mobile hazard-response robot called "Hazbot III," designed to operate safely in combustible and/or toxic atmospheres. The robot includes cameras and chemical sensors that help human technicians determine the location and nature of a hazard, so that the emergency team can decide how to eliminate it without approaching it themselves.

  7. In the L1B2 products, why are the block dimensions different for some cameras and bands?

    Atmospheric Science Data Center

    2014-12-08

    Most of the time that MISR is acquiring Earth imagery it operates in a configuration called Global Mode, which allows the spatial resolution to be set for each individual channel (there are 36 channels on MISR: 4 bands at each of 9...

  8. Guide to Using Onionskin Analysis Code (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Morzinski, Jerome Arthur

    2016-09-15

    This document is a guide to using R code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document, see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve showing the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called test series. The hope is that the new test series of curves is statistically similar to the baseline population.

  9. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
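
    The real-time centroiding step can be sketched with scipy's connected-component labelling; the threshold value is illustrative:

    ```python
    import numpy as np
    from scipy import ndimage

    def centroid_spots(frame, threshold=50):
        """Locate electron/ion spots on one camera frame; return centroids
        and integrated intensities (the intensities are what get correlated
        with peak heights on the time-of-flight trace for multi-hit events)."""
        labels, n = ndimage.label(frame > threshold)
        idx = range(1, n + 1)
        centroids = ndimage.center_of_mass(frame, labels, idx)
        intensities = ndimage.sum(frame, labels, idx)
        return centroids, intensities
    ```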

  10. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  11. Recognition of human activity characteristics based on state transitions modeling technique

    NASA Astrophysics Data System (ADS)

    Elangovan, Vinayak; Shirkhodaie, Amir

    2012-06-01

    Human Activity Discovery & Recognition (HADR) is a complex, diverse and challenging task, yet an active area of ongoing research in the Department of Defense. By detecting, tracking, and characterizing cohesive human interactional activity patterns, potential threats can be identified, which can significantly improve situation awareness, particularly in Persistent Surveillance Systems (PSS). Understanding the nature of such dynamic activities inevitably involves interpreting a collection of spatiotemporally correlated activities with respect to a known context. In this paper, we present a state transition model for recognizing the characteristics of human activities with a link to a prior context-based ontology. Modeling the state transitions between successive evidential events determines the activities' temperament. The proposed model comprises six categories of state transitions: object handling, visibility, entity-entity relation, human posture, human kinematics, and distance to target. The model generates semantic annotations describing the human interactional activities via a technique called Casual Event State Inference (CESI). The proposed approach uses a low-cost Kinect depth camera for indoor monitoring and a normal optical camera for outdoor monitoring. Experimental results are presented to demonstrate the effectiveness and efficiency of the proposed technique.
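
    A state-transition formulation of this kind can be sketched as a lookup from (previous state, evidential event) to a new state plus a semantic annotation; the six categories follow the paper, while the example states, events and annotations are invented for illustration:

    ```python
    # The six transition categories named in the paper.
    CATEGORIES = ["object handling", "visibility", "entity-entity relation",
                  "posture", "kinematics", "distance to target"]

    # Illustrative transition table: (state, event) -> (next state, annotation).
    TRANSITIONS = {
        ("standing", "bends_down"): ("crouching", "person crouches near target"),
        ("crouching", "picks_up"):  ("carrying",  "person picks up an object"),
        ("carrying", "walks_away"): ("leaving",   "person leaves with object"),
    }

    def annotate(state, event):
        """Advance the activity state machine and emit a semantic annotation."""
        return TRANSITIONS.get((state, event), (state, None))

    state = "standing"
    for event in ["bends_down", "picks_up", "walks_away"]:
        state, note = annotate(state, event)
        if note:
            print(note)
    ```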

  12. FRIPON, the French fireball network

    NASA Astrophysics Data System (ADS)

    Colas, F.; Zanda, B.; Bouley, S.; Vaubaillon, J.; Marmo, C.; Audureau, Y.; Kwon, M. K.; Rault, J. L.; Caminade, S.; Vernazza, P.; Gattacceca, J.; Birlan, M.; Maquet, L.; Egal, A.; Rotaru, M.; Gruson-Daniel, Y.; Birnbaum, C.; Cochard, F.; Thizy, O.

    2015-10-01

    FRIPON (Fireball Recovery and InterPlanetary Observation Network) [4] (Colas et al., 2014) was recently funded by the ANR (Agence Nationale de la Recherche). Its aim is to connect meteoritical science with asteroidal and cometary science in order to better understand solar system formation and evolution. The main idea is to set up an observation network covering the whole French territory to collect a large number of meteorites (one or two per year) with accurate orbits, allowing us to pinpoint possible parent bodies. 100 all-sky cameras will be installed by the end of 2015, forming a dense network with an average distance of 100 km between stations. To maximize the accuracy of orbit determination, we will combine our optical data with radar data from the GRAVES beacon received by 25 stations [5] (Rault et al., 2015). As both the setting up of the network and the creation of meteorite search teams will need manpower beyond our small team of professionals, we are developing a citizen science network called Vigie-Ciel [6] (Zanda et al., 2015). The public at large will thus be able to simply use our data, participate in search campaigns or even set up their own cameras.

  13. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.

  14. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
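
    The near-field requirement quoted above follows from the Fraunhofer distance d = 2D^2/λ; a quick check against the figures in the text (λ = 550 nm assumed for visible light, AEOS taken as a 3.6 m aperture):

    ```python
    def fraunhofer_km(aperture_m, wavelength_nm=550):
        """Near-field (Fraunhofer) distance 2*D^2/lambda, in kilometres."""
        return 2 * aperture_m ** 2 / (wavelength_nm * 1e-9) / 1e3

    print(f"{fraunhofer_km(1.0):,.0f} km")   # ~3,600 km for a 1 m telescope
    print(f"{fraunhofer_km(3.6):,.0f} km")   # ~47,000 km for the 3.6 m AEOS
    ```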

  15. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    PubMed

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  16. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, where an object is in partial or full view in one camera while fully visible in another. Object registration is achieved by determining the location of common features on the moving object across simultaneous frames; perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that the scenes from at least two nearby cameras overlap. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  17. View of 'Cape St. Mary' from 'Cape Verde' (False Color)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    As part of its investigation of 'Victoria Crater,' NASA's Mars Exploration Rover Opportunity examined a promontory called 'Cape St. Mary' from the vantage point of 'Cape Verde,' the next promontory counterclockwise around the crater's deeply scalloped rim. This view of Cape St. Mary combines several exposures taken by the rover's panoramic camera into a false-color mosaic. Contrast has been adjusted to improve the visibility of details in shaded areas.

    The upper portion of the crater wall contains a jumble of material tossed outward by the impact that excavated the crater. This vertical cross-section through the blanket of ejected material surrounding the crater was exposed by erosion that expanded the crater outward from its original diameter, according to scientists' interpretation of the observations. Below the jumbled material in the upper part of the wall are layers that survive relatively intact from before the crater-causing impact. Near the base of the Cape St. Mary cliff are layers with a pattern called 'crossbedding,' intersecting with each other at angles, rather than parallel to each other. Large-scale crossbedding can result from material being deposited as wind-blown dunes.

    The images combined into this mosaic were taken during the 970th Martian day, or sol, of Opportunity's Mars-surface mission (Oct. 16, 2006). The panoramic camera took them through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters. The false color enhances subtle color differences among materials in the rocks and soils of the scene.

  18. Real-time depth camera tracking with geometrically stable weight algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming

    2017-03-01

    We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point (ICP) algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in the pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation. Our pipeline can be fully parallelized on a GPU and incorporated seamlessly into current real-time depth camera tracking systems. Second, we compare state-of-the-art weighting algorithms and propose a weight degradation algorithm according to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera tracking system achieves state-of-the-art results in both accuracy and efficiency.
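
    One simplified reading of the geometrically stable weighting (not the authors' exact formulation) is to up-weight points that constrain the least-certain direction of the 6-DoF pose, using the standard point-to-plane ICP Jacobian:

    ```python
    import numpy as np

    def stability_weights(points, normals, eps=1e-6):
        """Per-point weights that emphasise correspondences constraining the
        weakest pose direction. points, normals: (N, 3) arrays."""
        J = np.hstack([np.cross(points, normals), normals])  # (N, 6) Jacobians
        H = J.T @ J                                          # 6x6 information matrix
        eigval, eigvec = np.linalg.eigh(H)
        weak = eigvec[:, 0]              # least-constrained pose direction
        w = np.abs(J @ weak)             # contribution to that direction
        return w / (w.max() + eps)
    ```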

  19. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    NASA Astrophysics Data System (ADS)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  20. Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography

    USGS Publications Warehouse

    Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.

    1972-01-01

    Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.

  1. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A high-dynamic-range scientific camera system designed and manufactured by Thermo Electron for scientific and medical applications is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each highly transparent pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR(TM) algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC connected to the camera via Gigabit Ethernet.

  2. Digital dental photography. Part 4: choosing a camera.

    PubMed

    Ahmad, I

    2009-06-13

    With so many cameras and systems on the market, choosing the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single-lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits, close-up or macro images of the dentition and study casts. However, for the sake of completeness, some other camera systems that are used in dentistry are also discussed.

  3. The new camera calibration system at the US Geological Survey

    USGS Publications Warehouse

    Light, D.L.

    1992-01-01

    Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters which can increase plotting accuracy over that of older analog instrumentation technology from previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. An explanation of the Geological Survey's calibration facility and the additional calibration parameters now being provided in the USGS calibration certificate are reviewed. -Author

  4. 3D display for enhanced tele-operation and other applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-04-01

    In this paper, we report on the use of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  5. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology for both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot guarantee estimation accuracy. We consider the distribution of features in actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of the fundamental matrix estimation. Finally, we perform an experiment computing the relationship of a pair of stereo cameras to demonstrate the accuracy of the algorithm.
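
    Steps (i)-(iii) of the abstract map directly onto standard routines. The sketch below uses OpenCV; the intrinsic matrices K1 and K2 and the matched point arrays are assumed to come from prior calibration and feature matching, and the paper's regional weighted normalization is not reproduced here (cv2.findFundamentalMat's RANSAC estimator stands in for it).

```python
import cv2
import numpy as np

def external_params_from_matches(pts1, pts2, K1, K2):
    """Recover rotation and translation between two calibrated cameras
    from matched image points via the F -> E -> (R, t) chain.

    pts1, pts2: (N, 2) float arrays of corresponding points.
    K1, K2:     (3, 3) intrinsic matrices, assumed already calibrated.
    """
    # (i) Robustly estimate the fundamental matrix from correspondences.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    # (ii) Compute the essential matrix from F and the intrinsics.
    E = K2.T @ F @ K1
    # (iii) Decompose E into R and t (t is recovered only up to scale).
    # Note: recoverPose takes a single camera matrix, so this step is
    # exact only when both cameras share the same intrinsics.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K1)
    return R, t, inliers
```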

  6. The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.

    PubMed

    Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco

    2015-01-01

    Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.

  7. RESTORATION OF ATMOSPHERICALLY DEGRADED IMAGES. VOLUME 3.

    DTIC Science & Technology

    AERIAL CAMERAS, LASERS, ILLUMINATION, TRACKING CAMERAS, DIFFRACTION, PHOTOGRAPHIC GRAIN, DENSITY, DENSITOMETERS, MATHEMATICAL ANALYSIS, OPTICAL SCANNING, SYSTEMS ENGINEERING, TURBULENCE, OPTICAL PROPERTIES, SATELLITE TRACKING SYSTEMS.

  8. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    A high-speed imaging facility is important and necessary for building a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for ultrahigh-speed research. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of beam splitting in the image space behind a lens of long focal length, mainly consists of a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images of 1024×1024 pixels each can be captured simultaneously with the developed camera. Besides, this camera system possesses good linearity, uniform spatial response, and an equivalent background illumination as low as 5 electrons/pixel/s, which fully meets the measurement requirements of the Dragon-I LIA.

  9. Layers of 'Cabo Frio' in 'Victoria Crater'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This view of 'Victoria crater' is looking southeast from 'Duck Bay' towards the dramatic promontory called 'Cabo Frio.' The small crater in the right foreground, informally known as 'Sputnik,' is about 20 meters (about 65 feet) away from the rover, the tip of the spectacular, layered, Cabo Frio promontory itself is about 200 meters (about 650 feet) away from the rover, and the exposed rock layers are about 15 meters (about 50 feet) tall. This is an approximately true-color rendering of images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day (Sept. 28, 2006), using the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  10. Layers of 'Cabo Frio' in 'Victoria Crater' (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This view of 'Victoria crater' is looking southeast from 'Duck Bay' towards the dramatic promontory called 'Cabo Frio.' The small crater in the right foreground, informally known as 'Sputnik,' is about 20 meters (about 65 feet) away from the rover, the tip of the spectacular, layered, Cabo Frio promontory itself is about 200 meters (about 650 feet) away from the rover, and the exposed rock layers are about 15 meters (about 50 feet) tall. This is a red-blue stereo anaglyph generated from images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day (Sept. 28, 2006), using the camera's 430-nanometer filters.

  11. Layers of 'Cape Verde' in 'Victoria Crater'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This view of Victoria crater is looking north from 'Duck Bay' towards the dramatic promontory called 'Cape Verde.' The dramatic cliff of layered rocks is about 50 meters (about 165 feet) away from the rover and is about 6 meters (about 20 feet) tall. The taller promontory beyond that is about 100 meters (about 325 feet) away, and the vista beyond that extends away for more than 400 meters (about 1300 feet) into the distance. This is an approximately true-color rendering of images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day (Sept. 28, 2006), using the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  12. Layers of 'Cape Verde' in 'Victoria Crater' (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This view of Victoria crater is looking north from 'Duck Bay' towards the dramatic promontory called 'Cape Verde.' The dramatic cliff of layered rocks is about 50 meters (about 165 feet) away from the rover and is about 6 meters (about 20 feet) tall. The taller promontory beyond that is about 100 meters (about 325 feet) away, and the vista beyond that extends away for more than 400 meters (about 1300 feet) into the distance. This is a red-blue stereo anaglyph generated from images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day (Sept. 28, 2006), using the camera's 430-nanometer filters.

  13. PIA01492

    NASA Image and Video Library

    1998-10-30

    This picture of Neptune was produced from the last whole-planet images taken through the green and orange filters on NASA's Voyager 2 narrow-angle camera. The images were taken at a range of 4.4 million miles from the planet, 4 days and 20 hours before closest approach. The picture shows the Great Dark Spot and its companion bright smudge; on the west limb, the fast-moving bright feature called Scooter and the little dark spot are visible. These clouds were seen to persist for as long as Voyager's cameras could resolve them. North of these, a bright cloud band similar to the south polar streak may be seen. http://photojournal.jpl.nasa.gov/catalog/PIA01492

  14. Curiosity Drill After Drilling at Telegraph Peak

    NASA Image and Video Library

    2015-03-06

    This view from the Mast Camera (Mastcam) on NASA's Curiosity Mars rover shows the rover's drill just after finishing a drilling operation at a target rock called "Telegraph Peak" on Feb. 24, 2015, the 908th Martian day, or sol, of the rover's work on Mars. Three sols later, a fault-protection action by the rover halted a process of transferring sample powder that was collected during this drilling. The image is in raw color, as recorded directly by the camera, and has not been white-balanced. The fault-protection event, triggered by an irregularity in electrical current, led to engineering tests in subsequent days to diagnose the underlying cause. http://photojournal.jpl.nasa.gov/catalog/PIA19145

  15. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used remote control through a ground control system communicating over a radio frequency (RF) modem at a bandwidth of about 430 MHz. However, this existing RF-modem method has limitations for long-distance communication. We used the smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi connectivity to implement a UAV communication module system and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system comprises an image-capturing device for the drone over areas that need imaging, together with software for loading and managing the smart camera. The system is composed of automatic shooting using the smart camera's sensors and shooting-catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Source Computer Vision), RTKLIB, and Open Drone Map.

  16. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
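
    The processing chain described above (difference the before/after images to isolate the laser spot, then range from the horizontal disparity) reduces to a few lines. The sketch below is a simplified reading of that chain; the threshold, centroid helper, and parameter names are illustrative, not the patent's implementation.

```python
import numpy as np

def laser_spot_range(left_before, left_after, right_before, right_after,
                     focal_px, baseline_m, threshold=30):
    """Estimate the range to a laser-designated target from a stereo pair.

    The '*_before' frames are captured without the laser, so differencing
    removes the common background and leaves only the laser spot.
    """
    def spot_centroid(before, after):
        diff = np.abs(after.astype(np.int32) - before.astype(np.int32))
        ys, xs = np.nonzero(diff > threshold)   # pixels belonging to the spot
        return xs.mean(), ys.mean()

    xl, _ = spot_centroid(left_before, left_after)
    xr, _ = spot_centroid(right_before, right_after)
    disparity = xl - xr                         # horizontal disparity, pixels
    # Classical stereometric ranging: Z = f * B / d
    return focal_px * baseline_m / disparity
```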

  17. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometric instability that may cause problems when they are used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the stability of the system calibration, the proposed methods are simulation-based. Experimental results are shown in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
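
    A simulation-based stability check of the kind the paper proposes can be caricatured as: project synthetic object points through the calibration from one session, reconstruct them with the calibration from another session, and report the 3D discrepancy. The sketch below uses a bare pinhole/DLT model as a simplified stand-in for the authors' collinearity-based procedure.

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3D points X (4, N) with a 3x4 camera matrix."""
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two cameras."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def stability_impact(P1_a, P2_a, P1_b, P2_b, X):
    """RMS 3D error when points imaged under session-'a' calibration are
    reconstructed with session-'b' parameters (X is 4xN homogeneous)."""
    x1, x2 = project(P1_a, X), project(P2_a, X)
    X_b = np.stack([triangulate(P1_b, P2_b, x1[:, i], x2[:, i])
                    for i in range(X.shape[1])], axis=1)
    return np.sqrt(np.mean(np.sum((X_b - X[:3] / X[3]) ** 2, axis=0)))
```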

  18. Enhanced technologies for unattended ground sensor systems

    NASA Astrophysics Data System (ADS)

    Hartup, David C.

    2010-04-01

    Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.

  19. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    NASA Astrophysics Data System (ADS)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital single lens reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue-light illumination and coupled bandpass-filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single-SLR-camera stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with the in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless steel plate subjected to radiation heating was also demonstrated.

  20. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometric instability that may cause problems when they are used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the stability of the system calibration, the proposed methods are simulation-based. Experimental results are shown in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  1. 640x480 PtSi Stirling-cooled camera system

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.

    1992-09-01

    A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 x 480 PtSi-MOS focal plane array. The camera system achieves an NEDT of 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad, the vertical and horizontal minimum resolvable temperature (MRT) is approximately 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.

  2. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can derive the camera-to-INS alignment automatically using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie-point data is required.
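
    The abstract does not state the solution method. One common way to pose an imager-to-INS alignment of this kind is as a hand-eye calibration problem: per-frame camera poses relative to the fixated area (e.g., from structure-from-motion on the orbit imagery) are paired with the corresponding INS poses, and a standard solver recovers the fixed transform. The sketch below is that alternative formulation, not the paper's algorithm.

```python
import cv2

def camera_to_ins_alignment(R_ins, t_ins, R_cam, t_cam):
    """Estimate the fixed camera-to-INS transform from paired motions.

    R_ins, t_ins: per-frame INS attitudes/positions (lists of 3x3 and 3x1).
    R_cam, t_cam: per-frame camera poses w.r.t. the fixated ground area.
    Posing the alignment as hand-eye calibration (AX = XB) lets a
    standard solver recover the rotation and translation.
    """
    R_align, t_align = cv2.calibrateHandEye(
        R_ins, t_ins, R_cam, t_cam, method=cv2.CALIB_HAND_EYE_TSAI)
    return R_align, t_align
```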

  3. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 x 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed, surface-channel array of 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results for the complete camera system, circuit descriptions and the changes made to those circuits as a result of integration and test, a maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  4. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    PubMed

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive surgery (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, a camera mount location compatible with comfort and loupes/headlight usage, battery life sufficient for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time than existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purpose of creating surgical videos for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera setups are needed to encourage the continued creation of videos illustrating the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  5. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    NASA Astrophysics Data System (ADS)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering multiple application needs and the limitations arising from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3-Mpixel CMOS sensor with non-destructive readout capability, which enables fast monitoring of smaller regions of interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to preset levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X, and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma while sub-ms monitoring, and even multi-camera correlated edge plasma turbulence measurements of smaller areas, can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and for similar setups on future long-pulse fusion experiments such as ITER are discussed.
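
    The on-board evaluation the abstract describes (minimum/maximum and mean-versus-level tests on ROI data that can retarget the readout or raise output signals) amounts to a few array operations per readout. The sketch below mimics that logic off-camera; the flag names and thresholds are illustrative, not EDICAM firmware.

```python
import numpy as np

def roi_event_check(frame, roi, lo_level, hi_level):
    """Evaluate one region of interest the way a simple in-camera
    algorithm might: min/max and mean-compared-to-level tests.

    frame: 2D array from one (non-destructive) readout.
    roi:   (y0, y1, x0, x1) window within the frame.
    Returns flags that a readout controller could use to switch ROIs
    or generate an output signal.
    """
    y0, y1, x0, x1 = roi
    window = frame[y0:y1, x0:x1]
    return {
        "min": window.min(),
        "max": window.max(),
        "mean_below_lo": window.mean() < lo_level,
        "mean_above_hi": window.mean() > hi_level,
    }
```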

  6. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.

  7. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during the radiometric response calibration, to eliminate the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns were measured. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the true luminance of the scene. This compensates for the limitation of stitching methods that make images look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
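
    Undoing each sensor's measured response and vignetting pattern before blending, which is what lets the panorama reflect scene luminance rather than per-camera gain, can be sketched as below. The inverse-response lookup table, flat-field image, and dark frame are assumed to come from a calibration like the one described; the function itself is illustrative.

```python
import numpy as np

def to_scene_luminance(raw, inverse_response, flat_field, dark):
    """Convert one camera's raw image to relative scene luminance.

    inverse_response: 1D lookup table mapping digital numbers back to
                      linear exposure (from radiometric calibration);
                      apply per channel, since R, G and B respond
                      differently to the same illuminance.
    flat_field:       normalized vignetting image (1.0 at the optical
                      center), measured under uniform illumination.
    dark:             dark frame at the same settings.
    """
    counts = np.clip(raw.astype(np.intp) - dark.astype(np.intp), 0, None)
    linear = inverse_response[counts]       # linearize the sensor response
    return linear / flat_field              # divide out vignetting falloff
```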

  8. Hubble Finds a Little Gem

    NASA Image and Video Library

    2015-08-07

    This colorful bubble is a planetary nebula called NGC 6818, also known as the Little Gem Nebula. It is located in the constellation of Sagittarius (The Archer), roughly 6,000 light-years away from us. The rich glow of the cloud is just over half a light-year across, humongous compared to its tiny central star but still a little gem on a cosmic scale. When stars like the sun enter "retirement," they shed their outer layers into space to create glowing clouds of gas called planetary nebulae. This ejection of mass is uneven, and planetary nebulae can have very complex shapes. NGC 6818 shows knotty filament-like structures and distinct layers of material, with a bright and enclosed central bubble surrounded by a larger, more diffuse cloud. Scientists believe that the stellar wind from the central star propels the outflowing material, sculpting the elongated shape of NGC 6818. As this fast wind smashes through the slower-moving cloud, it creates particularly bright blowouts at the bubble's outer layers. Hubble previously imaged this nebula back in 1997 with its Wide Field Planetary Camera 2, using a mix of filters that highlighted emission from ionized oxygen and hydrogen. This image, while from the same camera, uses different filters to reveal a different view of the nebula. Image credit: ESA/Hubble & NASA; Acknowledgement: Judy Schmidt

  9. Mission Report on the Orbiter Camera Payload System (OCPS) Large Format Camera (LFC) and Attitude Reference System (ARS)

    NASA Technical Reports Server (NTRS)

    Mollberg, Bernard H.; Schardt, Bruton B.

    1988-01-01

    The Orbiter Camera Payload System (OCPS) is an integrated photographic system which is carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereo photography of great geometric fidelity in multiple base-to-height (B/H) ratios. A secondary, supporting system to the LFC is the Attitude Reference System (ARS), which is a dual lens Stellar Camera Array (SCA) and camera support structure. The SCA is a 70-mm film system which is rigidly mounted to the LFC lens support structure and which, through the simultaneous acquisition of two star fields with each earth-viewing LFC frame, makes it possible to determine precisely the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with Shuttle launch conditions and the on-orbit environment. The full-up OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on October 5, 1984, as a major payload aboard mission STS 41-G. This report documents the system design, the ground testing, the flight configuration, and an analysis of the results obtained during the Challenger mission STS 41-G.

  10. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating the 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded platform. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs the complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern at a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
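
    Interpolating calibration between the pre-defined zoom stops, as the authors describe, might look like the sketch below; parameterizing by motor position and interpolating focal length and principal point independently is an assumption about their scheme, and all numbers are placeholders.

```python
import numpy as np

# Calibrated zoom stops: servo position -> intrinsics (placeholder values;
# real ones come from the chessboard calibration at each stop).
ZOOM_POS = np.array([0, 250, 500, 750, 1000])
FOCAL_PX = np.array([850.0, 1300.0, 2100.0, 3400.0, 5600.0])
CX = np.array([321.0, 322.5, 324.0, 326.0, 327.5])
CY = np.array([239.0, 239.5, 240.5, 241.0, 242.0])

def intrinsics_at(zoom):
    """Camera matrix for an arbitrary zoom setting, by piecewise-linear
    interpolation between the calibrated stops."""
    f = np.interp(zoom, ZOOM_POS, FOCAL_PX)
    cx = np.interp(zoom, ZOOM_POS, CX)
    cy = np.interp(zoom, ZOOM_POS, CY)
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])
```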

  11. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

    A multispectral intensified CCD imager combined with a ring-laser-gyroscope-based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transportation System Flight No. 70, STS-70). The camera includes a six-position filter wheel, a third-generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], and a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and the technical trade-offs involved in camera optimization, and presents imagery observed during the shuttle mission.

  12. 3D vision upgrade kit for TALON robot

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that the stock components could be electrically disconnected and removed, and the upgrade components coupled directly to the existing mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  13. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  14. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  15. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    When designing a system employing thermal cameras, one always faces the problem of choosing the camera type best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor the tried and tested solutions they are used to. They do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive in terms of the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operation were applied, to obtain realistic camera performance figures. For example, there were significant differences between the measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to provide help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing thermal camera systems with both cooled and uncooled focal plane arrays. Cameras with similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  16. Opportunity at Work Inside Victoria Crater

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA's Mars Exploration Rover Opportunity used its front hazard-identification camera to capture this wide-angle view of its robotic arm extended to a rock in a bright-toned layer inside Victoria Crater.

    The image was taken during the rover's 1,322nd Martian day, or sol (Oct. 13, 2007).

    Victoria Crater has a scalloped shape of alternating alcoves and promontories around the crater's circumference. Opportunity descended into the crater two weeks earlier, within an alcove called 'Duck Bay.' Counterclockwise around the rim, just to the right of the arm in this image, is a promontory called 'Cabo Frio.'

  17. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require the acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application in our augmented reality visualization system for laparoscopic surgery.
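
    For reference, the OpenCV procedure used here as the baseline is well documented; a minimal chessboard version looks like the sketch below. The board geometry, square size, and file paths are placeholders, and roughly 30 views are gathered to match the comparison in the paper.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)        # inner-corner count of the chessboard (placeholder)
SQUARE_MM = 10.0      # square size in mm (placeholder)

# 3D corner coordinates in the board's own frame (Z = 0 plane).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts = [], []
for fname in glob.glob("calib_images/*.png"):   # ~30 views, as in the paper
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```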

  18. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require the acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application in our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  19. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require the acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.

  20. Measurement of Separated Flow Structures Using a Multiple-Camera DPIV System. [conducted in the Langley Subsonic Basic Research Tunnel

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Bartram, Scott M.

    2001-01-01

    A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
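
    The core of the spatial cross correlation analysis is FFT-based correlation of interrogation windows between the two exposures, with the correlation peak giving the local particle displacement; the median-filter validation is then a local outlier test on the resulting vector field. The sketch below shows both steps in generic form; window handling and the tolerance are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from scipy.ndimage import median_filter

def window_displacement(win_a, win_b):
    """Particle displacement between two interrogation windows from the
    peak of their FFT-based cross correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    return dx - cx, dy - cy            # displacement in pixels (x, y)

def median_validate(component, tol=2.0):
    """Flag vectors whose component deviates from the local 3x3 median
    by more than `tol` pixels (a common PIV outlier test)."""
    return np.abs(component - median_filter(component, size=3)) <= tol
```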

  1. Optical design for CETUS: a wide-field 1.5m aperture UV payload being studied for a NASA probe class mission study

    NASA Astrophysics Data System (ADS)

    Woodruff, Robert A.; Hull, Tony; Heap, Sara R.; Danchi, William; Kendrick, Stephen E.; Purves, Lloyd

    2017-09-01

    We are developing a NASA Headquarters selected Probe-class mission concept called the Cosmic Evolution Through UV Spectroscopy (CETUS) mission, which includes a 1.5-m aperture diameter large field-of-view (FOV) telescope optimized for UV imaging, multi-object spectroscopy, and point-source spectroscopy. The optical system includes a Three Mirror Anastigmatic (TMA) telescope that simultaneously feeds three separate scientific instruments: the near-UV (NUV) Multi-Object Spectrograph (MOS) with a next-generation Micro-Shutter Array (MSA); the two-channel camera covering the far-UV (FUV) and NUV spectrum; and the point-source spectrograph covering the FUV and NUV region with selectable R ~ 40,000 echelle modes and R ~ 2,000 first-order modes. The optical system includes fine guidance sensors, wavefront sensing, and spectral and flat-field in-flight calibration sources. This paper will describe the current optical design of CETUS.

  2. Optical design for CETUS: a wide-field 1.5m aperture UV payload being studied for a NASA probe class mission study

    NASA Astrophysics Data System (ADS)

    Woodruff, Robert

    2018-01-01

    We are developing a NASA Headquarters selected Probe-class mission concept called the Cosmic Evolution Through UV Spectroscopy (CETUS) mission, which includes a 1.5-m aperture diameter large field-of-view (FOV) telescope optimized for UV imaging, multi-object spectroscopy, and point-source spectroscopy. The optical system includes a Three Mirror Anastigmatic (TMA) telescope that simultaneously feeds three separate scientific instruments: the near-UV (NUV) Multi-Object Spectrograph (MOS) with a next-generation Micro-Shutter Array (MSA); the two-channel camera covering the far-UV (FUV) and NUV spectrum; and the point-source spectrograph covering the FUV and NUV region with selectable R ~ 40,000 echelle modes and R ~ 2,000 first-order modes. The optical system includes fine guidance sensors, wavefront sensing, and spectral and flat-field in-flight calibration sources. This paper will describe the current optical design of CETUS.

  3. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype built around a fully digital FADC-based readout system for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and on a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, and high voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for full-scale 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  4. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    NASA Astrophysics Data System (ADS)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  5. Data Acquisition System of Nobeyama MKID Camera

    NASA Astrophysics Data System (ADS)

    Nagai, M.; Hisamatsu, S.; Zhai, G.; Nitta, T.; Nakai, N.; Kuno, N.; Murayama, Y.; Hattori, S.; Mandal, P.; Sekimoto, Y.; Kiuchi, H.; Noguchi, T.; Matsuo, H.; Dominjon, A.; Sekiguchi, S.; Naruse, M.; Maekawa, J.; Minamidani, T.; Saito, M.

    2018-05-01

    We are developing a superconducting camera based on microwave kinetic inductance detectors (MKIDs) to observe the 100-GHz continuum with the Nobeyama 45-m telescope. A data acquisition (DAQ) system for the camera has been designed to operate the MKIDs with the telescope. This system is required to connect the telescope control system (COSMOS) to the readout system of the MKIDs (MKID DAQ), which employs the frequency-sweeping probe scheme. The DAQ system is also required to record the reference signal of the beam switching, so that the analysis pipeline can demodulate the data and suppress the sky fluctuation. The system has to be able to merge and save all data acquired both by the camera and by the telescope, including the cryostat temperature and pressure and the telescope pointing. A collection of software that implements these functions and runs as a TCP/IP server on a workstation was developed. The server accepts commands and observation scripts from COSMOS and then issues commands to MKID DAQ to configure and start data acquisition. We commissioned the MKID camera on the Nobeyama 45-m telescope and obtained successful scan signals of the atmosphere and of the Moon.
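
    The abstract describes the DAQ software as a TCP/IP server that accepts commands from COSMOS and forwards configuration and start commands to MKID DAQ. A minimal sketch of that pattern is given below; the command names and port number are assumptions for illustration, not the actual COSMOS interface.

```python
import socketserver

class DAQHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            cmd = line.decode().strip()
            if cmd == "CONFIGURE":
                # would forward configuration parameters to MKID DAQ here
                self.wfile.write(b"OK configured\n")
            elif cmd == "START":
                # would trigger MKID DAQ acquisition and data merging here
                self.wfile.write(b"OK acquiring\n")
            else:
                self.wfile.write(b"ERROR unknown command\n")

with socketserver.ThreadingTCPServer(("0.0.0.0", 5000), DAQHandler) as srv:
    srv.serve_forever()  # accept commands and observation scripts
```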

  6. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  7. Developing a multi-Kinect-system for monitoring in dairy cows: object recognition and surface analysis using wavelets.

    PubMed

    Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W

    2016-09-01

    Camera-based systems for dairy cattle have been intensively studied in recent years. In contrast to this study, previous work mostly presented single-camera systems, often based on 2D cameras, with a limited range of applications. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and for transforming the 3D data into a joint coordinate system have already been implemented. This study introduced the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method was explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The images' high-frequency parts, reconstructed from wavelet decompositions using the haar and the biorthogonal 1.5 wavelets, were statistically analyzed with regard to the effects of image fore- or background and of cows' or persons' surfaces. Furthermore, binary classifiers based on the local high frequencies were implemented to decide whether a pixel belongs to the image foreground and whether it is located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of Area Under the receiver operating characteristic Curve (AUC). The classification by species showed maximal AUC values of 0.69.
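
    The exact decomposition and classifier parameters are not given in the abstract; the sketch below, using the PyWavelets package, shows one plausible way to extract the local high-frequency content of a depth map with the bior1.5 (or haar) wavelet and threshold it into a binary per-pixel classifier. The depth map, level count, and threshold percentile are assumptions.

```python
import numpy as np
import pywt

depth = np.random.rand(424, 512)  # stand-in for a Kinect-recorded depth map

# 2-level 2D decomposition; "haar" and "bior1.5" are the wavelets named above
coeffs = pywt.wavedec2(depth, "bior1.5", level=2)

# Reconstruct only the detail (high-frequency) parts: zero the approximation
coeffs_hf = [np.zeros_like(coeffs[0])] + list(coeffs[1:])
highfreq = pywt.waverec2(coeffs_hf, "bior1.5")
highfreq = highfreq[: depth.shape[0], : depth.shape[1]]  # trim padding

# Toy binary classifier on local high-frequency magnitude
mask = np.abs(highfreq) > np.percentile(np.abs(highfreq), 90)
```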

  8. Inspecting rapidly moving surfaces for small defects using CNN cameras

    NASA Astrophysics Data System (ADS)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected in real time for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm. Up to now, complex eddy-current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz on line cameras, far beyond what available devices offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.
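
    The defect detection itself runs on the analogue CNN chip; as a purely digital stand-in that conveys the idea (high-pass filtering a frame and flagging connected regions of strong response), one might write something like the following. The frame, filter scale, and threshold are illustrative assumptions, not the demonstrator's actual processing.

```python
import numpy as np
from scipy import ndimage

frame = np.random.rand(144, 176)  # stand-in for one 176 x 144 pixel frame

# High-pass filter: subtract a smoothed version to emphasize small features
smooth = ndimage.gaussian_filter(frame, sigma=3)
highpass = frame - smooth

# Flag connected regions of strong response as defect candidates
candidates = np.abs(highpass) > 5 * highpass.std()
labels, n_defects = ndimage.label(candidates)
print(f"{n_defects} defect candidates in this frame")
```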

  9. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data is taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video-stream bandwidth reduction algorithms, so that the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
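
    As a software illustration of the bandwidth-reduction idea (the paper's FPGA algorithms are only summarized here), the sketch below forwards only above-threshold pixels as (x, y, value) events into a bounded FIFO, instead of streaming every pixel to the host. The threshold and FIFO depth are assumed values.

```python
from collections import deque
import numpy as np

THRESHOLD = 200            # assumed defect threshold on 8-bit pixel values
fifo = deque(maxlen=4096)  # software stand-in for the on-board FIFO

def reduce_line(y: int, line: np.ndarray) -> None:
    # Emit only pixels exceeding the threshold, as (x, y, value) events.
    for x in np.flatnonzero(line > THRESHOLD):
        fifo.append((int(x), y, int(line[x])))

line = np.random.randint(0, 256, size=2048, dtype=np.uint8)
reduce_line(0, line)
print(f"{len(fifo)} events instead of {line.size} pixels")
```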

  10. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and Comparison with ISS-LIS and GLM

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Lang, Timothy J.; Leake, Skye; Runco, Mario, Jr.; Blakeslee, Richard J.

    2017-01-01

    Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, the capabilities of these cameras for contributing to scientific analysis of the Earth and near-space environment have been overlooked. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration.

  11. Reducing flicker due to ambient illumination in camera captured images

    NASA Astrophysics Data System (ADS)

    Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.

    2013-02-01

    The flicker artifact dealt with in this paper is the scanning distortion that arises when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line by line within a frame, so time differences exist between the lines, and the captured image is corrupted by the change in illumination; this phenomenon is called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal, which is the key to compensating for the flicker artifact. The average signal of the non-content area, taken along the scan direction, has local extrema where the peaks of the flicker occur. The locations of these extrema provide the information needed to estimate the distribution of pixel intensities that would be observed if the flicker artifact were absent. The flicker-reduced images compensated by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
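
    A minimal sketch of the estimation step described above, under assumed details (a side margin as the non-content area, a moving-average baseline, and a multiplicative per-row correction), might look as follows:

```python
import numpy as np
from scipy.signal import find_peaks

img = np.random.rand(1080, 1920)  # stand-in for a rolling-shutter capture
margin = img[:, :64]              # assumed non-content area at the left edge

profile = margin.mean(axis=1)     # average along the scan direction
peaks, _ = find_peaks(profile)    # local extrema mark the flicker peaks

# Crude correction: divide out the flicker relative to a smooth baseline
baseline = np.convolve(profile, np.ones(51) / 51, mode="same")
gain = baseline / np.maximum(profile, 1e-6)
corrected = img * gain[:, None]   # apply per-row gain across the frame
```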

  12. ARC-1986-A86-7011

    NASA Image and Video Library

    1986-01-14

    Range: 2.52 million kilometers (1.56 million miles) P-29481B/W Voyager 2 returned this photograph with all nine known Uranus rings visible, from a 15-sec exposure through the narrow-angle camera. The rings are quite dark and very narrow. The most prominent and outermost of the nine, Epsilon, is seen at top. The next three in toward Uranus, called Delta, Gamma, and Eta, are much fainter and narrower than the Epsilon ring. Then come the Beta and Alpha rings, and finally the innermost grouping, known simply as the 4, 5, and 6 rings. The last three are very faint and are at the limit of detection for the Voyager camera. Uranus' rings range in width from about 100 km (60 mi) at the widest part of the Epsilon ring to only a few kilometers for most of the others. This image was processed to enhance narrow features; the bright dots are imperfections on the camera detector. The resolution scale is about 50 km (30 mi).

  13. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements as well as of the microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype which was developed based on a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.
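
    The abstract does not name the performance metrics used; two metrics commonly applied to spectral reconstruction quality, root-mean-square error and spectral angle, are sketched below purely for illustration. The band count is an assumption.

```python
import numpy as np

def rmse(ref: np.ndarray, rec: np.ndarray) -> float:
    # Root-mean-square error between reference and reconstructed spectra
    return float(np.sqrt(np.mean((ref - rec) ** 2)))

def spectral_angle(ref: np.ndarray, rec: np.ndarray) -> float:
    # Angle between the two spectra in radians; 0 means identical shape
    cos = np.dot(ref, rec) / (np.linalg.norm(ref) * np.linalg.norm(rec))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

ref = np.random.rand(31)                # reference spectrum, 31 assumed bands
rec = ref + 0.01 * np.random.randn(31)  # simulated reconstruction
print(rmse(ref, rec), spectral_angle(ref, rec))
```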

  14. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    PubMed

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.

  15. Realtime system for GLAS on WHT

    NASA Astrophysics Data System (ADS)

    Skvarč, Jure; Tulloch, Simon; Myers, Richard M.

    2006-06-01

    The new ground-layer adaptive optics system (GLAS) on the William Herschel Telescope (WHT) on La Palma will be based on the existing natural guide star adaptive optics system called NAOMI. Part of the new development is a new control system for the tip-tilt mirror. Instead of the existing system, built around a custom multiprocessor computer made of C40 DSPs, this system uses an ordinary PC running Linux. It is equipped with a high-sensitivity L3 CCD camera with an effective readout noise of nearly zero. The software for the tip-tilt system is being completely redeveloped in order to make use of object-oriented design, which should facilitate easier integration with the rest of the observing system at the WHT. The modular design of the system allows incorporation of different centroiding and loop-control methods. To test the system off-sky, we have built a laboratory bench using an artificial light source and a tip-tilt mirror. We present results on the quality of tip-tilt correction using different centroiding algorithms and different control-loop methods at different light levels. This system will serve as a testing ground for a transition to a completely PC-based real-time control system.
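
    As an illustration of one of the simplest centroiding algorithms such a modular system could incorporate (the paper's specific algorithms are not listed here), the sketch below computes the intensity-weighted center of mass of a guide-star image patch; the offsets from the patch center would serve as error signals for the tip-tilt mirror loop.

```python
import numpy as np

def centroid(patch: np.ndarray) -> tuple[float, float]:
    # Intensity-weighted center of mass (x, y) of an image patch
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return float((xs * patch).sum() / total), float((ys * patch).sum() / total)

patch = np.zeros((16, 16))
patch[7:10, 8:11] = [[1, 2, 1], [2, 5, 2], [1, 2, 1]]  # synthetic spot
cx, cy = centroid(patch)
tip_err, tilt_err = cx - 7.5, cy - 7.5  # offsets from the patch center
```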

  16. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... by the Tribal gaming regulatory authority, will be acceptable. (b) Game play standards. (1) The... procedures that ensure the correct calling of numbers selected in the bingo game. (5) Each ball shall be.... For speed bingo games not verified by camera equipment, each ball drawn shall be verified by a person...

  17. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... by the Tribal gaming regulatory authority, will be acceptable. (b) Game play standards. (1) The... procedures that ensure the correct calling of numbers selected in the bingo game. (5) Each ball shall be.... For speed bingo games not verified by camera equipment, each ball drawn shall be verified by a person...

  18. Chapter 5: Research on the ferruginous pygmy-owl in Southern Texas: Methodology and applications

    Treesearch

    Glenn A. Proudfoot; Jody L. Mays; Sam L. Beasom

    2000-01-01

    Using broadcast conspecific calls, nest boxes, miniature video cameras, a fiberoptic stratascope, and radio-telemetry, researchers from the Caesar Kleberg Wildlife Research Institute conducted studies to assess the viability and profile the natural history of ferruginous pygmy-owls in Texas (Mays 1996, Proudfoot 1996a, Proudfoot and Beasom 1996, Proudfoot and Beasom 1997...

  19. Sensing Place: Embodiment, Sensoriality, Kinesis, and Children behind the Camera

    ERIC Educational Resources Information Center

    Mills, Kathy; Comber, Barbara; Kelly, Pippa

    2013-01-01

    This article is a call to literacy teachers and researchers to embrace the possibility of attending more consciously to the senses in digital media production. Literacy practices do not occur only in the mind, but involve the sensoriality, embodiment, co-presence, and movement of bodies. This paper theorises the sensorial and embodied dimension of…

  20. ISS EarthKam: Taking Photos of the Earth from Space

    ERIC Educational Resources Information Center

    Haste, Turtle

    2008-01-01

    NASA is involved in a project based on the International Space Station (ISS) and an Earth-focused camera called EarthKAM, through which schools, and ultimately students, are allowed to remotely program the EarthKAM to take images. Here the author describes how EarthKAM was used to help middle school students learn about biomes and develop their…
